The assignment is to use the \textbf{Game of Life} (GOL)\footnote{See \cite{gol-wiki} and the references therein. Try playing it at \href{https://playgameoflife.com/}{playgameoflife.com}} as the basis for exploring parallel programming. This document contains: \begin{enumerate} \item[\S\ref{sec:gol}:] An overview of the Game of Life. If you are already familiar with GOL, please continue to \S\ref{sec:code}. \item[\S\ref{sec:code}:] An overview of the code repository. \item[\S\ref{sec:tasks}:] The assessable tasks. \end{enumerate} \section{Game of Life}\label{sec:gol} The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970~\cite{gol}. It is a zero-player game, with the game's evolution determined completely by its initial state. A player interacts by creating an initial configuration and observing how it evolves. The game is simple: there is a 2D grid where each cell in the grid can either be alive or dead at any one time. The state of the system at the next time step is determined from the number of nearest neighbours each cell has at the present time (see Fig.~\ref{fig:gol}). The evolution of the grid resembles cells moving on a plane. \begin{figure}[!h] \centering \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0010.png}} \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0019.png}} \includegraphics[width=0.1\textwidth, valign=B]{figs/neighbour} \caption{The Game of Life. An example of a large grid at two different times (separated by 9 steps) is shown in the left and middle panels. Here live cells are in black, dead cells are in white. The evolution of the grid is governed by the birth and death of cells, where a cell's state is defined by its 8 neighbouring cells (right panel).} \label{fig:gol} \end{figure} \par The rules for evolving a system to the next time level are as follows: \begin{itemize} \item {\color{ForestGreen}new cell born} if the cell has exactly three live neighbours - \textbf{\color{ForestGreen} ready to breed}. \item {\color{CornflowerBlue}state of cell unchanged} if the cell has exactly two live neighbours - \textbf{\color{CornflowerBlue}content}. \item {\color{Red}dies or stays dead} if the cell has $<2$ live neighbours - \textbf{\color{Red}lonely}. \item {\color{Red}dies or stays dead} if the cell has $>3$ live neighbours - \textbf{\color{Red}overcrowding}. \end{itemize} \par These rules are set so as to generate an equilibrium between living and dead cells. One can explore how altering the rules can affect the evolution of the system. Relaxing the {\color{ForestGreen}ready to breed} rule while allowing large amounts of {\color{Red}overcrowding} can give rise to grids that quickly fill up with cells only to collapse entirely.
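To make the update rule concrete, below is a minimal, illustrative sketch of a single update step written in Python (the assignment codes themselves are in \texttt{C} and \texttt{Fortran}); the function name and the fixed dead boundary are our own choices and are not part of the provided repository. \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{python}
def step(grid):
    """Apply one Game of Life update to a 2D list of 0/1 cells.
    Cells outside the grid are treated as dead (a fixed boundary)."""
    n, m = len(grid), len(grid[0])
    new_grid = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            # count the 8 neighbours, ignoring out-of-range cells
            neighbours = sum(grid[i + di][j + dj]
                             for di in (-1, 0, 1) for dj in (-1, 0, 1)
                             if (di, dj) != (0, 0)
                             and 0 <= i + di < n and 0 <= j + dj < m)
            if neighbours == 3:
                new_grid[i][j] = 1           # ready to breed
            elif neighbours == 2:
                new_grid[i][j] = grid[i][j]  # content: state unchanged
            else:
                new_grid[i][j] = 0           # lonely or overcrowded
    return new_grid
\end{minted} \end{minipage} \end{center}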
\subsection{The ``Life'' in GOL} Using this fairly simple set of rules, some fairly complex structures can emerge\footnote{A nice list of the types of structures can be found on \href{https://en.wikipedia.org/wiki/Conway\%27s_Game_of_Life}{Wikipedia}.}. The game is in fact Turing complete and can simulate a universal constructor or any other Turing machine. We highlight some common structures found in a typical game in Fig.~\ref{fig:gol-structures}. \begin{figure}[!h] \centering \includegraphics[height=0.06\textwidth, valign=c]{figs/Game_of_life_loaf.png} \includegraphics[height=0.17\textwidth, valign=c]{figs/Game_of_life_pulsar.png} \includegraphics[height=0.06\textwidth, valign=c]{figs/Game_of_life_animated_glider.png} \includegraphics[height=0.11\textwidth, valign=c]{figs/Game_of_life_glider_gun.svg.png} \caption{Structures in GOL: We show an example of still life (left); oscillating life (middle); travelling life, so-called spaceships (right); and a constructor that generates glider spaceships (bottom).} \label{fig:gol-structures} \end{figure} GOL may appear chaotic and can remain chaotic for long periods of time (even indefinitely) before settling into a combination of still lifes, oscillators, and spaceships. GOL is in fact undecidable: given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear from that initial configuration. This is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input. \par The undecidability of the game depends on the rules and the dimensionality of the grid. For instance, increasing the dimension from 2 to 3 means that the rule for equilibrium goes from 2 neighbours out of 8 to 2 out of 26. You can try exploring the impact rules have on the game using the provided code repository discussed in the following section. \subsection{Grid-based simulations} \label{sec:gol:gridsims} At a basic level, \textbf{GOL} is a simulation of the temporal evolution of a 2-D grid. The rules governing the ``life'' can be arbitrarily complex and the grid need not be just two dimensional. If one changes a cell from just being alive or dead to having a complex internal state, increases the complexity of the rules and adds one more dimension, \textit{you would have gone from GOL to any number of grid-based codes that are used to model physical processes}. This simple grid-based program should be viewed as the first step in writing more complex grid-based codes. \par As an example, ENZO (\cite{enzo}, repository located \href{https://github.com/enzo-project/enzo-dev}{here}) is an Adaptive Mesh Refinement (AMR) code using Cartesian coordinates which can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, and ideal and non-ideal magnetohydrodynamics. GOL would be equivalent to running ENZO in two dimensions where the ``fluid'' moves on the 2-D surface based on highly simplified equations. \section{Code Repository}\label{sec:code} The code that forms the basis of the assignment is found on the course website. This repository contains the following: \begin{itemize} \item basic files such as a README and License; \item a \texttt{Makefile} set up to compile codes with a variety of different flags and features. For a quick tutorial on GNU Make, see \href{https://www.gnu.org/software/make/}{here}. Students are expected to be familiar with Make and the associated commands. Please familiarise yourself with this material. \item Source code in \texttt{src/*c} (\texttt{C/C++}) and \texttt{src/*f90} (\texttt{Fortran}). We expect students to be familiar with \texttt{C/C++} and/or \texttt{Fortran}. All the source code provided is written in \texttt{C} and \texttt{Fortran}. Fortran source codes are denoted by \texttt{\_fort.f90} endings.
Please feel free to change the makefile to use a \texttt{C++} compiler when compiling the \texttt{C} codes and use \texttt{C++} syntax if you so desire. \item documentation and \LaTeX\ source code in \texttt{docs} \item a script to produce movies from png files when visualising with the PNG library. \end{itemize} To familiarise yourself with the contents, you can see what make options are available and browse the source directory. \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{sh} make allinfo # provides all the information of the commands listed below make configinfo # provides the different options available make makecommands # what you can make by typing these commands make buildinfo # current compilers used \end{minted} \end{minipage} \end{center} You'll note that the makefile is set up to accept command line arguments that can set compiler families such as \texttt{GCC} and \texttt{CRAY}, as shown when you type \texttt{make configinfo}. Try the following: \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{sh} # use the GNU CC family of compilers, gcc, g++, & gfortran AND # compile the serial version of the code, both C and Fortran sources. make COMPILERTYPE=GCC cpu_serial # use the intel family of compilers and compile the openmp source codes (if present) make COMPILERTYPE=INTEL cpu_openmp # use Cray compilers (useful for systems like magnus) and just compile the C sources make COMPILERTYPE=CRAY cpu_openmp_cc \end{minted} \end{minipage} \end{center} Note that the code does require recent compilers, so on machines such as zeus make sure to load a recent gcc compiler using commands such as \texttt{module swap gcc gcc/8.3.0}. \subsection{Source} The common function calls and their interfaces are provided in \texttt{src/common.*}, which provide functionality such as visualising GOL, getting timing information, etc. We recommend students have a look at the prototypes in \texttt{src/common.h}. The key functions are: \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{c} /// visualise the game of life void visualise(enum VisualiseType ivisualisechoice, int step, int *grid, int n, int m); /// generate IC void generate_IC(enum ICType ic_choice, int *grid, int n, int m); /// UI void getinput(int argc, char **argv, struct Options *opt); ///GOL stats prototype void game_of_life_stats(struct Options *opt, int steps, int *current_grid); /// GOL prototype void game_of_life(struct Options *opt, int *current_grid, int *next_grid, int n, int m); \end{minted} \end{minipage} \end{center} The \texttt{Fortran} source of \texttt{src/common\_fort.f90} provides a module with the same set of interfaces and subroutines. \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{fortran} module gol_common ! ascii visualisation subroutine visualise_ascii(step, grid, n, m) ! png visualisation subroutine visualise_png(step, grid, n, m) ! no visualisation subroutine visualise_none() ! visualisation routine subroutine visualise(ivisualisechoice, step, grid, n, m) ! generate random IC subroutine generate_rand_IC(grid, n, m) ! generate IC subroutine generate_IC(ic_choice, grid, n, m) ! UI subroutine getinput(opt) ! get some basic timing info real*8 function init_time() !
get the elapsed time relative to start subroutine get_elapsed_time(start) end module \end{minted} \end{minipage} \end{center} The main program consists of (here we just highlight the \texttt{C} source as the \texttt{Fortran} source is similar): \begin{center} \begin{minipage}{0.95\textwidth} \begin{minted}[frame=single,]{c} int main(int argc, char **argv) { struct Options *opt = (struct Options *) malloc(sizeof(struct Options)); getinput(argc, argv, opt); // allocate some memory ... // generate initial conditions generate_IC(opt->iictype, grid, n, m); // start GOL while loop while(current_step != opt->nsteps){ visualise(opt->ivisualisetype, current_step, grid, n, m); game_of_life_stats(opt, current_step, grid); game_of_life(opt, grid, updated_grid, n, m); // swap current and updated grid tmp = grid; grid = updated_grid; updated_grid = tmp; current_step++; } // free mem ... } \end{minted} \begin{comment} \begin{minted}{fortran} ! Fortran code program GameOfLife use gol_common implicit none ... ! get input call getinput(opt) ! allocate some mem ... ! generate IC call generate_IC(opt%iictype, grid, n, m) do while (current_step .ne. nsteps) call visualise(opt%ivisualisetype, current_step, grid, n, m); call game_of_life_stats(opt, current_step, grid); call game_of_life(opt, grid, updated_grid, n, m); ! update current grid grid(:,:) = updated_grid(:,:) current_step = current_step + 1 end do ! deallocate mem .. end program GameOfLife \end{minted} \end{comment} \end{minipage} \end{center} Implementations of the {\color{blue}\texttt{game\_of\_life}} and {\color{blue}\texttt{game\_of\_life\_stats}} functions can be found in \texttt{src/01\_cpu\_serial.c} (\& \texttt{src/01\_cpu\_serial\_fort.f90}). Familiarise yourself with these functions (based on your language of choice). \par Running the code is relatively simple. \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{sh} make cpu_serial ./bin/01_cpu_serial "Usage: ./bin/01_gol_cpu_serial <grid height> <grid width> " "[<nsteps> <IC type> <Visualisation type> <Rule type> <Neighbour type>" "<Boundary type> <stats filename>]" ./bin/01_cpu_serial 500 500 4 # run a 500x500 grid for 4 steps giving \end{minted} \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0000.png}} \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0001.png}} \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0002.png}} \fbox{\includegraphics[width=0.2\textwidth, valign=c]{figs/GOL.grid-500-by-500.step-0003.png}} \\ PNG visualisation of output from GOL. Filled black squares are living cells. \end{minipage} \end{center} \subsection{Expanded Rules} Although not required for the assignment, you can alter the rules of the game, changing the number of neighbours that decide certain states, and even expand the number of neighbours used and change the boundary conditions. A sample of the serial code structured to easily alter the rules for the game is \texttt{src/01\_gol\_serial\_expanded.c}. \par This particular version of the code is an excellent starting point for implementing far more complex rules, such as those governing the flow of a fluid or the diffusion of a gas across a surface.
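As a hint of how such expanded rules can be organised, the following is a small, illustrative Python sketch of a rule parameterised by the neighbour counts that trigger birth and survival (Conway's game corresponds to birth at 3 and survival at 2 or 3); the parameter names are our own and do not correspond to the options of \texttt{src/01\_gol\_serial\_expanded.c}. \begin{center} \begin{minipage}{0.95\textwidth} \small \begin{minted}[frame=single,]{python}
def apply_rule(alive, neighbours, birth=(3,), survive=(2, 3)):
    """Generalised GOL rule: return the next state (0 or 1) of a single cell.
    birth   -- neighbour counts at which a dead cell becomes alive
    survive -- neighbour counts at which a live cell stays alive"""
    if alive:
        return 1 if neighbours in survive else 0
    return 1 if neighbours in birth else 0

# Conway's rules ("B3/S23"):
assert apply_rule(0, 3) == 1  # ready to breed
assert apply_rule(1, 2) == 1  # content
assert apply_rule(1, 1) == 0  # lonely
assert apply_rule(1, 4) == 0  # overcrowding
\end{minted} \end{minipage} \end{center}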
{ "alphanum_fraction": 0.7536913245, "avg_line_length": 59.9691629956, "ext": "tex", "hexsha": "bd80961bce91e5da531f08e00f4421b0ddbe7967", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3905c84f28ddff848761b2a1cd9f2118ae6eabd4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dgsaf/game-of-life", "max_forks_repo_path": "docs/assignment/hpc-curtin-gol-info.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3905c84f28ddff848761b2a1cd9f2118ae6eabd4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dgsaf/game-of-life", "max_issues_repo_path": "docs/assignment/hpc-curtin-gol-info.tex", "max_line_length": 628, "max_stars_count": null, "max_stars_repo_head_hexsha": "3905c84f28ddff848761b2a1cd9f2118ae6eabd4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dgsaf/game-of-life", "max_stars_repo_path": "docs/assignment/hpc-curtin-gol-info.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3634, "size": 13613 }
\documentclass{article} \usepackage{tabularx} \usepackage{amsmath} \usepackage{graphicx} \usepackage[top = 2cm, bottom = 2cm, right = 2cm, left = 2cm]{geometry} \usepackage{cite} \usepackage[final]{hyperref} \usepackage{listings} \hypersetup{ colorlinks=true, linkcolor=blue, citecolor=blue, filecolor=magenta, urlcolor=blue } \begin{document} \title{Practicle 6\\Random with CUDA} \date{06/02/19} \maketitle \begin{abstract} \end{abstract} \section{Microfacet models for material} If you take a look on a material, like a metal for example, it seem perfectly flat, but, if you look very close to the surface you may see micro details. Like a light beam, each rays sent intersect the surface and are reflected or refracted. When a ray intersect a surface he lost energy. When a ray is sent between two object it will be reflected many time and lost a lot of energy. This part of the scene will be dark. This kind of shadow is called ambient occlusion and don't need light. \begin{figure}[h] \centering \includegraphics[scale=0.47]{figures/absorption.png} \caption{Absorption} \end{figure} For this practical we'll work on three different materials. We can define a material by a scatter function. The aim of this function is to evaluate the reflected or refracted ray from the initial ray. Because we want to use a stochastic methode to send ray we have to now how random works in CUDA. \section{An other layout} This section is optional. It's just to test an other way to organize pixel access on kernel. For the moment we read a pixel and we jump to the next one using the stride. Our rendering kernel will become very heavy so we need to use one thread per pixel. we'll divide our image into small image (8 pixels x 8 pixels). Create two variable in the host code for the number of blocks per grid and the number of threads per block. \begin{lstlisting} dim3 blocks(width/8+1, height/8+1); dim3 threads(8, 8); \end{lstlisting} We can now run our kernel using this variable. \begin{lstlisting} render<<<blocks, threads>>>(/*...*/); \end{lstlisting} Inside the kernel we can compute x and y in this way: \begin{lstlisting} int x = threadIdx.x + blockIdx.x * blockDim.x; int y = threadIdx.y + blockIdx.y * blockDim.y; int index = y*width + x; \end{lstlisting} If you don't use a power of two for your image resolution, make sure the x and y values aren't greater than the width and the height. \section{Generate random number} To generate random number we need the cuRAND library. For that we should include curand\_kernel.h. This library provides an efficient generation of pseudorandom numbers. That means a sequence of numbers who have the same property as a truly random sequence. This sequence is generate by a deterministic algorithm. \subsection{One random number} If you need only one random number per pixel you can use the curand host library. Add curand.lib into your link dependencies and the include curand.h. For this section we'll generate number in the device and send the data to the host. On the host code create the generator: \begin{lstlisting} curandGenerator_t generator; curandCreateGenerator(&generator, CURAND_RNG_PSEUDO_DEFAULT); curandSetPseudoRandomGeneratorSeed(generator, 4242ULL); \end{lstlisting} If needed more function are available here (https://docs.nvidia.com/cuda/curand/host-api-overview.html\#generator-types).\\ As usual allocate a buffer of float for each pixel on the host and device memory. We can now generate the number on the device using curandGenerateUniform for example. 
\begin{lstlisting} curandGenerateUniform(generator, deviceFloatArray, width*height); \end{lstlisting} (Optional) If you need the buffer you can grab it from the device. \begin{lstlisting} cudaMemcpy(hostFloatArray, deviceFloatArray, width*height*sizeof(float), cudaMemcpyDeviceToHost); \end{lstlisting} As with the background clear color, write a kernel that draws the random numbers. \begin{lstlisting} //... image[i] = Vector3(devData[index], devData[index], devData[index]); //... \end{lstlisting} You should get this kind of image. For information, noise like this is a building block of many image generation algorithms. \begin{figure}[h] \centering \includegraphics[scale=0.47]{figures/random.png} \caption{Uniform noise} \end{figure} \newpage \subsection{Random sequence} A pseudorandom sequence has to be initialized in a kernel. This operation can be very heavy, so we will check errors carefully. First, we need a curandState per pixel. Use cudaMalloc to allocate one curandState per pixel in device memory. Then run a new kernel named random\_initialization. In this kernel you have to call curand\_init. \begin{lstlisting} curand_init(seed, index, 0, &state[index]); \end{lstlisting} The seed is a kind of id for the pseudorandom sequence generation. If you use the same seed, you will have the same sequence. If you want different random numbers per pixel with the same seed, we just have to define a subsequence (second parameter). The third parameter is the offset.\\ In your application, disable all kernels except this one and run your exe using cuda-memcheck. If errors occur, use the index as the offset instead of the subsequence. (There are some bugs with this library with CUDA 10 for the moment.) \begin{lstlisting} curand_init(seed, 0, index, &state[index]); \end{lstlisting} If it's good, reactivate your kernels.\\ Now, if you need a random value, you just need to pass the curandState* buffer to your kernel and use the curand\_uniform function for pseudorandom uniform numbers. \begin{lstlisting} float number = curand_uniform(&states[index]); // 0.f < number <= 1.f \end{lstlisting} \newpage \section{Diffuse material} In your rendering kernel, call a device function named computeColor. In this function we'll compute the color for a sent ray. \begin{lstlisting} __device__ Vector3 // The output color computeColor( const Ray& ray, // A ray sent Sphere** spheres, // The world unsigned int nbSphere, curandState* randomStates, // The buffer of states int nbRebound, const Vector3& backgroundColor); \end{lstlisting} In the first part of the function, compute the intersection of the ray with the spheres. We'll use this struct for storing the information. \begin{lstlisting} struct HitInformation { bool hit_; float t_; Vector3 intersection_; Vector3 normal_; }; \end{lstlisting} Your hit function can be simplified: \begin{lstlisting} if (root0>0.001f && root0<hit.t_) { hit.hit_ = true; hit.t_ = root0; hit.intersection_ = ray.getOrigin()+ray.getDirection()*hit.t_; hit.normal_ = hit.intersection_-center_; hit.normal_.normalize(); return true; } \end{lstlisting} If the sent ray hits a sphere, you can recursively compute the scattered ray and the attenuation color. \begin{lstlisting} if (hit.hit_) { Ray scattered; Vector3 attenuation; if (nbRebound>0) { //...
return attenuation * computeColor( scattered, spheres, nbSphere, randomStates, --nbRebound, backgroundColor); } else { return Vector3(0.f, 0.f, 0.f); } } else { return backgroundColor; } \end{lstlisting} For a diffuse material, we use a simplified model of Lambertian reflectance. Each ray will be reflected randomly when it hits a sphere. \begin{figure}[h] \centering \includegraphics[scale=0.47]{figures/diffuse.png} \caption{Lambertian} \end{figure} \begin{lstlisting} Vector3 target = hit.intersection_ + hit.normal_ + randomSphereUnitVector(states); scattered = Ray(hit.intersection_, (target-hit.intersection_)); attenuation = SphereColor_; \end{lstlisting} To generate a vector in the unit sphere you just have to generate a vector with random values, subtract $(0.5, 0.5, 0.5)$, and normalize it.\\ At this point you can run your recursive function 10 times, but not 50 times, because your call stack is too small. You can modify the size of your call stack using cudaDeviceSetLimit(cudaLimitStackSize, value). The problem is that if you increase your call stack, you reduce the number of registers available for the computation. The best way is to make the algorithm iterative. \begin{lstlisting} Ray r = ray; Vector3 cur_attenuation = Vector3(1.f, 1.f, 1.f); for (int i = 0; i<nbRebound; ++i) { //... if (hit.hit_) { Ray scattered; Vector3 target = hit.intersection_ + hit.normal_ + randomSphereUnitVector(states); scattered = Ray(hit.intersection_, (target-hit.intersection_)); cur_attenuation *= SphereColor_; r = scattered; // continue with the scattered ray } else { return cur_attenuation*backgroundColor; } } return Vector3(0.f, 0.f, 0.f); \end{lstlisting} You should get similar results: \begin{figure}[h] \centering \includegraphics[scale=0.47]{figures/oneRay.png} \caption{Ambient Occlusion with one ray} \end{figure} \subsection{TDR} Timeout Detection \& Recovery (TDR) is the Windows feature that detects response problems from the graphics card. If your application runs a kernel for more than two seconds, Windows resets the card. The aim of this feature is to avoid a system freeze when a segmentation fault or an infinite loop appears in a kernel.\\ \subsection{Multiple rays per pixel} To improve the result, we need to send more than one ray per pixel. To avoid a TDR we'll run the rendering kernel 100 times instead of adding a loop inside the kernel. \begin{lstlisting} for (int i = 0; i<nbRayPerPixel; ++i) { rayTraceGPU<<<blocks, threads>>>(/*...*/); } \end{lstlisting} Currently I use my image for storing the result. Now I need to use the image to store intermediate results, so I have to clear my image to black. After that, each call of the kernel has to accumulate its result into the image. \begin{figure}[h] \centering \includegraphics[scale=0.47]{figures/result.png} \caption{Ambient Occlusion with 100 rays} \end{figure} All the computations we did are done in a linear space. This space is perfect for computation but wrong for our perception, so we need to apply a gamma correction. The common correction is the sRGB curve (gamma 2.2), but for this exercise we just need a gamma 2 correction. In writeBufferAsBMP, compute the sqrt of the color before casting it. \begin{lstlisting} unsigned char pixelc[3]{ unsigned char(255.99f*sqrt(buffer[x+w*y].b())), unsigned char(255.99f*sqrt(buffer[x+w*y].g())), unsigned char(255.99f*sqrt(buffer[x+w*y].r())), }; \end{lstlisting} \end{document}
{ "alphanum_fraction": 0.7578162348, "avg_line_length": 43.0411522634, "ext": "tex", "hexsha": "452b8a613db93827544fd04f19bbf3a6125886cb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e161a6fd1138ea39b08673c2c9cb04a8126332ce", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "robinfaurypro/cuda_lessons", "max_forks_repo_path": "06_practicle/06_practicle.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e161a6fd1138ea39b08673c2c9cb04a8126332ce", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "robinfaurypro/cuda_lessons", "max_issues_repo_path": "06_practicle/06_practicle.tex", "max_line_length": 490, "max_stars_count": null, "max_stars_repo_head_hexsha": "e161a6fd1138ea39b08673c2c9cb04a8126332ce", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "robinfaurypro/cuda_lessons", "max_stars_repo_path": "06_practicle/06_practicle.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2687, "size": 10459 }
\section{Implementation Details} In this section, we introduce and describe the optimizations applied to baseline models after our replication experiments, in case someone is curious about the abrupt changes in results. These optimizations are mainly for the CodeSearchNet Challenge, and they do not affect the conclusions stated in the main text. Besides, considering the difficulty of exploring new ideas, we turn to our own Keras implementation instead of using the official implementation supplied by the challenge. There might be some minor influences on the concrete values of experimental results, but we have checked that the results from the two implementations are very close to each other. The most important thing is that our conclusions are consistent. \begin{table}[!ht] \centering \caption{Accuracy Comparison of Optimization Operations} ~\\ \label{tab:app-optimization} % \resizebox{\linewidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}*{Encoder} & \multirow{2}*{Operation} & \multicolumn{2}{c}{MRR Score} & \multicolumn{2}{c}{NDCG Score} \\ \cline{3-4}\cline{5-6} & & Python & Ruby & Python & Ruby \\ \midrule & - & 0.6432 & 0.3210 & 0.2994 & 0.1294 \\ & tuning & 0.7907 & 0.5486 & 0.2236 & 0.1652 \\ NBoW & indexing & 0.7878 & 0.5454 & 0.4111 & 0.2938 \\ & cleaning & 0.7968 & 0.5696 & 0.3945 & 0.2923 \\ & unifying & 0.7978 & 0.5607 & 0.3893 & 0.2955 \\ \bottomrule \end{tabular} % } \end{table} As shown in Table \ref{tab:app-optimization}, we introduce four types of optimizations: hyper-parameter tuning, an exact indexing strategy, data cleaning, and code unifying. In most cases, they bring improvements to both MRR scores and NDCG scores over the Python and Ruby corpora. \subsection{Model Tuning} Considering the difficulty of working in the given searching framework, we turn to a relatively simpler Keras implementation. Besides, we make some changes to the hyper-parameters, such as enlarging the embedding size, adjusting the batch size, and switching the optimizer. Meanwhile, we introduce some negative optimizations, like removing the Dropout layer, to reduce the training time in the expectation of doing more experiments. \subsection{Indexing Strategy} When we convert code data and query data to embeddings in the shared vector space, we utilize a nearest neighbor algorithm to index semantically similar code snippets for each natural language query. In the given vanilla implementation, ANNOY, an approximate nearest neighbor algorithm, is used for indexing. Compared with exact nearest neighbor algorithms, such as KNN, approximate nearest neighbor search is commonly used for handling a massive amount of high-dimensional data, but at the cost of a certain loss of accuracy. Therefore, an exact nearest neighbor algorithm would be more suitable for the given neural architecture. We turn to the KNN algorithm to index the 1000 nearest code embeddings for each query embedding and find that the accuracy scores improve significantly. \subsection{Data Cleaning} In the CodeSearchNet Corpus, raw code snippets contain numerous digits, literals, and punctuation marks, but these are merely noise tokens. Besides, for different programming languages, code tokens may have varying levels of semantic quality. For example, Go produces redundant error handling, Java requires explicit type declarations, and Ruby supports functional programming. The former two may contain lots of valueless tokens and the latter may contain numerous variables named with meaningless identifiers.
Based on these ideas, we do data cleaning over code tokens, such as removing punctuation and character tokens and replacing digits and literals with corresponding descriptive tags. For the given evaluation set, some queries are like phrase queries. We believe only keywords are needed for the task of semantic code search; some observations \cite{Yan2020AreTC} indicate that a keyword query is more ideal than a phrase query. In contrast, query data in the CodeSearchNet Corpus are usually of low quality. There even exist lots of URLs, HTML tags, and other noise tokens, like JavaDoc keywords. To make sure each query is like a ``keyword query'', not a ``phrase query'', we mainly correct irregular writing, and also remove widespread noise tokens, such as punctuation, digits, stopwords as well as character tokens. \paragraph{Keyword Query} A keyword query usually contains several keywords that need to be strictly matched with code snippets, like ``image encoding base64''. \paragraph{Phrase Query} A phrase query is usually in the form of a sentence or phrase, like ``How to convert an image to base64 encoding?''. \subsection{Code Unifying} To better utilize the multi-language corpus, we implement the idea of unifying code, which makes data from different programming languages more alike. There are three rules which do not bring obvious improvements for uni-language learning but are expected to benefit multi-language learning. The first rule is converting various control-flow statements and identifiers to corresponding descriptive tags, such as converting for-loop and while-loop statements to the ``loop'' tag, and converting concrete strings to the ``literal'' tag. The second rule is to remove unnecessary reserved words, such as type declarations like ``int'' and ``boolean'', modifier keywords like ``public'' and ``abstract'', and functional keywords like ``async'' and ``await''. The third rule is to unify the expression of various semantically similar tokens, such as unifying ``function'', ``program'', ``define'', ``module'' as ``module'', and unifying ``xor'', ``not'', ``in'', ``==='' as ``judge''. Even though all this processing work could be done automatically, or equivalently by a powerful neural network with a massive amount of training data, we do it with the expectation of reducing the requirements on the complexity of models or on the quality and quantity of training data.
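To make the kind of cleaning and unifying described above concrete, here is a small, illustrative Python sketch; the regular expressions, tag names, and synonym table are simplified examples of our own and not the exact rules used in our pipeline.
\begin{verbatim}
import re

# Illustrative synonym table for the unifying rule (example tokens only).
SYNONYMS = {"function": "module", "program": "module", "define": "module",
            "xor": "judge", "not": "judge", "in": "judge"}

def clean_and_unify(tokens):
    """Clean a list of code/query tokens and unify semantically similar ones."""
    out = []
    for tok in tokens:
        tok = tok.lower()
        if re.fullmatch(r"\W+", tok) or len(tok) == 1:
            continue                               # drop punctuation and characters
        if re.fullmatch(r"\d+(\.\d+)?", tok):
            out.append("<number>")                 # replace digits with a tag
        elif tok.startswith('"') or tok.startswith("'"):
            out.append("<literal>")                # replace literals with a tag
        else:
            out.append(SYNONYMS.get(tok, tok))     # unify similar tokens
    return out

print(clean_and_unify(["Define", "area", "(", "r", ")", ":", "3.14", "*", "r"]))
# -> ['module', 'area', '<number>']
\end{verbatim}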
{ "alphanum_fraction": 0.7854521188, "avg_line_length": 108.9818181818, "ext": "tex", "hexsha": "0eddc7bdc8a00fd886f10b704ecfcd70bf3979f2", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-12-29T11:13:56.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-16T17:00:03.000Z", "max_forks_repo_head_hexsha": "9469fbd287f3168da0fc5261159bb31157abb1c6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jianguda/mrncs", "max_forks_repo_path": "doc/info/code_details.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "9469fbd287f3168da0fc5261159bb31157abb1c6", "max_issues_repo_issues_event_max_datetime": "2022-02-10T06:26:56.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-26T10:40:46.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jianguda/mrncs", "max_issues_repo_path": "doc/info/code_details.tex", "max_line_length": 1223, "max_stars_count": 16, "max_stars_repo_head_hexsha": "9469fbd287f3168da0fc5261159bb31157abb1c6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jianguda/mrncs", "max_stars_repo_path": "doc/info/code_details.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-19T07:30:12.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-16T14:23:23.000Z", "num_tokens": 1308, "size": 5994 }
\documentclass[a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{float} \usepackage{amsmath} \usepackage{indentfirst} \usepackage{hyperref} \usepackage[margin=2.8cm]{geometry} \title{Workflow documentation} \author{Gustave Li} \date{Last updated: July 2021} \begin{document} \maketitle \section{Introduction} The Carotenoid-Porphyrin-\(\text{C}_{60}\) (\(\text{CPC}_{60}\)) triad molecule consists of a porphyrin covalently linked with a carotenoid and a \(\text{C}_{60}\) molecule (Figure~\ref{fig:CPC60}). The carotenoid is the excited-state electron donor and the \(\text{C}_{60}\) serves as the electron acceptor, while the porphyrin acts as a bridge to separate the two parts and transfer electrons. The molecule mimics the natural photosynthetic center, which utilizes photons to initiate a complex series of electronic transitions to achieve a high-energy charge-separated state. It absorbs UV-visible light and produces a charge-separated state (\(\text{CT}_{2}\)) where an electron is transferred from C to \(\text{C}_{60}\), producing a large dipole moment of 150 D. Due to its outstanding performance in photoinduced charge transfer, it has great potential in organic solar cells. \begin{figure}[H] \centering \includegraphics[width=0.75\linewidth]{projects/Gustave_Li/Docs/Triad.png} \caption{The \(\text{CPC}_{60}\) molecule} \label{fig:CPC60} \end{figure} However, Manna et al. reported that the triad spatial conformation strongly affects the process of charge separation and concluded that the linear conformations have better charge separation efficiency than the bent conformations \cite{MannaArun}. Olguin et al. further investigated the effect of structural changes on \(\text{CPC}_{60}\) charge transfer states; they summarized several factors influencing charge transfer, including the donor-acceptor distance and the distances and torsions between the three components \cite{OlguinMarco}. In summary, the charge transfer process in \(\text{CPC}_{60}\) is very conformation-dependent; the molecular structure has a rather dramatic effect on the charge transfer performance. Thus, finding the optimal structure for charge transfer is critical. Thanks to the development of computer science, tens of thousands of possible \(\text{CPC}_{60}\) molecules can be generated based on molecular dynamics. Considering the heavy computing load of the calculation and the complexity of the \(\text{CPC}_{60}\) molecule, it is unrealistic to calculate the charge transfer rate for all the molecules. Researchers are currently working in different directions to address this issue. Brian and co-workers proposed novel formulations for calculating the charge transfer rate, which reduce the computational cost by up to 80\% \cite{BrianDomi}. In this project, we aim to make use of machine learning to cluster the many molecules into different groups. By taking the cluster centers as representative conformations, we expect the computational cost to decrease greatly while maintaining as much structural information as possible. \pagebreak \section{Triad molecule visualization} The triad molecule trajectory was loaded with the Python \texttt{mdtraj} module, and the \texttt{nglview} module was applied for visualization. The ball-and-stick representation of the 100,000 triad molecules in the dataset was obtained, which is similar to that in Figure~\ref{fig:CPC60}. All the 100,000 molecules follow the same C-P-\(\text{C}_{60}\) sequence, but the overall conformation varies from bent to linear. Torsion around the carotenoid-porphyrin and porphyrin-\(\text{C}_{60}\) linkages also exists.
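The following is a minimal sketch of this loading and visualization step with \texttt{mdtraj} and \texttt{nglview}; the file names and atom indices are placeholders rather than the actual paths and indices used in this project.
\begin{verbatim}
import mdtraj as md
import nglview as nv

# Load the triad trajectory (file names are placeholders).
traj = md.load('triad_trajectory.dcd', top='triad_topology.pdb')
print(traj)  # reports the number of frames and atoms

# Interactive ball-and-stick view inside a Jupyter notebook.
view = nv.show_mdtraj(traj)
view.clear_representations()
view.add_representation('ball+stick')
view

# Example geometric descriptor: the distance between a pair of atoms
# (0-based atom indices are placeholders), returned in nanometres.
distances = md.compute_distances(traj, [[33, 128]])
\end{verbatim}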
\section{Descriptors} The goal of this project is to cluster the thousands of triad molecules, so different descriptors are needed to represent the molecular features for the machine learning algorithms to work on. \subsection{Key atoms} The key atoms chosen for the descriptors are \(\text{C}_{33}\), \(\text{C}_{21}\), \(\text{C}_{61}\), \(\text{C}_{65}\), \(\text{C}_{66}\), \(\text{C}_{69}\), \(\text{C}_{60}\), \(\text{C}_{89}\), \(\text{C}_{95}\), \(\text{C}_{96}\), \(\text{C}_{128}\) and \(\text{N}_{6}\) (Figure \ref{fig:key_atoms}). \begin{figure}[H] \centering \includegraphics[width=0.75\linewidth]{projects/Gustave_Li/Docs/Key-atoms.jpg} \caption{Key atoms for descriptors} \label{fig:key_atoms} \end{figure} \subsection{Geometric descriptors} \begin{table}[ht] \centering \caption{Definitions of geometric descriptors} \begin{tabular}{c|c} \hline \hline \textbf{Name} & \textbf{Description} \\ \hline \hline EuclidianDist\_1 & The Euclidean distance between \(\text{C}_{33}\) \& \(\text{C}_{128}\) \\ Angle\_1 & The angle between atoms \(\text{C}_{33}\)-\(\text{C}_{96}\)-\(\text{C}_{128}\) \\ Angle\_2 & The angle between atoms \(\text{C}_{33}\)-\(\text{C}_{69}\)-\(\text{C}_{96}\) \\ Angle\_3 & The angle between atoms \(\text{C}_{69}\)-\(\text{C}_{96}\)-\(\text{C}_{128}\) \\ Dihedral\_1 & The dihedral between atoms \(\text{C}_{21}\)-\(\text{C}_{61}\)-\(\text{C}_{66}\)-\(\text{C}_{65}\) \\ Dihedral\_2 & The dihedral between atoms \(\text{C}_{89}\)-\(\text{N}_{6}\)-\(\text{C}_{95}\)-\(\text{C}_{96}\) \\ RMSD\_Linear & RMSD of the conformation relative to the Linear triad \\ RMSD\_Bent & RMSD of the conformation relative to the Bent triad \\ \hline \hline \end{tabular} \label{tab:descriptors} \end{table} \pagebreak \bibliographystyle{unsrt} \bibliography{references.bib} \end{document}
{ "alphanum_fraction": 0.7338129496, "avg_line_length": 71.2820512821, "ext": "tex", "hexsha": "fa451c45ec96c1f60e680f3c375ccc5bbd620f47", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ed1dded4aae34e1e5987170ec1aebba390d4c1e6", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "dominikusbrian/durf_hq", "max_forks_repo_path": "projects/Gustave_Li/Docs/Workflow_documentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ed1dded4aae34e1e5987170ec1aebba390d4c1e6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "dominikusbrian/durf_hq", "max_issues_repo_path": "projects/Gustave_Li/Docs/Workflow_documentation.tex", "max_line_length": 883, "max_stars_count": null, "max_stars_repo_head_hexsha": "ed1dded4aae34e1e5987170ec1aebba390d4c1e6", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "dominikusbrian/durf_hq", "max_stars_repo_path": "projects/Gustave_Li/Docs/Workflow_documentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1505, "size": 5560 }
\section{$k$-Means Clustering Problem} The $k$-means clustering problem is an unsupervised problem that can be described as follows. \begin{definition}[$k$-Means Clustering Problem\footnote{The most common form of the definition.}] \label{alo: def1} Given an observation set \begin{equation} X=\{x_{1}, x_{2}, \dots, x_{N} | x_{i} \in\mathbb{R}^n\} \end{equation} the goal is to arrange the $N$ observation instances into $k$ sets, $\mathcal{S} = \{S^{1}, S^{2}, \dots, S^{k}\}$, and choose a set of centers, $\mathcal{C} = \{c^{1}, c^{2}, \dots, c^{k}\}$, so as to minimize the \textbf{Objective Function}: \begin{equation} \phi_{X}(\mathcal{C}) = \sum_{i = 1}^{N} d^{2}(x_{i}, \mathcal{C}) \end{equation} where \begin{equation} d^{2}(x_{i}, \mathcal{C}) = \min_{l = 1, \dots, k} ||x_{i} - c^{l}||_{2}^{2} \end{equation} Note that $\phi_{X}$ is additive in $X$, i.e., if $X = X_{1}\cup X_{2}$ with $X_{1}\cap X_{2}=\emptyset$, then $\phi_{X} = \phi_{X_{1}} + \phi_{X_{2}}$. There is also an equivalent optimization version of the $k$-means problem. \end{definition} \begin{definition}[$k$-Means Clustering Problem\footnote{Optimization form.}] \label{alo: def2} Minimize the \textbf{Objective Function}: \begin{equation}{\label{eq:1}} \phi(W, c) = {\sum_{l=1}^{k}\sum_{i=1}^{N}} w_{li} \cdot ||x_{i} - c^{l}||^{2}_{2} \end{equation} where \begin{equation}{\label{eq:2}} c = (c^{1}, c^{2}, \dots, c^{k}), c^{l} \in\mathbb{R}^{n} \end{equation} \begin{equation} W =(w_{li}) \in M_{k\times N}(\mathbb{R}) \end{equation} \begin{equation} w_{li} = \begin{cases} 1,&\text{$x_{i} \to S^{l}$},\\ 0,&\text{otherwise}. \end{cases} \end{equation} s.t. \begin{equation} \sum_{l=1}^{k} w_{li} = 1, i = 1, 2, \dots, N; \sum_{i=1}^{N}w_{li}\geq 1, l = 1, 2, \dots, k. \end{equation} \begin{equation}{\label{eq:3}} w_{li} \in \{0, 1\}, i = 1, 2, \ldots, N, l = 1, 2, \dots, k. \end{equation} \end{definition} For convenience, we call $W$ the \textbf{assignment matrix}. In the form of Definition \ref{alo: def2}, we have transformed the $k$-means problem into an \textbf{Optimization Problem} consisting of the \textbf{Objective Function} (\ref{eq:1}) and the \textbf{Constraints} (\ref{eq:2})-(\ref{eq:3}). \begin{theorem}[Definition \ref{alo: def1} is equivalent to Definition \ref{alo: def2}] \end{theorem} \begin{proof} Firstly, the formula (\ref{eq:1}) can be rewritten as $\phi(W, c) = {\sum_{i=1}^{N}\sum_{l=1}^{k}} w_{li} \cdot ||x_{i} - c^{l}||^{2}_{2}$. It's trivial that given $x_{i}$, $$\min_{l = 1, \dots, k} ||x_{i} - c^{l}||_{2}^{2} \leq \sum_{l=1}^{k} w_{li} \cdot ||x_{i} - c^{l}||^{2}_{2}$$ where $\sum_{l=1}^{k} w_{li} = 1$. Moreover, the optimizer of Definition \ref{alo: def1} has a corresponding assignment matrix $\hat{W}$ in Definition \ref{alo: def2}, so that $$\sum_{i = 1}^{N}\min_{l = 1, \dots, k}||x_{i} - c^{l}||_{2}^{2} = {\sum_{i=1}^{N}\sum_{l=1}^{k}} \hat{w}_{li} \cdot ||x_{i} - c^{l}||^{2}_{2}\geq \min_{W}{\sum_{l=1}^{k}\sum_{i=1}^{N}} w_{li} \cdot ||x_{i} - c^{l}||^{2}_{2}$$ \end{proof} \begin{remark} The $k$-means problem is an optimization problem with a nonlinear, non-convex objective function and discrete constraints. More directly, it can be written as follows, \begin{equation} \min \phi(W, c) \quad s.t. \ (\ref{eq:2})-(\ref{eq:3}). \end{equation} \end{remark} Next, we give a theorem from Shokri and M. A. (1984)\cite{ref01}. \begin{theorem}{\label{alo:convergence}} The $k$-means problem has local minimum points. \end{theorem} For completeness, we rewrite the proof here, giving some definitions and lemmas and adding some details to the previous proof.
\begin{definition} Consider the set $\Omega$ given by \begin{equation} \Omega = \{W \in M_{k\times N}(\mathbb{R}) : \sum_{l=1}^{k} w_{li} = 1, i = 1, 2, \dots, N; \sum_{i=1}^{N}w_{li}\geq 1; w_{li} \geq 0 \}. \end{equation} \end{definition} \begin{lemma} \label{alo:lem1} The set $\Omega$ is convex, and the extreme points of $\Omega$ satisfy the constraints $$w_{li} \in \{0, 1\}, i = 1, 2, \dots, N, l = 1, 2, \dots, k.$$ \end{lemma} \begin{proof} $\forall W^{1}, W^{2} \in \Omega$ and $0 \leq \lambda \leq 1$, \begin{equation} \sum_{l=1}^{k} [\lambda w_{li}^{1}+(1-\lambda)w_{li}^{2}] = \lambda\sum_{l=1}^{k} w_{li}^{1}+(1-\lambda)\sum_{l=1}^{k}w_{li}^{2} = 1. \end{equation} \begin{equation} \sum_{i=1}^{N} [\lambda w_{li}^{1}+(1-\lambda)w_{li}^{2}] = \lambda\sum_{i=1}^{N} w_{li}^{1}+(1-\lambda)\sum_{i=1}^{N}w_{li}^{2} \geq 1. \end{equation} On the one hand, for any $W$ satisfying the constraint $w_{li} = 0$ or $1$, if $\exists W^{l_{1}}, W^{l_{2}} \in \Omega$, s.t. \[W = \lambda_{0}W^{l_{1}}+(1-\lambda_{0})W^{l_{2}}, 0 < \lambda_{0} <1\] or \[w_{li}=\lambda_{0} w_{li}^{l_{1}}+(1-\lambda_{0})w_{li}^{l_{2}}\] \begin{itemize} \item If $w_{li} = 0$, then \[w_{li}^{l_{1}}=w_{li}^{l_{2}}=0\] \item if $w_{li} = 1$, then \[w_{li}^{l_{1}}=w_{li}^{l_{2}}=1.\] \end{itemize} That is to say, $W^{l_{1}}= W^{l_{2}}$. The points satisfying $w_{li} \in \{0, 1\}$ must therefore be extreme points of $\Omega$. On the other hand, each extreme point of $\Omega$ is associated with a basis of the constraints in $\Omega$. Hence, each basic variable will have value $1$ and nonbasic variables will be zeros. This completes the proof. \end{proof} \begin{remark} By a counting argument, there are at most $M$ extreme points of $\Omega$, \[M := k^{N} - \sum_{i=1}^{k-1} \binom{k}{i} \cdot (k-i)^{N} \] and we denote the extreme points of $\Omega$ by $\{W^{1}, W^{2}, \dots, W^{M}\}$. \end{remark} \begin{definition}[Reduced Problem of $k$-means Problem]\label{alo: reduce} The reduced problem of the $k$-means problem is given by minimizing $\Phi(W)$, \begin{equation} \Phi (W) = \min_{c \in \mathbb{R}^{nk}} \phi(W, c), \quad s.t. W \in \Omega \end{equation} \end{definition} \begin{lemma} The function $\Phi(W)$ is concave on $\Omega$, and the reduced problem attains its local minimum values at extreme points of $\Omega$. \end{lemma} \begin{proof} First, we prove that $\Phi$ is concave. $\forall W^{{l}_{1}}, W^{{l}_{2}}\in \Omega$ and $0 \leq \gamma \leq 1$, \begin{equation} \begin{aligned} \Phi(\gamma W^{{l}_{1}}+(1-\gamma)W^{{l}_{2}}) & =\min_{c \in\mathbb{R}^{nk}} \phi(\gamma W^{{l}_{1}}+(1-\gamma) W^{{l}_{2}}, c)\\ &= \min_{c \in \mathbb{R}^{nk}} [\gamma\phi(W^{{l}_{1}}, c)+(1-\gamma)\phi (W^{{l}_{2}}, c)]\\ &\geq \gamma \min_{c \in \mathbb{R}^{nk}} \phi(W^{{l}_{1}}, c)+(1-\gamma)\min_{c \in \mathbb{R}^{nk}} \phi(W^{{l}_{2}}, c)\\ &=\gamma \Phi(W^{{l}_{1}})+(1-\gamma)\Phi(W^{{l}_{2}}). \end{aligned} \end{equation} Second, we show that $\Phi(W)$ attains its smaller values at extreme points. Fix $\hat W \in \Omega$ which is not an extreme point. Denoting all extreme points of $\Omega$ as above, there exists a constant vector $\alpha=(\alpha_{1}, \alpha_{2}, \dots, \alpha_{M})$, s.t.
\begin{equation} {\hat W}= \sum_{j=1}^{M} \alpha_{j} W^{j}, \quad \text{where } \sum_{j=1}^{M}\alpha_{j}=1; \ 0 \leq \alpha_{j} <1 \end{equation} \begin{equation} \begin{aligned} \Phi({\hat W}) &= \Phi(\sum_{j=1}^{M} \alpha_{j} W^{j}) \geq \sum_{j=1}^{M} \alpha_{j}\Phi(W^{j}) \geq \min_{1 \leq j \leq M}\Phi(W^{j}) \end{aligned} \end{equation} Hence a local minimum of $\Phi(W)$ must be attained at an extreme point of $\Omega$. \end{proof} \begin{lemma} The reduced problem of the $k$-means problem and the $k$-means problem are equivalent. \end{lemma} \begin{proof} It suffices to prove that \begin{equation}\label{eq: 4} {\arg\min} \phi(W, c) \iff \arg\min_{W \in \Omega} \Phi(W) \end{equation} Suppose $(W^{*}, c^{*}) = {\arg\min} \phi(W, c)$ and $W_{*} = \arg\min_{W \in \Omega} \Phi(W)$. First, given $\tilde W \in \Omega$, we find the minimizing centers $c_{*}= (c_{*}^{1}, \dots, c_{*}^{k})$ in the reduced problem. \begin{equation} \min_{c \in \mathbb{R}^{nk}} \phi(\tilde W, c) = \min_{c \in \mathbb{R}^{nk}} {\sum_{l=1}^{k}\sum_{i=1}^{N}} \tilde{w_{li}} \cdot ||x_{i} - c^{l}||^{2}_{2} \end{equation} It's obvious that \begin{equation} c_{*}^{l} = \frac{\sum_{i=1}^{N} \tilde{w_{li}}\cdot x_{i}}{\sum_{i=1}^{N} \tilde{w_{li}}} \end{equation} In particular, taking $\tilde W = W_{*}$ and writing $c_{*}$ for its minimizing centers, we have \[\phi(W_{*}, c_{*}) = \Phi(W_{*})\] Then, we prove that $\phi (W^{*}, c^{*}) = \phi(W_{*}, c_{*})$. ``$\phi (W^{*}, c^{*}) \leq \phi(W_{*}, c_{*})$'' follows easily from the definition of $(W^{*}, c^{*})$ as the global minimizer. ``$\phi (W^{*}, c^{*}) \geq \phi(W_{*}, c_{*})$'': for all $c \in \mathbb{R}^{nk}$, $\phi(W^{*}, c^{*}) \leq \phi(W^{*}, c)$, and fixing $W^{*}$ with minimizing centers $c_{*}(W^{*})$, \[\phi(W^{*}, c_{*}(W^{*})) \leq \phi(W^{*}, c)\] Hence $\phi(W^{*}, c_{*}(W^{*})) = \phi(W^{*}, c^{*})$. But $\phi(W^{*}, c_{*}(W^{*})) = \Phi(W^{*}) \geq \Phi(W_{*}) = \phi(W_{*}, c_{*})$, so we conclude that \[\phi(W^{*}, c^{*}) \geq \phi(W_{*}, c_{*})\] Further, one can show that the minimizers can be chosen so that $(W^{*}, c^{*}) = (W_{*}, c_{*})$. \end{proof} This completes the proof of Theorem \ref{alo:convergence}. The $k$-means problem is a mixed integer program with a nonlinear objective, which is NP-hard. The difficulty consists of two parts: first, the constraints are discrete; second, the objective function is nonlinear and non-convex. In the next section we list heuristic algorithms for the $k$-means problem and analyze the pros and cons of each. \section{Heuristic Algorithm of $k$-Means Problem}\label{alo:Heuristic} Though the $k$-Means problem is NP-hard, there are many efficient heuristic algorithms to solve it. The most common one is the standard $k$-Means algorithm. To evaluate different algorithms, we introduce a measure of the quality of a clustering. \begin{definition}[$\alpha$-approximation] Let $\phi^{*}$ be the objective value of the optimal $k$-Means clustering. A set of centers $\mathcal{C}$ is an $\alpha$-approximation if \begin{equation} \phi_{X}(\mathcal{C}) \leq \alpha \phi^{*} \end{equation} \end{definition} \subsection{Basic Knowledge} \begin{lemma} \label{sum} Let $S$ be a set of points with center $x^{*}$, and let $z$ be an arbitrary point.
Then, \begin{equation} \sum_{x \in S}||x - z||^{2} = \sum_{x \in S} ||x - x^{*}||^{2} + |S|\cdot ||z - x^{*}||^{2} \end{equation} \end{lemma} \begin{lemma}[Power-mean Inequality] Let $a_{1}, \dots, a_{m}\in\mathbb{R}$, then \begin{equation} \sum_{i = 1}^{m} a_{i}^{2} \geq \frac{1}{m} \left(\sum_{i = 1}^{m}a_{i}\right)^{2} \end{equation} \label{powermean} \end{lemma} \begin{lemma} \label{optle} Let $S$ be an arbitrary cluster of the optimal clustering, and let $\mathcal{C}$ consist of just one center, chosen uniformly at random from $S$. Then \begin{equation} E[\phi_{S}(\mathcal{C})] = 2 \phi^{opt}_{S} \end{equation} \end{lemma} \subsection{The Standard $k$-Means Algorithm/Lloyd's Algorithm} The standard $k$-Means algorithm can be written as Algorithm \ref{alo:kmeans}. Lloyd's algorithm is not a good clustering algorithm in terms of efficiency or quality: its running time can be exponential in the worst case and its solution is only locally optimal. Nevertheless, the unbeatable speed and simplicity of $k$-Means earn it a good reputation in industry. %%%%%%% \begin{algorithm} \caption{The Standard $k$-Means/Lloyd's Algorithm} \label{alo:kmeans} \textbf{Input:} $X=\lbrace x_{1},...,x_{N}| x_{i}\in\mathbb{R}^n\rbrace$ and $k$\\ \textbf{Output:} The cluster centers $c^{1},...,c^{k}\in\mathbb{R}^n$\\ \begin{algorithmic}[1] \State Arbitrarily choose the initial centers, $c^{1},c^{2},...,c^{k}$. \State \textbf{Repeat} for $1\leq l \leq k$, $$ S^{l} = \{x: ||x-c^{l}||^{2} \leq ||x-c^{j}||^{2}, \forall 1\leq j \leq k\} $$ Update $$ c^{l}=\cfrac{\sum_{x\in S^{l}}x}{|S^{l}|}. $$ \textbf{Until} $S^{l}$ does not change for any $l=1,2,\dots,k$. \end{algorithmic} \end{algorithm} From figure \ref{alo:kmeans basic} below, we can clearly see the process of the standard $k$-Means ($k = 2$) algorithm in $\mathbb{R}^{2}$. Panel $(a)$ shows the data set $X$. We initialize two cluster centers, which are marked in red and blue in panel $(b)$. Panels $(c), (d), (e)$ show the iterations of the above algorithm. We stop at the state of panel $(f)$. \begin{figure}[htbp] \centering{\includegraphics[width=10cm]{cluster4.png}} \caption{A concrete $k$-Means clustering process} \label{alo:kmeans basic} \end{figure} Shokri et al. put forward the partial convergence of the standard $k$-Means algorithm. They also described how to obtain a local minimum of the $k$-Means clustering problem under certain given conditions with the Minkowski metric, but how to find the global minimum is still an open problem. \begin{theorem}[Partial Convergence] The standard $k$-Means algorithm converges to a partial optimal solution of the $k$-Means clustering problem in a finite number of iterations. \end{theorem} Here we give no specific proof, only the definition of a partial optimal solution of the $k$-Means clustering problem. \begin{definition} A point $(W^{*}, c^{*})$ is a partial optimal solution of the $k$-Means clustering problem if it satisfies the following: \begin{equation} \phi(W^{*}, c^{*}) \leq \phi(W^{*}, c), \forall c \in \mathbb{R}^{nk}; \end{equation} and \begin{equation} \phi(W^{*}, c^{*}) \leq \phi(W, c^{*}), \forall W \in M_{k \times N}(\mathbb{R}). \end{equation} \end{definition}
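As a concrete, illustrative companion to Algorithm \ref{alo:kmeans} (and to the restatement that follows), here is a minimal NumPy sketch of Lloyd's iteration; the random initialization, stopping rule, and empty-cluster handling are simplifications of our own.
\begin{verbatim}
import numpy as np

def lloyd(X, k, n_iter=100, seed=0):
    """Minimal Lloyd iteration: X is an (N, n) array; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # arbitrary initial centers
    for _ in range(n_iter):
        # assignment step: nearest center for every point
        dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist2.argmin(axis=1)
        # update step: mean of each cluster (keep the old center if a cluster empties)
        new_centers = np.array([X[labels == l].mean(axis=0) if np.any(labels == l)
                                else centers[l] for l in range(k)])
        if np.allclose(new_centers, centers):                # assignments converged
            break
        centers = new_centers
    return centers, labels
\end{verbatim}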
We give another form of the standard $k$-Means algorithm, which can be used to prove the \textbf{Partial Convergence} theorem. \begin{algorithm} \caption{The Standard $k$-Means (another form)} \label{alo:kmeans2} \textbf{Input:} Data set $X=\lbrace x_{1},...,x_{N}| x_{i}\in\mathbb{R}^n\rbrace$ and $k$ (and a tolerance Tol)\\ \textbf{Output:} The cluster centers $c = ( c^{1},..., c^{k})$\\ \begin{algorithmic}[1] \State Choose the \textbf{initial} centers arbitrarily, $c_{0}=(c_{0}^{1},c_{0}^{2},...,c_{0}^{k})$. \State \textbf{Repeat:} for $j \geq 0$ $$ w_{li}^{j+1} = I(l = \arg\min_{1 \leq l \leq k} ||x_{i}-c_{j}^{l}||_{2}^{2}) $$ Update $$ c_{j+1}^{l} = \frac{\sum_{i=1}^{N} w_{li}^{j+1} \cdot x_{i}}{\sum_{i=1}^{N} w_{li}^{j+1} } $$ \textbf{Stop} criterion $$ W^{j+1} = W^{j}$$ or ($||c_{j+1}-c_{j}|| \leq Tol$) \State \textbf{Output} $$c = c_{j}$$ \end{algorithmic} \end{algorithm} \subsection{The $k$-Means++ Algorithm} The standard $k$-Means algorithm is highly sensitive to the initialization of the cluster centers. It is easy to construct situations in which the standard $k$-Means algorithm converges to a local minimum that is arbitrarily bad compared to the optimal solution. Such an example is shown in figure \ref{alo:algorithm ratio} for $k=3$, where $x<y<z$. \begin{figure}[htbp] \centering{\includegraphics[width=10cm]{algo.png}} \caption{High approximation ratio} \label{alo:algorithm ratio} \end{figure} If we initialize the cluster centers as $(1, 2, 3)$, we can get the optimal centers shown in the middle with the standard $k$-Means algorithm. \begin{equation} \phi^{opt} = (\frac{x}{2})^{2}+(\frac{x}{2})^{2}=(\frac{x^{2}}{2}) \end{equation} Unfortunately, if we initialize the centers as $(2, 3, 4)$, it is easy to verify that the centers shown at the bottom are the solution, a bad solution. \begin{equation} \phi_{heu} = (\frac{y}{2})^{2}+(\frac{y}{2})^{2}=(\frac{y^{2}}{2}) \end{equation} We can see that two different initializations lead the algorithm to converge to very different solutions. The following advanced algorithm provides a better initialization of the clustering and gives an $O(\log k)$-approximation solution. \begin{theorem} \label{competitive} If $\mathcal{C}$ is the clustering result of $k$-Means++, then the corresponding potential function $\phi$ satisfies \begin{equation} E[\phi] \leq 8(\ln k + 2)\phi^{*} \end{equation} \end{theorem} The $k$-means++ algorithm addresses the initialization problem by specifying a procedure to initialize the cluster centers before proceeding with the standard $k$-means optimization iterations. With the $k$-means++ initialization, the algorithm is guaranteed to find a solution that is $ O(\log k) $ competitive to the optimal $k$-means solution. It improves the running time of the standard $k$-Means algorithm (i.e. Lloyd's algorithm), and the quality of the final solution. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{$k$-Means++} \label{alo:kmean++} \State $\mathcal{C}\leftarrow$ Choose $c_1$ with a uniform distribution among the data points ($c_1$ is the first center). % uniform distribution \State \textbf{For $l \leq k$}, Sample $x$ from $X$ with probability $\frac{d^{2}(x, \mathcal{C})}{\phi_{X}(\mathcal{C})}$. $\mathcal{C} \leftarrow \mathcal{C}\cup\{x\}$ \State Use the standard $k$-Means algorithm. \end{algorithmic} \end{algorithm} Here we describe how the probability sampling can be implemented. These days, we tend to use computers as the mechanism for generating random numbers as the basis for selection. One popular way to pick $x$ in the above algorithm is to: \begin{enumerate} [1.]
\item Draw a random number $r \in [0, \phi_{X}(\mathcal{C})]$; \item Iterate over the points $x \in X$, subtracting $r = r - d^{2}(x, \mathcal{C})$, until $r \leq 0$; \item Choose the above $x$ as the next seed. \end{enumerate} Next we give the proof of Theorem \ref{competitive}, following its creators' idea \cite{kmeansplus}. \begin{lemma} \label{corele} Let $\mathcal{C}$ be an arbitrary set of centers, choose $u$ ``uncovered'' clusters of the optimal clustering, and let $X^{u}$ be the union of the points in these clusters. Define $X^{c}$ as $X - X^{u}$. If we add $t \leq u$ random centers to $\mathcal{C}$, as in steps $\textbf{1}$ and $\textbf{2}$ of Algorithm \ref{alo:kmean++}, let $\tilde{\mathcal{C}}$ denote the resulting set, and $\tilde{\phi}$ denote the corresponding potential function. Then, \begin{equation} E[\phi_{X}(\tilde{\mathcal{C}})] \leq \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}}\right)\cdot (1+H_{t}) + \frac{u-t}{u}\cdot \phi_{X^{u}}(\mathcal{C}) \end{equation} where \begin{equation} H_{t} = \begin{cases}& 1 + \frac{1}{2} + \cdots + \frac{1}{t}, \ \text{$t > 0$}\\ & 0,\ \text{ $t = 0$} \end{cases} \end{equation} %proof \begin{proof} For convenience, we define probability events as follows. \begin{equation} \begin{cases} & \mathscr{A} = \{\text{add the first center from $X$}\}\\ & \mathscr{A}_{1} = \{\text{add the first center from $X^{u}$}\}\\ &\mathscr{A}_{2}= \{\text{add the first center from $X^{c}$}\}\\ &\mathscr{A}_{1} \cap \mathscr{A}_{2} = \emptyset, \mathscr{A}_{1} \cup \mathscr{A}_{2} = \mathscr{A} \end{cases} \end{equation} One can split the whole expectation into two branches. \begin{equation} E[\phi_{X}(\tilde{\mathcal{C}})] = Prob_{\mathscr{A}_{1}} \cdot E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}] + Prob_{\mathscr{A}_{2}} \cdot E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{2}] \end{equation} Firstly, when $t = 0$ and $u > 0$, we have $\phi_{X}(\tilde{\mathcal{C}}) = \phi_{X}(\mathcal{C})$; since $1 + H_{t} = 1$, it naturally holds that \begin{equation} E[\phi_{X}(\tilde{\mathcal{C}})] \leq \phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}} + \phi_{X^{u}}(\mathcal{C}) \end{equation} If $t = u = 1$, we have \begin{equation} \label{eqlemma} \begin{aligned} E[\phi_{X^{u}}(\tilde{\mathcal{C}})|\mathscr{A}_{1}] = &\sum_{x_{0} \in X^{u}} \frac{d^{2}(x_{0}, \mathcal{C})}{\sum_{x \in X^{u}} d^{2}(x, \mathcal{C})} \sum_{x \in X^{u}} \min \{d^{2}(x, \mathcal{C}), ||x - x_{0}||^{2}\} & \tilde{\mathcal{C}} = \mathcal{C} \cup \{x_{0}\} \\ \leq & \sum_{x_{0} \in X^{u}}\frac{ \left(\frac{2}{|X^{u}|} \sum_{x \in X^{u}} d^{2}(x, \mathcal{C}) + \frac{2}{|X^{u}|} \sum_{x\in X^{u}} ||x - x_{0}||^{2} \right)}{\sum_{x \in X^{u}} d^{2}(x, \mathcal{C})} \cdot & Lemma\ \ref{powermean}\\ &\sum_{x \in X^{u}} \min \{d^{2}(x, \mathcal{C}), ||x - x_{0}||^{2}\} \\ \leq & \frac{4}{|X^{u}|} \sum_{x_{0} \in X^{u}} \sum_{x\in X^{u}} ||x-x_{0}||^{2} \\ = & 4\cdot \frac{1}{|X^{u}|} \sum_{x_{0} \in X^{u}} \left(\sum_{x\in X^{u}} ||x - c^{u}||^{2} + |X^{u}|\cdot ||x_{0} - c^{u}||^{2}\right)& Lemma \ \ref{sum}\\ = & 8\phi^{opt}_{X^{u}} & Lemma \ref{optle} \end{aligned} \end{equation} where $c^{u}$ is the true center of $X^{u}$ in the optimal clustering.
Then \begin{equation}\begin{aligned} E[\phi_{X}(\tilde{\mathcal{C}})] = &\frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}] + \frac{\phi_{X^{c}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{2}]\\ \leq &\frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \left(8\phi^{opt}_{X^{u}} +\phi_{X^{c}}(\mathcal{C}) \right) + \frac{\phi_{X^{c}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \phi_{X}(\mathcal{C})\\ \leq & 2\phi_{X}(\mathcal{C}) + 8\phi^{opt}_{X^{u}} \end{aligned} \end{equation} %%%%induction By using induction method, it is sufficient to suppose that the result holds for the cases that $(t-1, u)$ and $(t-1, u-1)$. With similar analysis method as the case of $t = u =1$, if the event $\mathscr{A}_{2}$ happen with probability, $\frac{\phi_{X^{c}}(\mathcal{C})}{\phi_{X}(\mathcal{C})}$, then we still need choose $t-1$ centers with $u$ being unchanged, i.e., \begin{equation} E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{2}] \leq \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}}\right)\cdot (1+H_{t-1}) + \frac{u-t + 1}{u}\cdot \phi_{X^{u}}(\mathcal{C}) \end{equation} Supposed that $\mathscr{A}_{1}$ happened, more specifically, let the first center be chosen from one clustering set $S \subseteq X^{u}$ and $Prob_{x}$ denote the probability of choosing $x \in S$ as the first center. Besides, we define the event $\mathscr{A}_{1}^{s} = \{\text{add the first center from $S$}\}$ and we have its probability of $\frac{\phi_{S}(\mathcal{C})}{\phi_{X}(\mathcal{C})}$. We can conclude similar conclusion as the equation \ref{eqlemma}. $$ \sum_{x \in S} Prob_{x} \cdot \phi_{S}(\mathcal{C}) \leq 8\phi^{opt}_{S} $$ \begin{equation} \begin{aligned} &E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}^{s}] \\ = & \sum_{x \in S} Prob_{x} \cdot \left( \left(\phi_{X^{c}}(\mathcal{C}) + \phi_{S}(\mathcal{C}) + 8\phi^{opt}_{X^{u} - S}\right)\cdot (1+H_{t-1}) + \frac{u-t}{u-1}\cdot \phi_{X^{u}-S}(\mathcal{C})\right)\\ \leq & \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u} }\right)\cdot (1+H_{t-1}) + \frac{u-t}{u-1}\cdot \phi_{X^{u}-S}(\mathcal{C}) \end{aligned} \end{equation} \begin{equation} \begin{aligned} &\frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \cdot E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}] \\ = & \sum_{S \subseteq X^{u}} \frac{\phi_{S}(\mathcal{C})}{\phi_{X}(\mathcal{C})} E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}^{s}]\\ = & \frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \cdot \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u} }\right)\cdot (1+H_{t-1}) + \frac{u-t}{u-1}\cdot \sum_{S \subseteq X^{u}} \frac{\phi_{S}(\mathcal{C})}{\phi_{X}(\mathcal{C})}\cdot \phi_{X^{u} - S}(\mathcal{C})\\ \leq & \frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \cdot \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u} }\right)\cdot (1+H_{t-1}) \\ & + \frac{u-t}{u-1}\cdot \frac{1}{\phi_{X}(\mathcal{C})} \left(\phi_{X^{u}}^{2}(\mathcal{C}) - \frac{1}{u} \cdot \phi_{X^{u}}^{2}(\mathcal{C}) \right) \\ = & \frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \cdot \left(\left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u} }\right)\cdot (1+H_{t-1}) + \frac{u - t}{u} \cdot \phi_{X^{u}}(\mathcal{C}) \right) \\ \end{aligned} \end{equation} Then \begin{equation} \begin{aligned} E[\phi_{X}(\tilde{\mathcal{C}})] = &\frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{1}] + \frac{\phi_{X^{c}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} 
E[\phi_{X}(\tilde{\mathcal{C}})|\mathscr{A}_{2}] \\
\leq & \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}}\right)\cdot (1+H_{t-1}) + \frac{u-t }{u}\cdot \phi_{X^{u}}(\mathcal{C}) + \frac{\phi_{X^{c}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \cdot \frac{\phi_{X^{u}}(\mathcal{C})}{\phi_{X}(\mathcal{C})} \\
\leq & \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}}\right)\cdot (1+H_{t-1} + \frac{1}{u}) + \frac{u-t }{u}\cdot \phi_{X^{u}}(\mathcal{C})\\
\leq & \left(\phi_{X^{c}}(\mathcal{C}) + 8\phi^{opt}_{X^{u}}\right)\cdot (1+H_{t}) + \frac{u-t }{u}\cdot \phi_{X^{u}}(\mathcal{C})\\
\end{aligned}
\end{equation}
\end{proof}
\end{lemma}
Consider now the set $\mathcal{C}$ given by the first chosen center, which covers exactly one cluster $S$ of the optimal clustering. After steps $\textbf{1}$ and $\textbf{2}$ of Algorithm \ref{alo:kmean++} we obtain the initial seeding and clustering $\tilde{\mathcal{C}}$; applying Lemma \ref{corele} with $t = u = k-1$ and $X^{c} = S$ then gives
\begin{equation}
\begin{aligned}
E[\phi_{X}(\tilde{\mathcal{C}})] \leq &\sum_{x \in S} Prob_{x} \cdot\left(\phi_{S}(\mathcal{C}) + 8\phi^{opt}_{X} - 8\phi_{S}^{opt}\right)\cdot (1+H_{k-1}) &\text{Lemma \ref{optle}} \\
\leq &8\phi^{opt}_{X}\cdot (1+\ln k) & H_{k-1} \leq 1+\ln k\\
\end{aligned}
\end{equation}

\subsection{Scalable $k$-Means++}
\newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1}}
$k$-Means$\RNum{2}$ uses an oversampling factor $l = \Omega(k)$ and is inspired by $k$-Means++. As we will see, only $O(\log \psi)$ sampling rounds are needed to bring the objective down to $O(\phi^{*})$; the procedure is given in Algorithm \ref{alo:kmean2}.
\begin{algorithm}[H]
\caption{$k$-Means$\RNum{2}$}
\label{alo:kmean2}
\begin{algorithmic}[1]
\State $\mathcal{C}\leftarrow$ Choose $c_1$ uniformly at random among the data points ($c_1$ is the first center). % uniform distribution
\State $\psi \leftarrow \phi_{X}(\mathcal{C})$
\State For $O(\log \psi)$ times do: $\mathcal{C}^{'} \leftarrow$ sample each $x \in X$ independently with probability $\frac{l \cdot d^{2}(x, \mathcal{C})}{\phi_{X}(\mathcal{C})}$; $\mathcal{C} \leftarrow \mathcal{C}\cup\mathcal{C}^{'}$ end
\State For $x \in \mathcal{C}$, set $\omega_{x}$ to be the number of points in $X$ closer to $x$ than to any other point in $\mathcal{C}$
\State Recluster the weighted points in $\mathcal{C}$ into $k$ clusters.
\end{algorithmic}
\end{algorithm}
\begin{theorem} \label{alo:the}
If an $\alpha$-approximation algorithm is used in Step 5, then Algorithm \ref{alo:kmean2} obtains a solution that is an $O(\alpha)$-approximation to $k$-Means.
\end{theorem}
More precisely, the following holds.
\begin{theorem}
Let $\alpha = \exp(-(1- e^{-l/(2k)})) \approx e^{-\frac{l}{2k}}$. In Algorithm \ref{alo:kmean2},
\begin{equation}
E[\phi_{X}(\mathcal{C}\cup \mathcal{C}^{'})] \leq 8 \phi^{*} + \frac{1+\alpha}{2} \phi_{X}(\mathcal{C})
\end{equation}
\end{theorem}
\begin{corollary}\label{alo:cor1}
If $\phi^{(i)}$ is the objective of the clustering after the $i$-th round of Algorithm \ref{alo:kmean2}, then
\begin{equation}
E[\phi^{(i)}] \leq \frac{16}{1 - \alpha} \phi^{*} + \left(\frac{1+\alpha}{2}\right)^{i} \psi
\end{equation}
\end{corollary}
Corollary \ref{alo:cor1} implies that after $O(\log \psi)$ rounds the objective is reduced to $O(\phi^{*})$; Theorem \ref{alo:the} is then an immediate consequence.

\section{The Choice of $k$}
In general, we do not know the optimal number of clusters $k$ in practical problems. Here we describe one method for choosing it, known as the ``elbow'' method.
The elbow method consists of two steps:
\begin{enumerate}
\item Compute the sum of squared errors $(SSE)$ for several candidate values of $k$ (for example $2, 4, 6, 8$, etc.):
\begin{equation}
SSE={\sum_{l=1}^{k}\sum_{x \in S_{l}}} ||x-c_{l}||^{2}
\end{equation}
\item Plot $k$ against the $SSE$, and choose the $k$ at which the curve bends, i.e. where increasing $k$ further no longer decreases the $SSE$ substantially.
\end{enumerate}
The following example, shown in the figure, illustrates how the elbow method works.
\begin{figure}[htbp]
\centering{\includegraphics[width=10cm]{cluster_numbers.png}}
\caption{Using the Elbow Method to Determine the Optimal Number of Clusters}
\end{figure}
Consider Dataset A on the left. At the top we see a number line plotting each point in the dataset, and below we see an elbow chart showing the SSE after running $k$-Means clustering for $k$ going from $1$ to $10$. We see a pretty clear elbow at $k = 3$, indicating that $3$ is the best number of clusters. However, the elbow method does not always work well, especially if the data is not strongly clustered. Notice how the elbow chart for Dataset B does not have a clear elbow. Instead, we see a fairly smooth curve, and it is unclear which value of $k$ is best.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Other Contents}
$k$-Means is one type of $k$-partition clustering: given a set of $N$ points in Euclidean space and an integer $k$, find a partition of these points into $k$ subsets, each with a center. There are three common formulations of $k$-partition clustering, depending on the particular objective used; besides $k$-Means, the other two are:
\begin{enumerate}[a.]
\item $k$-Center, whose objective is to minimize the maximum distance between a point and its nearest cluster center.
\item $k$-Median, whose objective is to minimize the sum of the distances between each point and its nearest center.
\end{enumerate}
\subsection{Convolutional $k$-Means Clustering}
Convolutional $k$-Means clustering is a different type of algorithm for a different problem, since a CNN is in general trained by supervised learning; we mention it here only for completeness. Convolutional $k$-Means clustering was proposed to train a deep convolutional network based on an enhanced version of the $k$-Means clustering algorithm; it reduces the number of correlated parameters in the form of similar filters, and thus increases test categorization accuracy. Generally speaking, this algorithm uses $k$-Means to cluster the parameters of the CNN.
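\subsection{A Code Sketch of the $k$-Means++ Seeding}
To make the $D^2$-weighted seeding of Algorithm \ref{alo:kmean++} and the three sampling steps listed earlier concrete, we give a short Python sketch. It is only an illustration under our own naming choices (the function \texttt{kmeanspp\_seeds} and the use of NumPy are not taken from any reference implementation), not the implementation used in the cited works.
\begin{verbatim}
import numpy as np

def kmeanspp_seeds(X, k, seed=None):
    """D^2-weighted seeding (k-Means++); X has one data point per row."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]          # first center: uniform over X
    for _ in range(1, k):
        C = np.array(centers)
        # d^2(x, C): squared distance from each point to its closest center
        d2 = ((X[:, None, :] - C[None, :, :])**2).sum(-1).min(axis=1)
        r = rng.uniform(0.0, d2.sum())      # r in [0, phi_X(C)]
        i = 0
        while r - d2[i] > 0 and i < n - 1:  # subtract d^2(x, C) until r <= 0
            r -= d2[i]
            i += 1
        centers.append(X[i])                # the current x becomes the next seed
    return np.array(centers)
\end{verbatim}
The returned centers are then used as the initialization for the standard $k$-Means iterations of Algorithm \ref{alo:kmeans2}.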
% 9.5.07 % This is a sample documentation for Compass in the tex format. % We restrict the use of tex to the following subset of commands: % % \section, \subsection, \subsubsection, \paragraph % \begin{enumerate} (no-nesting), \begin{quote}, \item % {\tt ... }, {\bf ...}, {\it ... } % \htmladdnormallink{}{} % \begin{verbatim}...\end{verbatim} is reserved for code segments % ...'' % \section{Friend Declaration Modifier} \label{FriendDeclarationModifier::overview} {\it The Elements of C++ Style} item \#96 states that \begin{quote} Friend declarations are often indicative of poor design because they bypass access restrictions and hide dependencies between classes and functions. \end{quote} \subsection{Parameter Requirements} This checker takes no parameters and inputs source file. \subsection{Implementation} This pattern is checked with a simple AST traversal that seeks declaration statements and determines if any use the ``friend'' modifier keyword. Any declaration statements found with the ``friend'' modifier are flagged as violations. \subsection{Non-Compliant Code Example} This non-compliant example uses ``friend'' to access private data. \begin{verbatim} class Class { int privateData; friend int foo( Class & c ); public: Class(){ privateData=0; } }; //class Class int foo( Class & c ) { return c.privateData + 1; } //foo( Class & c ) \end{verbatim} \subsection{Compliant Solution} The compliant solution simply uses an accessor function instead. \begin{verbatim} class Class { int privateData; public: Class(){ privateData=0; } int getPrivateData(){ return privateData; } }; //class Class int foo( Class & c ) { return c.getPrivateData() + 1; } //foo( Class & c ) \end{verbatim} \subsection{Mitigation Strategies} \subsubsection{Static Analysis} Compliance with this rule can be checked using structural static analysis checkers using the following algorithm: \begin{enumerate} \item Perform simple AST traversal and visit all declaration statement nodes \item For each declaration statement check the ``friend'' modifier. If ``friend'' modifier is set then flag violation. \item Report any violations. \end{enumerate} \subsection{References} Bumgardner G., Gray A., and Misfeldt T. {\it The Elements of C++ Style}. Cambridge University Press 2004.
\input{../header_function} %---------- start document ---------- % \section{equation -- solving equations, congruences }\linkedzero{equation} In the following descriptions, some type aliases are used. \begin{description} \item[poly\_list]\linkedone{equation}{poly\_list}:\\ \param{poly\_list} is a list {\tt [a0, a1, \ldots, an]} representing a polynomial coefficients in ascending order, i.e., meaning \(a_0 + a_1 X + \cdots + a_n X^n\). The type of each {\tt ai} depends on each function (explained in their descriptions). \item[integer]\linkedone{equation}{integer}:\\ \param{integer} is one of {\it int}, {\it long} or \linkingone{rational}{Integer}. \item[complex]\linkedone{equation}{complex}:\\ \param{complex} includes all number types in the complex field: \linkingone{equation}{integer}, {\it float}, {\it complex} of \python, \linkingone{rational}{Rational} of \nzmath, etc.\\ \end{description} % \subsection{e1 -- solve equation with degree 1}\linkedone{equation}{e1} \func{e1}{\hiki{f}{\linkingone{equation}{poly\_list}}}{\out{\linkingone{equation}{complex}}}\\ \spacing % document of basic document \quad Return the solution of linear equation $ax + b = 0$.\\ \spacing % added document %\spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} {\tt [b, a]} of \linkingone{equation}{complex}.\\ % \subsection{e1\_ZnZ -- solve congruent equation modulo n with degree 1}\linkedone{equation}{e1\_ZnZ} \func{e1\_ZnZ}{\hiki{f}{\linkingone{equation}{poly\_list}},\ \hiki{n}{integer}}{\out{integer}}\\ \spacing % document of basic document \quad Return the solution of $ax + b \equiv 0 \pmod{\param{n}}$.\\ \spacing % added document %\spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} {\tt [b, a]} of \linkingone{equation}{integer}.\\ % \subsection{e2 -- solve equation with degree 2}\linkedone{equation}{e2} \func{e2}{\hiki{f}{\linkingone{equation}{poly\_list}}}{\out{tuple}}\\ \spacing % document of basic document \quad Return the solution of quadratic equation $ax^2 + bx + c = 0$.\\ \spacing % added document %\spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} {\tt [c, b, a]} of \linkingone{equation}{complex}. \\ The result tuple will contain exactly 2 roots, even in the case of double root.\\ % \subsection{e2\_Fp -- solve congruent equation modulo p with degree 2}\linkedone{equation}{e2\_Fp} \func{e2\_Fp}{\hiki{f}{\linkingone{equation}{poly\_list}},\ \hiki{p}{integer}}{\out{list}}\\ \spacing % document of basic document \quad Return the solution of $ax^2 + bx + c \equiv 0 \pmod{\param{p}}$.\\ \spacing % added document \quad If the same values are returned, then the values are multiple roots. \\ \spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} of \linkingone{equation}{integer}s {\tt [c, b, a]}. In addition, \param{p} must be a prime \linkingone{equation}{integer}. \\ % \subsection{e3 -- solve equation with degree 3}\linkedone{equation}{e3} \func{e3}{\hiki{f}{\linkingone{equation}{poly\_list}}}{\out{list}}\\ \spacing % document of basic document \quad Return the solution of cubic equation $ax^3 + bx^2 + cx + d = 0$.\\ \spacing % added document %\spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} {\tt [d, c, b, a]} of \linkingone{equation}{complex}. 
\\ The result tuple will contain exactly 3 roots, even in the case of including double roots.\\ % \subsection{e3\_Fp -- solve congruent equation modulo p with degree 3}\linkedone{equation}{e3\_Fp} \func{e3\_Fp}{\hiki{f}{\linkingone{equation}{poly\_list}},\ \hiki{p}{integer}}{\out{list}}\\ \spacing % document of basic document \quad Return the solutions of $ax^3 + bx^2 + cx + d \equiv 0 \pmod{\param{p}}$.\\ \spacing % added document \quad If the same values are returned, then the values are multiple roots. \\ \spacing % input, output document \quad \param{f} ought be a \linkingone{equation}{poly\_list} {\tt [d, c, b, a]} of \linkingone{equation}{integer}. In addition, \param{p} must be a prime \linkingone{equation}{integer}. \\ \subsection{Newton -- solve equation using Newton's method}\linkedone{equation}{Newton} \func{Newton}{% \hiki{f}{\linkingone{equation}{poly\_list}},\ % \hikiopt{initial}{\linkingone{equation}{complex}}{1},\ % \hikiopt{repeat}{integer}{250}}{\out{complex}}\\ \spacing % document of basic document \quad Return one of the approximated roots of $a_nx^n + \cdots + a_1x + a_0=0$.\\ \spacing % added document \quad If you want to obtain all roots, then use \linkingone{equation}{SimMethod} instead.\\ \negok If \param{initial} is a real number but there is no real roots, then this function returns meaningless values. \\ \spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} of \linkingone{equation}{complex}. \param{initial} is an initial approximation \linkingone{equation}{complex} number. \param{repeat} is the number of steps to approximate a root.\\ % \subsection{SimMethod -- find all roots simultaneously}\linkedone{equation}{SimMethod} \func{SimMethod}{% \hiki{f}{\linkingone{equation}{poly\_list}},\ % \hikiopt{NewtonInitial}{\linkingone{equation}{complex}}{1},\ % \hikiopt{repeat}{integer}{250}}{\out{list}}\\ \spacing % document of basic document \quad Return the approximated roots of $a_nx^n + \cdots + a_1x + a_0$.\\ \spacing % added document \quad \negok If the equation has multiple root, maybe raise some error. \\ \spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} of \linkingone{equation}{complex}.\\ \param{NewtonInitial} and \param{repeat} will be passed to \linkingone{equation}{Newton} to obtain the first approximations.\\ % \subsection{root\_Fp -- solve congruent equation modulo p}\linkedone{equation}{root\_Fp} \func{root\_Fp}{\hiki{f}{\linkingone{equation}{poly\_list}},\ \hiki{p}{integer}}{\out{integer}}\\ \spacing % document of basic document \quad Return one of the roots of $a_nx^n + \cdots + a_1x + a_0 \equiv 0 \pmod{\param{p}}$. \\ \spacing % added document \quad If you want to obtain all roots, then use \linkingone{equation}{allroots\_Fp}.\\ \spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} of \linkingone{equation}{integer}. In addition, \param{p} must be a prime \linkingone{equation}{integer}. \\ \quad If there is no root at all, then nothing will be returned.\\ % \subsection{allroots\_Fp -- solve congruent equation modulo p}\linkedone{equation}{allroots\_Fp} \func{allroots\_Fp}{\hiki{f}{\linkingone{equation}{poly\_list}},\ \hiki{p}{integer}}{\out{integer}}\\ \spacing % document of basic document \quad Return all roots of $a_nx^n + \cdots + a_1x + a_0 \equiv 0 \pmod{\param{p}}$. 
\\ \spacing % added document %\spacing % input, output document \quad \param{f} ought to be a \linkingone{equation}{poly\_list} of \linkingone{equation}{integer}. In addition, \param{p} must be a prime \linkingone{equation}{integer}. \\ \quad If there is no root at all, then an empty list will be returned.\\ % \begin{ex} >>> equation.e1([1, 2]) -0.5 >>> equation.e1([1j, 2]) -0.5j >>> equation.e1_ZnZ([3, 2], 5) 1 >>> equation.e2([-3, 1, 1]) (1.3027756377319946, -2.3027756377319948) >>> equation.e2_Fp([-3, 1, 1], 13) [6, 6] >>> equation.e3([1, 1, 2, 1]) [(-0.12256116687665397-0.74486176661974479j), (-1.7548776662466921+1.8041124150158794e-16j), (-0.12256116687665375+0.74486176661974468j)] >>> equation.e3_Fp([1, 1, 2, 1], 7) [3] >>> equation.Newton([-3, 2, 1, 1]) 0.84373427789806899 >>> equation.Newton([-3, 2, 1, 1], 2) 0.84373427789806899 >>> equation.Newton([-3, 2, 1, 1], 2, 1000) 0.84373427789806899 >>> equation.SimMethod([-3, 2, 1, 1]) [(0.84373427789806887+0j), (-0.92186713894903438+1.6449263775999723j), (-0.92186713894903438-1.6449263775999723j)] >>> equation.root_Fp([-3, 2, 1, 1], 7) >>> equation.root_Fp([-3, 2, 1, 1], 11) 9L >>> equation.allroots_Fp([-3, 2, 1, 1], 7) [] >>> equation.allroots_Fp([-3, 2, 1, 1], 11) [9L] >>> equation.allroots_Fp([-3, 2, 1, 1], 13) [3L, 7L, 2L] \end{ex}%Don't indent!(indent causes an error.) \C %---------- end document ---------- % \input{../footer}
\documentclass[a4paper]{siamart190516}
\usepackage{damacros}
% PACKAGES added by TB
% PACKAGES FOR ALGORITHMS (PSEUDO-CODE)
\usepackage{algorithm}
\usepackage{algorithmic}

% Sets running headers as well as PDF title and authors
\headers{Coupling intra-cellular and multi-cellular dynamics in spatially-extended models of root-hair initiation}{D. Avitabile, S. Perotto, N. Ferro, T. Babini}

% Title. If the supplement option is on, then "Supplementary Material"
% is automatically inserted before the title.
\title{Coupling intra-cellular and multi-cellular dynamics in spatially-extended models of root-hair initiation}

% Authors: full names plus addresses.
\author{%
Daniele Avitabile%
\thanks{%
Vrije Universiteit Amsterdam, Department of Mathematics, Faculteit der Exacte Wetenschappen, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands. \protect\\
Inria Sophia Antipolis M\'editerran\'ee Research Centre, MathNeuro Team, 2004 route des Lucioles-Boîte Postale 93 06902, Sophia Antipolis, Cedex, France. \protect\\
(\email{[email protected]}, \url{writemywebpage}).
}
% .... other authors still missing
% \and
% Paul T. Frank \thanks{Department of Applied Mathematics, Fictional University, Boise, ID
% (\email{[email protected]}, \email{[email protected]}).}
% \and Jane E. Smith\footnotemark[3]
}

\begin{document}

\maketitle

\begin{abstract}
This work deals with novel models and numerical approximations of spatially-extended multi-cellular models of Rho Of Plants (ROPs), that is, a family of proteins responsible for root-hair initiation in cells of the plant Arabidopsis thaliana. The study of this dynamical system is of great relevance in so-called Agriculture 4.0, since it is instrumental in optimising plant uptake. In particular, ascertaining how intra-cellular protein distributions and extra-cellular coupling influence root-hair initiation is a challenging but pressing problem. Current studies have focussed on two separate model types: on the one hand, ROP dynamics is studied in single-cell models, which resolve patterns at the sub-cellular level; on the other, multi-cellular models with realistic geometries neglect intra-cellular patterning. In this work we make progress on coupling these two model descriptions. We initially focus on a well-established single-cell, nonlinear reaction-diffusion model, here approximated for the first time with a finite-element scheme. In addition, we present a new model which couples multiple cells through ROP flux at the interface. We present numerical evidence that such coupling has a bearing on the patterns supported by the model. It is shown that, under variations of auxin gradients, the model robustly forms ROP hotspots from ROP stripes, and that spots are later advected downstream. Finally, we consider a novel model in which the auxin dynamics are not prescribed, but derive from the interaction between this hormone and other membrane proteins (PIN). We show that self-sustained auxin oscillations influence ROP intra-cellular patterning.
\end{abstract}

\section{Introduction (Daniele+Teresa+Simona)}
[To do: add the model from Chapter 4 (equations only) + positioning in the literature + novelty of the paper.]
% from chapter 4 physical model sec.
the dynamics of auxin and carriers proteins PIN in each cell $\Omega_i$ with $i \in \{ 1, ..., N \}$ can be written as the following system of coupled ordinary differential equations: \begin{equation}\begin{aligned} \begin{cases} {\displaystyle d a_i\over\displaystyle d t} & = {\displaystyle 1 \over \displaystyle V_i} \sum_{j=1}^{N} A_{ij} \Phi_{ij} +k - \delta a_i \\[8pt] {\displaystyle d \Tilde{P}_{ij}\over\displaystyle d t} & = h\left(\Phi_{ij}\right) + \rho_0 - \mu \Tilde{P}_{ij}, \end{cases} \end{aligned} \end{equation} % ... The original system of equations can be rescaled and simplified. In particular, under the assumptions of cells having same volume $V =V_i$ and exchange surface areas $A = A_{ij}$, rescaling properly the diffusion coefficient $D_a$ and variables $P_{ij}$, a new system is obtained. Thus, the final system of equations we work on is: \begin{equation}\label{eq:Sys_auxPIN}\begin{aligned} \begin{cases} {\displaystyle d a_i\over\displaystyle d t} & = \sum_{j \in \mathcal{N}_i} \displaystyle \Phi_{ji} +k - \delta a_i \\[8pt] {\displaystyle d P_{ij}\over\displaystyle d t} & = h\left(\Phi_{ij}\right) - \mu P_{ij}, \end{cases} \end{aligned} \end{equation} % ... from chapter 2 sec physical model The dimentional reaction-diffusion model summarizing binding process, autocatalytic activation and catalysis of ROPs proteins described in Section \ref{sec:intromodel} is formulated as follows: % As mentioned before, the dimentional reaction-diffusion model is: \begin{equation} \label{eq:FM}\begin{aligned} \left\lbrace \begin{matrix} \partial_t u = & D_1 \Delta_s u + k_{20} \alpha(x,y)u^2 v - \left(c+r\right) u + k_1 v & \ \text{in} \ \Omega\\ \partial_t v = & D_2 \Delta_s v - k_{20} \alpha(x,y) u^2 v - k_1 v + c \ u + b & \ \text{in} \ \Omega. \end{matrix} \right. \end{aligned}\end{equation} % ... The RD system is rewritten to explicit its mathematical formulation and methods in order to solve it in the most general way, including both the case of the RD system with the original parameters and the RD system with the rescaled ones. As a consequence, we define a unified system of equations comprehensive of both \eqref{eq:FM} and \eqref{eq:adim} systems: \begin{equation} \label{eq:final} \left\lbrace \begin{matrix} \partial_t u = & \Tilde{D_1} \Delta_s u + \Tilde{a_1} u + \Tilde{b_1} v + \Tilde{c_1} u^2 v \ \ \text{in} \ \Omega\\ \partial_t v = & \Tilde{D_2} \Delta_s v + \Tilde{a_2} v + \Tilde{b_2} u + \Tilde{c_2} u^2 v + f_2 \ \ \text{in} \ \Omega, \end{matrix} \right. \end{equation} with tilde parameters defined differently in the two cases under study \section{A starting modeling (Teresa+Daniele)} Modello di Capitolo 3 \textbf{Physical model - Sec.3.1} We consider the root-hair cell projection onto a 2D rectangular domain, neglecting axial dimension. % as in Chapter \ref{cap:2} A system of four cells is schematically presented in Figure \ref{fig:2cell}. We can see that each cell has longitudinal and transverse boundaries in common with close cells. We recall the single cellular model, namely: \begin{equation} \label{eq:singModel} \left\lbrace \begin{matrix} \begin{aligned} & \partial_t u = \Tilde{D_1} \Delta_s u + \Tilde{a_1} u + \Tilde{b_1} v + \Tilde{c_1} u^2 v & \ \text{in} \ \Omega\\ & \partial_t v = \Tilde{D_2} \Delta_s v + \Tilde{a_2} v + \Tilde{b_2} u + \Tilde{c_2} u^2 v + f_2 & \ \text{in} \ \Omega \\ & \Tilde{D_1} \nabla_s u \cdot \mathbf{n} = 0 & \ \text{on} \ \partial \Omega \\ & \Tilde{D_2} \nabla_s v \cdot \mathbf{n} = 0 & \ \text{on} \ \partial \Omega. 
\end{aligned} \end{matrix} \right. \end{equation} No-flux on $\partial \Omega$, namely Neumann homogeneous boundary conditions, characterizes the system behaviour along the cell boundary. % prima spieghiamo il significato fisico, poi come rappresentarlo matematicamente In the multi-cellular model, communication between cells is represented by allowed flux of ROPs, active and inactive, through localized channels along boundaries between neighboring cells. We define as neighbor of cell $\Omega_i$ the set of cells with index in $\mathcal{N}_i = \{ j : \partial \Omega_j \cap \partial \Omega_i \neq \emptyset \}$. The flux of concentration of active and inactive ROPs $(u_i, v_i)$ is proportional to the difference of concentration $(u_j, v_j)$ in neighbouring cells for $j \ \in \ \mathcal{N}_i$. We formulate the new model still focusing on one single cell domain $\Omega_i$, taking into account the new flux generated from the discrepancy of concentrations with the neighboring cells. The new flux results in adding a non-homogeneous Neumann boundary condition on the common interfaces, as follows: \begin{equation} \label{eq:pluriModel} \left\lbrace \begin{matrix} \begin{aligned} & \partial_t u_i = \Tilde{D_1} \Delta_s u_i + \Tilde{a_1} u_i + \Tilde{b_1} v_i + \Tilde{c_1} (u_i)^2 v_i & \ \text{in} \ \Omega_i\\[6pt] & \partial_t v_i = \Tilde{D_2} \Delta_s v_i + \Tilde{a_2} v_i + \Tilde{b_2} u_i + \Tilde{c_2} (u_i)^2 v_i + f_2 & \ \text{in} \ \Omega_i \\[6pt] & \Tilde{D_1} \nabla_s u_i \cdot \mathbf{n} = 0 & \ on \ \partial \Omega_i \backslash \cup_{j \in \mathcal{N}_i} \Gamma_{j,i} \\[6pt] & \Tilde{D_2} \nabla_s v_i \cdot \mathbf{n} = 0 & \ on \ \partial \Omega_i \backslash \cup_{j \in \mathcal{N}_i} \Gamma_{j,i} \\[6pt] & \Tilde{D_1} \nabla_s u_i \cdot \mathbf{n} = \beta_{uRR} \ \alpha_{uRR} \left(u_j - u_i \right) & \ on \ \Gamma_{j,i} \ \forall j \in \mathcal{N}_i \\[6pt] & \Tilde{D_2} \nabla_s v_i \cdot \mathbf{n} = \beta_{vRR} \ \alpha_{vRR} \left(v_j - v_i \right) & \ on \ \Gamma_{j,i}\ \forall j \in \mathcal{N}_i , \end{aligned} \end{matrix} \right. \end{equation} where we define as $(u_i, v_i)$ the concentrations of active and inactive ROPs restricted to cell $\Omega_i$: $(u_i, v_i): \Omega_i \times \left(0, T_{max} \right) \longrightarrow \RSet^2$ and $\Gamma_{j,i}$ represents the common side between cell $\Omega_i$ and cell $\Omega_j \ \in \mathcal{N}_i $, therefore defined as: $\Gamma_{j,i} = \partial \Omega_i \cap \partial \Omega_j$. Each of the neighboring cells follows the same model for hair formation, meaning that system in \eqref{eq:pluriModel} holds $\forall \ i$ cells composing the pluricellular system. As a consequence, the newly defined boundary conditions is coupled with the solutions $(u_j, v_j)$ with $j \ \in \mathcal{N}_i$. Therefore, the pluricellular system requires a proper iterative method for setting correctly boundary conditions depending on solutions in the neighboring cells. Not communicating with other RH cells boundaries have as before no-flux. The new boundary conditions are characterized by a function and a coefficient for both active active ROPs $u$ and inactive ROPs $v$, having the same meaning: \begin{itemize} \item $\beta_{u/v RR} \ [\frac{1}{\mu m^2}]$ are indicator functions defined on boundaries of cells, equal to $1$ where the communicating channels are open and $0$ where no-flux is assumed; \item $\alpha_{u/v RR} \ [\frac{1}{\mu m}]$ are transport efficiency coefficients, representing a sort of flux quantity allowed through channels. 
\end{itemize} These channel parameters aim at representing the average active transport along the sides of confining cells, set equal to the flux of proteins from one cell to the neighbouring ones. We have no physical insight on previously cited functions modeling open channels for ROPs. A whole set of simulations for the proper tuning of parameters is required, in order to find a sufficiently plausible setting of the system. \section{A new dd-wise coupling approach (Simona+Daniele+Nicola+Teresa)} DD sul modello del Capitolo 3 + parte discreta \textbf{Numerical treatment - Sec.3.2} % brutto mettere gli stessi titoli? % -> prima questione: trattarlo con uno schema iterativo simile robin robin algorithm of domain decomposition, in modo da accoppiare le soluzioni; quindi descrivere bene lo schema (dalla strong formulation); già partendo dalo schema semi implicit nel tempo The communication between cells requires a proper iterative algorithm in order to deal with the mutual interplay between confining cells. Every subdomain $\Omega_i$ of the pluricellular system $\Omega$ represents the single cell and the original system of equations in \eqref{eq:final} is solved in $\Omega_i$ for all $i = 1, ..., N$. We solve such systems by means of the semi-implicit method described in Section \ref{sec:SI method}. Let us consider the weak formulation restricted to $\Omega_i$, defining the functional space $V_i = \{ w_i \in \ H^1\left(\Omega_i\right)\}$, the finite element subspace $V_{i,h} \subset V_i $ and the time interval discretization used in Section \ref{sec:SI method}. In particular, we divide the time interval $\left[0, T_{max}\right]$ in $N_{max}$ time steps such that $t^n = n \Delta t$ with $\Delta t = T_{max} / N_{max} $. We rewrite the full discretized formulation, identifying $u_{i,h}$ with $u_h|_{\Omega_i}$, as: given the initial state $(u_{i,h}^0, v_{i,h}^0) $, find $(u_{i,h}^{n+1}, v_{i,h}^{n+1}) \ \in V_{i,h} \times V_{i,h}$ such that \begin{equation} \label{eq:fullGalerkin} \left\lbrace \begin{matrix} \begin{aligned} a_{i,u}(u_{i,h}^{n+1}, w_{i,h}) + b_{i,u}(v_{i,h}^{n+1}, w_{i,h}) + c_{i,u}(v_{i,h}^{n+1}, w_{i,h}) = f_{i,u}(w_{i,h}) \ \forall \ w_{i,h} \ \in V_{i,h} \\[6pt] a_{i,v}(v_{i,h}^{n+1}, w_{i,h}) + b_{i,v}(u_{i,h}^{n+1}, w_{i,h}) + c_{i,v}(v_{i,h}^{n+1}, w_{i,h}) = f_{i,v}(w_{i,h}) \ \forall \ w_{i,h} \ \in V_{i,h}, \end{aligned} \end{matrix} \right. \end{equation} $\forall n = 0, ... 
N_{max}$, where \begin{subequations} \label{eq:Gvarfmono} \begin{align} a_{i,u}(u_{i,h}^{n+1}, w_{i,h}) = & \int_{\Omega_i} \left( \frac{1}{\Delta t} u_{i,h}^{n+1} w_{i,h} + \Tilde{D}_1 \nabla_s u_{i,h}^{n+1} \cdot w_{i,h} - \Tilde{a}_1 u_i^{n+1} w_{i,h} \right) \label{Gmono:au}\\ - & \int_{\partial \Omega_i}\left(\Tilde{D}_1 \nabla_s u_{i,h}^{n+1} \cdot \mathbf{n} w_{i,h}\right) \nonumber\\ b_{i,u}(v_{i,h}^{n+1}, w_{i,h}) = & \int_{\Omega_i} \left(- \Tilde{b}_1 v_{i,h} w_{i,h} \right) \label{Gmono:bu} \\ c_{i,u}(v_{i,h}^{n+1}, w_{i,h}) = & \int_{\Omega_i} \left(- \Tilde{c}_1 (u_{i,h}^{n})^2 v_{i,h}^{n+1} w_{i,h} \right) \label{Gmono:cu} \\[6pt] a_{i,v}(v_{i,h}^{n+1}, w_{i,h}) = & \int_{\Omega_i} \left(\frac{1}{\Delta t} v_{i,h}^{n+1} w_{i,h} + \Tilde{D}_2 \nabla_s v_{i,h}^{n+1} \cdot w_{i,h} - \Tilde{a}_2 v_i^{n+1} w_{i,h} \right) \label{Gmono:av} \\ - & \int_{\partial \Omega_i} \left( \Tilde{D}_1 \nabla_s u_{i,h}^{n+1} \cdot \mathbf{n} w_{i,h} \right) \nonumber\\ b_{i,v}(v_{i,h}^{n+1}, w_{i,h}) = &\int_{\Omega_i} \left( - \Tilde{b}_2 u_{i,h} w_{i,h} \right) \label{Gmono:bv}\\ c_{i,v}(v_{i,h}^{n+1}, w_{i,h}) =& \int_{\Omega_i} \left( - \Tilde{c}_2 (u_{i,h}^{n})^2 v_{i,h}^{n+1} w_{i,h} \right) \label{Gmono:cv}\\[6pt] f_{i,u}(w_{i,h}) = & \int_{\Omega_i} \left( \frac{1}{\Delta t} u_{i,h}^n \ w_{i,h} \right) \label{Gmono:fu}\\ f_{i,v}(w_{i,h}) = & \int_{\Omega} \left( \frac{1}{\Delta t} v_{i,h}^n \ w_{i,h} + f_2 w_{i,h} \right). \label{Gmono:fv} \end{align} \end{subequations} The introduction of different boundary conditions will require to modify the bilinear forms \eqref{Gmono:au} and \eqref{Gmono:av} and to add contributions in the right hand sides \eqref{Gmono:fu} and \eqref{Gmono:fv}. To this aim, we synthetically rewrite the model problem \eqref{eq:fullGalerkin}, assuming generic boundary conditions, through a linear operator $\mathcal{L}$ in the following way: Given the initial state $(u_i^0, v_i^0)$, find $(u_i^{n+1}, v_i^{n+1}) \ \in \Omega_i$ such that: \begin{equation}\label{eq:modelpb} % \begin{cases} \mathcal{L}^n (u_i^{n+1}, v_i^{n+1}) = \mathbf{f}^n \ \text{in} \ \Omega_i % \Tilde{D_1} \nabla_s u_i^{n+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_i \\ % \Tilde{D_2} \nabla_s v_i^{n+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_i % \end{cases} \end{equation} $\forall n = 0, ... N_{max}$. \textbf{A new iterative modeling algorithm - Sec 3.2.3 } The model that we propose to make cells communicate can be regarded as a simplification of the classical domain decomposition scheme with Robin boundary conditions. We start for simplicity from a two cells problem and rewrite the common interface boundary conditions in \eqref{eq:RR} to recover the modelled open channels in \eqref{eq:pluriModel}. 
In the spirit of a block-Gauss-Seidel algorithm, we solve in sequence: \begin{equation} \label{eq:RR_final} \begin{aligned} & \begin{cases} \mathcal{L}^n (u_1^{k+1}, v_1^{k+1}) = \mathbf{f}^n \ \text{in} \ \Omega_1 \\ \Tilde{D_1} \nabla_s u_1^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_1 \setminus \Gamma \\ \Tilde{D_1} \displaystyle{\partial u_1^{k+1}\over\partial \mathbf{n}} = \alpha_{uRR} u_2^{k} - \alpha_{uRR} u_1^{k+1} \ \text{on} \ \Gamma \\ \Tilde{D_2} \nabla_s v_1^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_1 \setminus \Gamma \\ \Tilde{D_2} \displaystyle{\partial v_1^{k+1}\over\partial \mathbf{n}} = \alpha_{vRR} v_2^{k} -\alpha_{vRR} v_1^{k+1} \ \text{on} \ \Gamma \end{cases} \\[6pt] & \begin{cases} \mathcal{L}^n (u_2^{k+1}, v_2^{k+1}) = \mathbf{f}^n \ \text{in} \ \Omega_2 \\ \Tilde{D_1} \nabla_s u_2^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_2 \setminus \Gamma \\ \Tilde{D_1} \displaystyle{\partial u_2^{k+1}\over\partial \mathbf{n}} = \alpha_{uRR} u_1^{k+1} - \alpha_{uRR} u_2^{k+1} \ \text{on} \ \Gamma \\ \Tilde{D_2} \nabla_s v_2^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_2 \setminus \Gamma\\ \Tilde{D_2} \displaystyle{\partial v_2^{k+1}\over\partial \mathbf{n}} = \alpha_{vRR} v_1^{k+1} -\alpha_{vRR} v_2^{k+1} \ \text{on} \ \Gamma. \end{cases} \end{aligned}\end{equation} The flux imposed depends on the difference of the neighbouring solutions. As a consequence, we are imposing a not necessarily null Neumann boundary condition. Equation \eqref{eq:RR_final} defines a RR iterative method applied to two cells using proper parameters $\beta_{u/vRR}$ and $\alpha_{u/v RR}$ from the model formulated in Section \ref{sec:PluriMod}: starting from $(u_2^{k = 0}, v_2^{k = 0}) = (u_2^n, v_2^n)$, find $(u_1^{k+1}, v_1^{k+1}) \ \in V_{1}$ and $(u_2^{k+1}, v_2^{k+1}) \ \in V_{2}$: \begin{equation}\label{eq::RRmod} \begin{aligned} & \begin{cases} \mathcal{L}^n (u_1^{k+1}, v_1^{k+1}) = \mathbf{f}^n \ \text{in} \ \Omega_1 \\ \Tilde{D_1} \nabla_s u_1^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_1 \setminus \Gamma \\ \Tilde{D_1} \displaystyle{\partial u_1^{k+1}\over\partial \mathbf{n}} = \beta_{uRR} \alpha_{uRR} \left( u_2^{k} - u_1^{k+1} \right) \ \text{on} \ \Gamma \\ \Tilde{D_2} \nabla_s v_1^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_1 \setminus \Gamma \\ \Tilde{D_2} \displaystyle{\partial v_1^{k+1}\over\partial \mathbf{n}} = \beta_{vRR} \alpha_{vRR} \left( v_2^{k} - v_1^{k+1} \right) \ \text{on} \ \Gamma \end{cases} \\[6pt] & \begin{cases} \mathcal{L}^n (u_2^{k+1}, v_2^{k+1}) = \mathbf{f}^n \ \text{in} \ \Omega_2 \\ \Tilde{D_1} \nabla_s u_2^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_2 \\ \Tilde{D_1} \displaystyle{\partial u_2^{k+1}\over\partial \mathbf{n}} = \beta_{uRR} \alpha_{uRR} \left( u_1^{k+1} - u_2^{k+1} \right) \ \text{on} \ \Gamma \\ \Tilde{D_2} \nabla_s v_2^{k+1} \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega_2 \setminus \Gamma\\ \Tilde{D_2} \displaystyle{\partial v_2^{k+1}\over\partial \mathbf{n}} = \beta_{vRR} \alpha_{vRR} \left( v_1^{k+1} - v_2^{k+1} \right) \ \text{on} \ \Gamma. \end{cases} \end{aligned}\end{equation} for $k \geq 0$ up to convergence. We remark that in \eqref{eq::RRmod} the coefficients $\beta_{u/vRR}$ and $\alpha_{u/v RR}$ have physical meaning since they come from the model \eqref{eq:pluriModel}. This is in contrast with the model and method presented in Section \ref{sec:RRclassic}, where the Robin coefficients are arbitrary. 
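To illustrate the structure of the iteration \eqref{eq::RRmod} in the simplest possible setting, we include a self-contained Python sketch applying the same block Gauss--Seidel Robin--Robin coupling to a toy steady one-dimensional diffusion problem on two subdomains sharing one interface, with a source term only in the first subdomain. The sketch is only meant to expose the exchange of interface data and the convergence check; all names and parameter values are illustrative, and this is not the finite-element solver used in this work. At convergence the two interface traces differ, and the transmitted flux is proportional to their difference, as in the channel model \eqref{eq:pluriModel}.
\begin{verbatim}
import numpy as np

# Toy 1D analogue of the block Gauss-Seidel Robin-Robin iteration:
# -D u'' = f on two subdomains, homogeneous Dirichlet at the outer ends, and a
# "channel" condition D du/dn = alpha*(u_neighbour - u_self) at the shared
# interface.  Finite differences; all values are illustrative.
D, alpha = 1.0, 5.0        # diffusivity and channel transport efficiency
n = 50                     # grid intervals per subdomain
h = 0.5 / n                # mesh size

def solve_subdomain(g, f):
    """One subdomain solve: Dirichlet at the outer end (index 0) and a Robin
    condition D du/dn = alpha*(g - u) at the interface node (index n)."""
    A = np.zeros((n + 1, n + 1))
    b = np.full(n + 1, f)
    A[0, 0], b[0] = 1.0, 0.0                      # outer Dirichlet end
    for i in range(1, n):                         # interior rows: -D u'' = f
        A[i, i - 1] = A[i, i + 1] = -D / h**2
        A[i, i] = 2.0 * D / h**2
    A[n, n], A[n, n - 1] = D / h + alpha, -D / h  # Robin row at the interface
    b[n] = alpha * g
    return np.linalg.solve(A, b)

u1, u2 = np.zeros(n + 1), np.zeros(n + 1)         # initial guesses
for k in range(200):                              # block Gauss-Seidel sweeps
    u1_new = solve_subdomain(u2[n], 1.0)          # cell 1: source present
    u2_new = solve_subdomain(u1_new[n], 0.0)      # cell 2: sees cell 1's update
    res = max(np.abs(u1_new - u1).max(), np.abs(u2_new - u2).max())
    u1, u2 = u1_new, u2_new
    if res < 1e-10:
        break
print(k + 1, u1[n], u2[n])   # iteration count and the two interface traces
\end{verbatim}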
Let $V_{i,h}$ denote the finite dimensional subspace of $H^1\left(\Omega_i\right)$, wih $\Omega_i$ being the sub-domain of the pluricellular system $\Omega$ corresponding to cell. We find solutions $\left(u_{h}^{n+1}, v_{h}^{n+1}\right)|_{\Omega_i}$ identified with $\left(u_{i,h}, v_{i,h}\right) \ \in V_{i,h}$ for each time step $t^{n+1}$, solving up to convergence the iteration step, whose Galerkin formulation is: \begin{equation}\begin{aligned} a_{i,u}^{RR}(u_{i,h}^{k+1}, w_{i,h}) + b_{i,u}(v_{i,h}^{k+1}, w_{i,h}) + c_{i,u}^n(v_{i,h}^{k+1}, w_{i,h}) = f_{i,u}^{RR}(w_{i,h}) \ \forall w_{i,h} \in V_{i,h} \\ a_{i,v}^{RR}(v_{i,h}^{k+1}, w_{i,h}) + b_{i,v}(u_{i,h}^{k+1}, w_{i,h}) + c_{i,v}^n(v_{i,h}^{k+1}, w_{i,h}) = f_{i,v}^{RR}(w_{i,h}) \ \forall w_{i,h} \in V_{i,h}. \end{aligned}\end{equation} The bilinear forms used are equal to \eqref{eq:Gvarfmono} - \eqref{eq:au&avRR} for classic Robin-Robin algorithm. The only difference is in the right hand side \eqref{eq:fu&fvRR} in which has been neglected the weak normal derivative of the neighbour solutions, as follows: \begin{align} f_{i,u}^{RR}(w_{i,h}) & = f_{i,u}(w_{i,h}) + \int_{\Gamma} \left(\beta_{uRR} \alpha_{uRR} \mathcal{I}_{i,j} u_{j,h}^{k} \mathcal{I}_{i,j}w_{j,h} \right), \\ f_{i,v}^{RR}(w_{i,h}) & = f_{i,v}(w_{i,h}) + \int_{\Gamma} \left(\beta_{vRR} \alpha_{vRR} \mathcal{I}_{i,j} v_{j,h}^{k} \mathcal{I}_{i,j} w_{j,h} \right). \end{align} Consequently, the algebraic formulation of the new iterative method used is formulated similarly as in \eqref{eq:LinSysRR}, with time-dependent block matrix that need to be reassembled at each time-step. The right-hand sides depend on the previous solution found for the neighbouring cells and their contributions need to be interpolated by means of a interpolation matrix as in Robin-Robin classic method. We here explicit the whole iterative method for a two cells composed system. Starting from initial guess given by the previous time-step solution $\begin{bmatrix} \mathbf{U}_2^{0} \\ \mathbf{V}_2^{0} \end{bmatrix} = \begin{bmatrix} \mathbf{U}_2^{n} \\ \mathbf{V}_2^{n} \end{bmatrix}$, solve problem for $i = 1$ to find $\begin{bmatrix} \mathbf{U}_1^{k+1} \\ \mathbf{V}_1^{k+1} \end{bmatrix}$: \begin{equation*} \begin{aligned} \begin{bmatrix} A_u^1 & B_u^1 + C_u^1\left( \mathbf{U}_1^n\right) \\ B_v^1 & A_v^1 + C_v^1\left( \mathbf{U}_1^n\right) \end{bmatrix} \begin{bmatrix} \mathbf{U}_1^{k+1} \\ \mathbf{V}_1^{k+1} \end{bmatrix} = \begin{bmatrix} F^1_u \left(\mathbf{U}_2^k\right) \\ F^1_v \left(\mathbf{V}_2^k\right) \end{bmatrix} \end{aligned} \end{equation*} and then solve problem for $ i = 2$ to find $\begin{bmatrix} \mathbf{U}_2^{k+1} \\ \mathbf{V}_2^{k+1} \end{bmatrix}$: \begin{equation*} \begin{aligned} \begin{bmatrix} A_u^2 & B_u^2 + C_u^2\left( \mathbf{U}_2^n\right) \\ B_v^2 & A_v^2 + C_v^2\left( \mathbf{U}_2^n\right) \end{bmatrix} \begin{bmatrix} \mathbf{U}_2^{k+1} \\ \mathbf{V}_2^{k+1} \end{bmatrix} = \begin{bmatrix} F^2_u \left(\mathbf{U}_1^k\right) \\ F^2_v \left(\mathbf{V}_2^k\right) \end{bmatrix} \end{aligned}\end{equation*} for $k \geq 0$ up to convergence. 
Iterations end when the normalized residual of consecutive computed solutions is smaller than a proper tolerance or when a maximum number of iterations is performed and we update the new solution as: $$\begin{bmatrix} \mathbf{U}_1^{n+1} \\ \mathbf{V}_1^{n+1} \end{bmatrix} = \begin{bmatrix} \mathbf{U}_1^{k+1} \\ \mathbf{V}_1^{k+1} \end{bmatrix}, \ \ \ \begin{bmatrix} \mathbf{U}_2^{n+1} \\ \mathbf{V}_2^{n+1} \end{bmatrix} = \begin{bmatrix} \mathbf{U}_2^{k+1} \\ \mathbf{V}_2^{k+1} \end{bmatrix}$$ Matrices and vectors used are defined in the following way: \begin{equation} \begin{aligned} & \left[ A_u^i\right]_{j,l} & = a_{i,u}^{RR}(\phi_l, \phi_j), \ \ \ \left[ A_v^i\right]_{j,l} & = a_{i,v}^{RR}(\phi_l, \phi_j) \\ & \left[ B_u^i\right]_{j,l} & = b_{i,u}(\phi_l, \phi_j), \ \ \ \left[ B_v^i\right]_{j,l} & = b_{i,v}(\phi_l, \phi_j)\\ & \left[ C_u^i\right]_{j,l} & = c_{i,u}(\phi_l, \phi_j),\ \ \ \left[ C_v^i\right]_{j,l} & = c_{i,v}(\phi_l, \phi_j)\\ & \left[F_u^i\right]_{j} & = f_{i,u}^{RR}(\phi_j), \ \ \ \left[F_v^i\right]_{j} & = f_{i,v}^{RR}(\phi_j), \end{aligned} \end{equation} being $\{\phi_l\}_{l = 1}^{N_h}$ the functional basis of $V_{i,h}$ finite dimensional space defined on each cell $\Omega_i$ with $i =1,2$. A sketch of the procedure to be adopted to deal with a generic N cells pluricellular system using Robin-Robin modifed algorithm is schematically given in Algorithm \ref{alg:RRmod}; $r_i$ are different coefficients characterizing initial state of concentrations, necessary for having flux between communicating cells. For physical reasons, we choose same initial guesses in the direction of the auxin gradient. We have implemented a solver for a system of four cells. As expressed in \eqref{eq::RRmod}, the iterative procedure is formulated such that the pluricellular domain is solved sequentially, in the sense that the boundary conditions characterizing sub-domain $i$, depending on sub-domain solutions of $j \in \mathcal{N}_i$, are computed using the newly updated solutions. In view of a parallel implementation, the method can be reformulated such that the new boundary conditions are a function of the previous iteration solution. \begin{algorithm}[t] \caption{Pluricellular system solver procedure: RR} \label{alg:RRmod} Given $N \geq 1$ cells, $r_i$ \begin{algorithmic}[1] \STATE Initialization: $\forall i = 1, ..., N$ \STATE \verb|[U0i, V0i]| $\gets [r_i u_0, r_i v_0]$ \STATE \verb|[Uiprec, Viprec]| $\gets$ \verb|[U0i, V0i]| \WHILE{$t < T_{max}$} \STATE{\verb|assemble| matrix for $\forall i = 1,..., N$} \FOR{$iter < Niter$} \STATE{$\forall i =1, ..., N$} \STATE{compute BC contribute from $j \in \mathcal{N}_i$} \STATE{\verb|interpolate| on $i$} \STATE{update \verb|rhs|} \STATE{\verb|solve| $\Omega_i$ problem \eqref{eq:pluriModel}} \STATE{update residual, check tolerance, update $iter$} \STATE \verb|[Uiprec, Viprec]| $\gets$ \verb|[Ui, Vi]| \ENDFOR \STATE \verb|[U0i, V0i]| $\gets$ \verb|[Ui, Vi]| \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Convergence check (Simona+Nicola+Teresa+Daniele)} \subsection{Reliability (Simona+Nicola+Teresa+Daniele)} \subsection{Eventuali sensitivity (Simona+Nicola+Teresa+Daniele)} \section{DD per il modello completo (Simona+Nicola+Teresa+Daniele)} \tb{si intende solo i risultati o anche la discretizzazione?} \section{Conclusions and future developments (Simona+Daniele)} \textbf{Conclusions and future developments}% In this thesis we developed a multi-cellular model accounting for a spatially-extended intra-cellular system for ROPs pattern formation. 
In the proposed framework, we solved ROP pattern formation in a system composed of multiple cells, together with a transport model for the hormone auxin. The RD model explains the auxin-mediated action of ROPs in an Arabidopsis root-hair cell, leading to the formation of localized patches of activated ROPs. Our simulations support conclusions reached in other works regarding ROP dynamics under an a-priori defined auxin distribution \cite{phdthesis:victor, intra1_R, intra2}. We analyzed various scenarios in which a stripe-like patch forms where the auxin concentration is higher; the stripe then destabilizes into spot-like states, and multiple spots align with the auxin gradient or travel towards the auxin minimum. Several results confirm that, for a transversally independent gradient, lateral stripes become unstable states.

As a new contribution, we then extended the intra-cellular root-hair initiation model to a multi-cellular system, developing a new model that takes communication between cells into account. To do so, we defined a boundary value problem with new boundary conditions between neighboring cells, and we developed an iterative procedure to solve it, taking a Robin-Robin domain decomposition scheme as a reference. We impose fluxes of ROPs between neighboring cells, depending on the difference of the ROP concentrations, through localised open channels. These connections are tuned so as to produce results that differ considerably from a configuration of non-communicating cells, while preserving all previous analyses of the important parameters characterizing the system. We numerically assessed the robustness of the proposed model, in which the cell coupling cooperates with the auxin distribution in influencing ROP pattern formation.

Having defined a reliable multi-cellular model, we also took the dynamics of the auxin concentration into account. Auxin is regulated by the carrier proteins PIN, following a nonlinear system of ODEs. We implemented a semi-implicit method to solve such a system in a two-cell setting. The simulations we carried out show oscillating values of the auxin concentration for specific sets of parameters, as expected from previous studies.

A further confirmation of the robustness of pattern formation according to the proposed multi-cellular model, when channel communication between cells is considered, is given in Chapter \ref{cap:4}. It is shown that, even when the system is subject to a steady homogeneous auxin concentration, the model robustly forms hotspots when the channels are open and the overall auxin level is sufficiently high. The results show two ways in which spots of active ROPs can be generated. The first factor is the auxin gradient, which still drives the stripe-to-spot evolution. The second factor is the structural coupling between cells: even without spatial variation of auxin, a sufficiently high auxin concentration leads the multi-cellular model to multiple spots.

In conclusion, this thesis provides a first attempt at modeling communication between root-hair cells in pattern formation. Future developments include studying the structural model and analyzing the characterization of the channels in more depth. Moreover, the iterative procedure implemented here could be applied to a multi-cellular system composed of more than four cells, and the computational performance could eventually be increased through parallel computing.
Another possible direction is to approach the problem through homogenization techniques. Such techniques may increase the efficiency of solving the model, particularly when the number of cells involved becomes large. Finally, one could adopt other auxin transport models, either less simplified or accounting for spatial dependence inside the cells.

\bibliographystyle{siamplain}
\bibliography{references}
\end{document}
% This is a section template for reflective journal \section{<<Task heading>>} \blindtext \subsection{<<Task sub-heading>>} \blindtext
% to choose your degree % please un-comment just one of the following \documentclass[bsc,frontabs,twoside,singlespacing,parskip,deptreport]{infthesis} % for BSc, BEng etc. % \documentclass[minf,frontabs,twoside,singlespacing,parskip,deptreport]{infthesis} % for MInf \begin{document} \title{This is the Project Title} \author{Your Name} % to choose your course % please un-comment just one of the following \course{Artificial Intelligence and Computer Science} %\course{Artificial Intelligence and Software Engineering} %\course{Artificial Intelligence and Mathematics} %\course{Artificial Intelligence and Psychology } %\course{Artificial Intelligence with Psychology } %\course{Linguistics and Artificial Intelligence} %\course{Computer Science} %\course{Software Engineering} %\course{Computer Science and Electronics} %\course{Electronics and Software Engineering} %\course{Computer Science and Management Science} %\course{Computer Science and Mathematics} %\course{Computer Science and Physics} %\course{Computer Science and Statistics} % to choose your report type % please un-comment just one of the following %\project{Undergraduate Dissertation} % CS&E, E&SE, AI&L %\project{Undergraduate Thesis} % AI%Psy \project{4th Year Project Report} \date{\today} \abstract{ This is an example of {\tt infthesis} style. The file {\tt skeleton.tex} generates this document and can be used to get a ``skeleton'' for your thesis. The abstract should summarise your report and fit in the space on the first page. % You may, of course, use any other software to write your report, as long as you follow the same style. That means: producing a title page as given here, and including a table of contents and bibliography. } \maketitle \section*{Acknowledgements} Acknowledgements go here. \tableofcontents %\pagenumbering{arabic} \chapter{Introduction} The document structure should include: \begin{itemize} \item The title page in the format used above. \item An optional acknowledgements page. \item The table of contents. \item The report text divided into chapters as appropriate. \item The bibliography. \end{itemize} Commands for generating the title page appear in the skeleton file and are self explanatory. The file also includes commands to choose your report type (project report, thesis or dissertation) and degree. These will be placed in the appropriate place in the title page. The default behaviour of the documentclass is to produce documents typeset in 12 point. Regardless of the formatting system you use, it is recommended that you submit your thesis printed (or copied) double sided. The report should be printed single-spaced. It should be 30 to 60 pages long, and preferably no shorter than 20 pages. Appendices are in addition to this and you should place detail here which may be too much or not strictly necessary when reading the relevant section. \section{Using Sections} Divide your chapters into sub-parts as appropriate. \section{Citations} Note that citations (like \cite{P1} or \cite{P2}) can be generated using {\tt BibTeX} or by using the {\tt thebibliography} environment. This makes sure that the table of contents includes an entry for the bibliography. Of course you may use any other method as well. \section{Options} There are various documentclass options, see the documentation. 
Here we are using an option ({\tt bsc} or {\tt minf}) to choose the degree type, plus:
\begin{itemize}
\item {\tt frontabs} (recommended) to put the abstract on the front page;
\item {\tt twoside} (recommended) to format for two-sided printing, with each chapter starting on a right-hand page;
\item {\tt singlespacing} (required) for single-spaced formatting; and
\item {\tt parskip} (a matter of taste) which alters the paragraph formatting so that paragraphs are separated by a vertical space, and there is no indentation at the start of each paragraph.
\end{itemize}
\chapter{The Real Thing}
Of course you may want to use several chapters and much more text than here.
% use the following and \cite{} as above if you use BibTeX
% otherwise generate bibitem entries
\bibliographystyle{plain}
\bibliography{mybibfile}
\end{document}
\lab{Numerical Methods for Initial Value Problems; Harmonic Oscillators}{Numerical Methods for Initial Value Problems; Harmonic Oscillators}
\label{lab:ivp}
\objective{Implement several basic numerical methods for initial value problems (IVPs) and use them to study harmonic oscillators.}
\section*{Methods for Initial Value Problems}
Consider the \textit{initial value problem} (IVP)
\begin{align}
\begin{split}
\x'(t) &= f(\x(t),t),\quad t_0 \leq t \leq t_f \\
\x(t_0) &= \x_0,
\end{split}
\label{ivp:generic}
\end{align}
where $f$ is a suitably continuous function. A solution of \eqref{ivp:generic} is a continuously differentiable, and possibly vector-valued, function $\x(t) = \left[x_1(t),\hdots,x_m(t)\right]\trp$, whose derivative $\x'(t)$ equals $f(\x(t),t)$ for all $t \in [t_0,t_f]$, and for which the \textit{initial value} $\x(t_0)$ equals $\x_0$. Under the right conditions, namely that $f$ is uniformly Lipschitz continuous in $\x(t)$ near $\x_0$ and continuous in $t$ near $t_0$, \eqref{ivp:generic} is well-known to have a unique solution.
%[reference Volume 4 here].
However, for many IVPs, it is difficult, if not impossible, to find a closed-form, analytic expression for $\x(t)$. In these cases, numerical methods can be used to instead \textit{approximate} $\x(t)$. As an example, consider the initial value problem
\begin{align}
\begin{split}
x'(t) &= \sin(x(t)), \\
x(0) &= x_0.
\end{split}\label{ivp:example}
\end{align}
The solution $x(t)$ is defined implicitly by
\[t = \ln \left|\frac{\csc(x_0) + \cot(x_0)}{\csc(x(t)) + \cot(x(t))} \right|.\]
This equation cannot be solved for $x(t)$, so it is difficult to understand what solutions to \eqref{ivp:example} look like. Since $\sin(n\pi)=0$, there are constant solutions $x_n(t) = n \pi,$ $n \in \mathbb{Z}$. Using a numerical IVP solver, solutions for different values of $x_0$ can be approximated. Figure \ref{ivp:int_curves} shows several of these approximate solutions, along with some of the constant, or \textit{equilibrium}, solutions.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{example2.pdf}
\caption{Several solutions of \eqref{ivp:example}, using \li{scipy.integrate.odeint}. }
\label{ivp:int_curves}
\end{figure}
\section*{Numerical Methods}
For the numerical methods that follow, the key idea is to seek an approximation for the values of $\x(t)$ only on a finite set of values $t_0 < t_1 < \hdots < t_{n-1} < t_n \ (= t_f)$. In other words, these methods try to solve for $\x_1,\x_2,\hdots,\x_n$ such that $\x_i \approx \x(t_i)$.
\subsection*{Euler's Method}
For simplicity, assume that each of the $n$ subintervals $[t_{i-1},t_i]$ has equal length $h = (t_f-t_0)/n$. The quantity $h$ is called the \textit{step size}. Assuming $\x(t)$ is twice-differentiable, for each component function $x_j(t)$ of $\x(t)$ and for each $i$, Taylor's Theorem says that
\begin{align*}
x_j(t_{i+1}) &= x_j(t_{i}) + h x'_j(t_i) + \frac{h^2}{2} x''_j(c)\text{ for some } c \in [t_i,t_{i+1}].
\end{align*}
The quantity $\frac{h^2}{2} x''_j(c)$ is negligible when $h$ is sufficiently small, and thus $x_j(t_{i+1}) \approx x_j(t_i) + h x'_j(t_i)$. Therefore, bringing the component functions of $\x(t)$ back together gives
\begin{align*}
\x(t_{i+1}) &\approx \x(t_i) + h \x'(t_i) ,\\
&\approx \x(t_{i}) + h f(\x(t_i),t_i).
\end{align*}
This approximation leads to the \textit{Euler method}: Starting with $\x_0 = \x(t_0)$, $\x_{i+1} = \x_i +hf(\x_i,t_i)$ for $i = 0, 1, \hdots, n-1$.
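In code, each Euler step is a single vectorized update. The following Python sketch is meant only to illustrate the iteration; the function name \li{euler} and its calling convention are illustrative choices, not a required interface for the problems below.
\begin{lstlisting}
import numpy as np

def euler(f, x0, t0, tf, n):
    """Approximate the solution of x' = f(x,t), x(t0) = x0 on [t0, tf]
    using n Euler steps of equal size h = (tf - t0)/n."""
    T = np.linspace(t0, tf, n+1)            # t_0, t_1, ..., t_n
    h = (tf - t0) / n                       # step size
    X = np.empty((n+1,) + np.shape(x0))     # X[i] approximates x(t_i)
    X[0] = x0
    for i in range(n):
        X[i+1] = X[i] + h*f(X[i], T[i])     # one Euler step
    return T, X
\end{lstlisting}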
Euler's method can be understood as starting with the point at $\x_0$, then calculating the derivative of $\x(t)$ at $t_0$ using $f(\x_0,t_0)$, followed by taking a step in the direction of the derivative scaled by $h$. Set that new point as $\x_1$ and continue.
It is important to consider how the choice of step size $h$ affects the accuracy of the approximation. Note that at each step of the algorithm, the \textit{local truncation error}, which comes from neglecting the $x''_j(c)$ term in the Taylor expansion, is proportional to $h^2$. The error $||\x(t_i)-\x_i||$ at the $i$th step comes from $i = \frac{t_i-t_0}{h}$ steps, which is proportional to $h^{-1}$, each contributing $h^2$ error. Thus the \textit{global truncation error} is proportional to $h$. Therefore, the Euler method is called a \textit{first-order method}, or a $\mathcal{O}(h)$ method. This means that as $h$ gets small, the approximation of $\x(t)$ improves in two ways. First, $\x(t)$ is approximated at more values of $t$ (more information about the solution), and second, the accuracy of the approximation at any $t_i$ is improved proportional to $h$ (better information about the solution).
\begin{comment}
Euler's method is a first order method, with error $\mathcal{O}(h^1)$.
% \begin{enumerate}
% \item Let $y_0 = y(a)$.
% \item For $i = 0, 1, \hdots, n-1$, let $y_{i+1} = y_i +hf(x_i,y_i)$.
% \end{enumerate}
A similar application of Taylor's theorem shows that
\begin{align*}
y(x_{i}) &= y(x_{i+1}) - h y'(x_{i+1}) + \frac{h^2}{2} y''(\xi_i) \text{ for some } \xi_i \in [x_i,x_{i+1}]; \\
\end{align*}
thus for small $h$
\begin{align*}
y(x_{i+1}) &\approx y(x_{i}) + h f(x_{i+1},y(x_{i+1})).
\end{align*}
This approximation leads to another first order method called the backwards Euler method: Letting $y_0 = y(a)$, for $i = 0, \hdots, n-1$ we solve $y_{i} = y_{i+1}-hf(x_{i+1},y_{i+1})$ for $y_{i+1}$.
Note that for both the Euler and backwards Euler methods, only $y_i, f,$ and other points in the interval $[x_i, x_{i+1}]$ are needed to find $y_{i+1}$. Because of this, they are called \textit{one-step methods}.
Euler's method is an \textit{explicit method}. The backwards Euler method is an \textit{implicit method} since an equation must be solved at each step to find $y_{i+1}$. Explicit and implicit methods each have advantages and disadvantages. While implicit methods require an equation to be solved at each time step, they often have better stability properties than explicit methods.
\end{comment}
\begin{figure}[H]
\centering
\includegraphics[width=150mm]{euler.pdf}
\caption{The solution of \eqref{ivp:prob1}, alongside several approximations using Euler's method.}
\label{ivp:euler}
\end{figure}
\begin{problem}
Write a function which implements Euler's method for an IVP of the form \eqref{ivp:generic}. Test your function on the IVP:
\begin{align}
\begin{split}
x'(t) &= x(t) - 2t + 4,\quad 0 \leq t \leq 2, \\
x(0) &= 0,
\end{split}\label{ivp:prob1}
\end{align}
where the analytic solution is $x(t) = -2+2t + 2e^t.$ Use the Euler method to numerically approximate the solution with step sizes $h = 0.2, 0.1$, and $0.05.$ Plot the true solution alongside the three approximations, and compare your results with Figure \ref{ivp:euler}.
\end{problem}
\subsection*{Midpoint Method}
The midpoint method is very similar to Euler's method. For small $h$, use the approximation
\begin{align*}
\x(t_{i+1}) &\approx \x(t_{i}) + h f(\x(t_{i})+\frac{h}{2} f(\x(t_i),t_i),t_{i}+\frac{h}{2}).
\end{align*}
In this approximation, first set $\hat \x_i = \x_i+\frac{h}{2}f(\x_i,t_i)$, which is an Euler method step of size $h/2$. Then evaluate $f(\hat \x_i,t_i+\frac{h}{2})$, which is a more accurate approximation to the derivative $\x'(t)$ in the interval $[t_i,t_{i+1}]$. Finally, a step is taken in that direction, scaled by $h$.
It can be shown that the local truncation error for the midpoint method is $\mathcal{O}(h^3)$, giving a global truncation error of $\mathcal{O}(h^2)$. This is a significant improvement over the Euler method. However, it comes at the cost of additional evaluations of $f$ and a handful of extra floating point operations at each step. This tradeoff will be considered later in the lab.
\subsection*{Runge-Kutta Methods}
The Euler method and the midpoint method belong to a family called \textit{Runge-Kutta methods}. There are many Runge-Kutta methods with varying orders of accuracy. Methods of order four or higher are most commonly used. A fourth-order Runge-Kutta method (RK4) iterates as follows:
\begin{align*}
\begin{split}
K_1 &= f(\x_i,t_i), \\
K_2 &= f(\x_i + \frac{h}{2} K_1,t_i + \frac{h}{2}),\\
K_3 &= f(\x_i + \frac{h}{2} K_2,t_i + \frac{h}{2}),\\
K_4 &= f(\x_i + h K_3,t_{i+1}),\\
\x_{i+1} &= \x_i + \frac{h}{6}(K_1 + 2K_2 + 2K_3 + K_4).
\end{split}
\end{align*}
Runge-Kutta methods can be understood as a generalization of quadrature methods for approximating integrals, where the integrand is evaluated at specific points, and then the resulting values are combined in a weighted sum. For example, consider a differential equation
$$x'(t) = f(t).$$
Since the function $f$ has no $x$ dependence, this is a simple integration problem. In this case, Euler's method corresponds to the left-hand rule, the midpoint method becomes the midpoint rule, and RK4 reduces to Simpson's rule.
\section*{Advantages of Higher-Order Methods}
It can be useful to visualize the order of accuracy of a numerical method. A method of order $p$ has relative error of the form
$$E(h) = Ch^p.$$
Taking the logarithm of both sides yields
$$\log(E(h)) = p \cdot \log(h) + \log(C).$$
Therefore, on a log-log plot against $h$, $E(h)$ is a line with slope $p$ and intercept $\log(C)$.
\begin{problem}
Write functions that implement the midpoint and fourth-order Runge-Kutta methods. Use the Euler, Midpoint, and RK4 methods to approximate the value of the solution for the IVP \eqref{ivp:prob1} from Problem 1 for step sizes of $h = 0.2,$ $0.1,$ $0.05,$ $0.025,$ and $0.0125.$ Plot the following graphs:
\begin{itemize}
\item The true solution alongside the approximation obtained from each method when $h=0.2$.
\item A log-log plot (use \li{plt.loglog}) of the relative error $|x(2)-x_n|/{|x(2)|}$ as a function of $h$ for each approximation.
\end{itemize}
Compare your second plot with Figure \ref{ivp:relative_error}.
\end{problem}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{prob2.pdf}
\caption{Loglog plot of the relative error in approximating $x(2)$, using step sizes $h = 0.2,$ $0.1,$ $0.05,$ $0.025,$ and $0.0125$. The slope of each line demonstrates the first, second, and fourth order convergence of the Euler, Midpoint, and RK4 methods, respectively.}
\label{ivp:relative_error}
\end{figure}
The Euler, midpoint, and RK4 methods help illustrate the potential trade-off between order of accuracy and computational expense. To increase the order of accuracy, more evaluations of $f$ must be performed at each step.
It is possible that this trade-off could make higher-order methods undesirable, as (in theory) one could use a lower-order method with a smaller step size $h$. However, this is not generally the case. Assuming efficiency is measured in terms of the number of $f$-evaluations required to reach a certain threshold of accuracy, higher-order methods turn out to be much more efficient. For example, consider the IVP
\begin{align}
\begin{split}
x'(t) &= x(t) \cos(t), \quad t \in [0,8],\\
x(0) &= 1.
\end{split}
\label{ivp:efficiency_problem}
\end{align}
Figure \ref{ivp:efficiency_figure} illustrates the comparative efficiency of the Euler, Midpoint, and RK4 methods applied to \eqref{ivp:efficiency_problem}. The higher-order RK4 method requires fewer $f$-evaluations to reach the same level of relative error as the lower-order methods. As $h$ becomes small, which corresponds to an increasing number of function evaluations, each method reaches a point where the relative error $|x(8)-x_n|/{|x(8)|}$ stops improving. This occurs when $h$ is so small that floating point round-off error overwhelms local truncation error. Notice that the higher-order methods are able to reach a better level of relative error before this phenomenon occurs.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Efficiency.pdf}
\caption{The relative error in computing the solution of \eqref{ivp:efficiency_problem} at $t = 8$ versus the number of times the right-hand side of \eqref{ivp:efficiency_problem} must be evaluated. }
\label{ivp:efficiency_figure}
\end{figure}
\begin{comment}
Let $t^*$ be an approximation of some value $t$. The relative error of the approximation is
\[ \frac{|t^*-t|}{|t|}. \]
Note that the relative error is simply the absolute error $|t^*-t|$ normalized by the size of $t$. A method with order $p$ has error of the form
\[E(h) = C h^p. \]
This means that the graph of $\log (E)$ versus $\log(h)$ has slope $p$. The relative error of a numerical method can be approximated and graphed to verify that $p$th order convergence is occurring. For example, consider the IVP
\begin{align}
\begin{split}
y' &= y - 2x + 4,\quad 0 \leq x \leq 2, \\
y(0) &= 0.
\end{split}
\label{ivp:prob2}
\end{align}
The following code solves the initial value problem on several grids using the Euler method, approximates the relative error in computing $y(2)$ and creates a plot (see Figure \ref{ivp:relative_error}).
\begin{lstlisting}
import matplotlib.pyplot as plt
a, b, ya = 0., 2., 0.
def ode_f(x,y): return np.array([y - 2*x + 4.])

best_grid = 320 # number of subintervals in most refined grid
h = 2./best_grid
X, Y, h, n = initialize_all(a, b, ya, h) # Requires an implementation of the euler method
best_val = euler(ode_f, X, Y, h, n)[-1]
smaller_grids = [10, 20, 40, 80] # number of subintervals in smaller grids
h = [2./N for N in smaller_grids]
Euler_sol = [euler(ode_f, initialize_all(a, b, ya, h[i])[0], initialize_all(a, b, ya, h[i])[1], h[i], N+1)[-1] for i, N in enumerate(smaller_grids)]
Euler_error = [abs((val - best_val)/best_val) for val in Euler_sol]
plt.loglog(h, Euler_error, '-b', label="Euler method", linewidth=2.)
plt.show()
\end{lstlisting}
\end{comment}
\section*{Harmonic Oscillators and Resonance}
Harmonic oscillators are common in classical mechanics. A few examples include the pendulum (with small displacement), spring-mass systems, and the flow of electric current through various types of circuits.
A harmonic oscillator $y(t)$\footnote{It is customary to write $y$ instead of $y(t)$ when it is unambiguous that $y$ denotes the dependent variable.} is a solution to an initial value problem of the form
\begin{align*}
&{}my'' + \gamma y' + ky = f(t) ,\\
&{}y(0) = y_0,\quad y'(0) = y'_0.
\end{align*}
Here, $m$ represents the mass on the end of a spring, $\gamma$ represents the effect of damping on the motion, $k$ is the spring constant, and $f(t)$ is the external force applied.
\begin{comment}
We will describe the construction of this mathematical model in the context of a spring-mass system. Suppose an object with mass $m$ is placed at the end of a horizontal spring. The natural position of the object is called the \textit{equilibrium position} for the system. If the object is displaced from its equilibrium position and given an initial velocity, it will act like a harmonic oscillator.
The principal property of a harmonic oscillator $y(t)$ is that once $y$ leaves its equilibrium value $y = 0$, it experiences a restoring force $F_r = -ky.$ This force pushes $y$ back towards its equilibrium. Hooke's law says that this holds true for a spring-mass system if the displacement $y$ is small.
Often there is an additional damping force $F_d$, often due to some type of friction. This force is usually proportional to the $y'$ (the \emph{velocity}), is always in the opposite direction of $y'$, and represents energy leaving the system. (You can think of it as drag.) Thus we have $F_d = -\gamma y', $ where $ \gamma \geq 0$ is constant.
We may also need to consider an additional external force $f(t)$, or a driving force, that is interacting with our spring-mass system.
By using Newton's law we obtain
\begin{align*}
ma &= F = F_r + F_d + f(t),\\
my'' &= -ky -\gamma y' + f(t).
\end{align*}
\end{comment}
\section*{Simple harmonic oscillators}
A simple harmonic oscillator is a harmonic oscillator that is not damped, $\gamma =0$, and is free, $f = 0$, rather than forced, $f \not = 0$. A simple harmonic oscillator can be described by the IVP
\begin{align*}
&{}my'' + ky = 0,\\
&{}y(0) = y_0,\quad y'(0) = y_0'.
\end{align*}
The solution of this IVP is $y = c_1\cos (\omega_0 t) + c_2 \sin (\omega_0 t)$, where $\omega_0 = \sqrt{k/m}$ is the natural frequency of the oscillator and $c_1$ and $c_2$ are determined by applying the initial conditions.
To solve this IVP using a Runge-Kutta method, it must be written in the form
\[\x'(t) = f(\x(t),t). \]
This can be done by setting $x_1 = y \ \text{and} \ x_2 = y'$. Then we have
\[ \x'= \left[\begin{array}{c}x_1 \\x_2\end{array}\right]' = \left[\begin{array}{c}x_2 \\\frac{-k}{m}x_1\end{array}\right]\]
Therefore
$$f(\x(t),t) = \left[\begin{array}{c}x_2 \\\frac{-k}{m}x_1\end{array}\right].$$
\begin{problem}
Use the RK4 method to solve the simple harmonic oscillator satisfying
\begin{align}
\begin{split}
&{}my'' + ky = 0,\quad 0 \leq t \leq 20, \\
&{}y(0) = 2, \quad y'(0) = -1,
\end{split}
\label{ivp:simple_oscillator}
\end{align}
for $m = 1$ and $k =1$. Plot your numerical approximation of $y(t)$. Compare this with the numerical approximation when $m = 3$ and $k =1$. Consider: Why does the difference in solutions make sense physically?
\end{problem}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{simple_oscillator.pdf}
\caption{Solutions of \eqref{ivp:simple_oscillator} for several values of $m$.}
\label{ivp:simple_oscillator_figure}
\end{figure}
\section*{Damped free harmonic oscillators}
A damped free harmonic oscillator $y(t)$ satisfies the IVP
\begin{align*}
&{}my'' + \gamma y' + ky = 0 ,\\
&{}y(0) = y_0,\quad y'(0) = y'_0.
\end{align*}
\begin{comment}
For fixed values of $m$ and $k$, it is interesting to study the effect of the damping coefficient $\gamma$.
\end{comment}
The roots of the characteristic equation are
\[r_1,r_2 = \frac{-\gamma \pm \sqrt{\gamma^2 -4km}}{2m} .\]
Note that the real parts of $r_1$ and $r_2$ are always negative, and so any solution $y(t)$ will decay over time due to a dissipation of the system energy. There are several cases to consider for the general solution of this equation:
\begin{enumerate}
\item If $\gamma^2 > 4km$, then the general solution is $y(t) = c_1 e^{r_1t} + c_2e^{r_2t}$. Here the system is said to be \textit{overdamped}. Notice from the general solution that there is no oscillation in this case.
\item If $\gamma^2 = 4km$, then the general solution is $y(t) = c_1 e^{-\gamma t/2m} + c_2 te^{-\gamma t/2m}$. Here the system is said to be \textit{critically damped}.
\item If $\gamma^2 < 4km$, then the general solution is
\begin{align*}
y(t) &= e^{-\gamma t/2m} \left[c_1\cos(\mu t) + c_2 \sin (\mu t)\right],\\
&= R e^{-\gamma t/2m} \sin (\mu t + \delta),
\end{align*}
where $R$ and $\delta$ are fixed, and $\mu = \sqrt{4km-\gamma^2}/2m.$ This system does oscillate.
\end{enumerate}
\begin{problem}
Use the RK4 method to solve for the damped free harmonic oscillator satisfying
\begin{align*}
&{}y'' +\gamma y'+ y = 0, \quad 0 \leq t \leq 20,\\
&{}y(0) = 1, \quad y'(0) = -1.
\end{align*}
For $\gamma = 1/2$ and $\gamma = 1$, simultaneously plot your numerical approximations of $y$.
\end{problem}
\section*{Forced harmonic oscillators without damping}
Consider the system described by the differential equation
\begin{align}
my''(t) + ky(t) &= F(t). \label{Forced_harm_osc}
\end{align}
In many instances, the external force $F(t)$ is periodic, so assume that $F(t) = F_0 \cos(\omega t)$. If $\omega_0 = \sqrt{k/m} \not = \omega,$ then the general solution of \eqref{Forced_harm_osc} is given by
\[y(t) = c_1 \cos (\omega_0 t) + c_2\sin (\omega_0 t) + \frac{F_0}{m(\omega_0^2 - \omega^2)} \cos (\omega t).\]
If $\omega_0 = \omega$, then the general solution is
\[y(t) = c_1 \cos (\omega_0 t) + c_2\sin (\omega_0 t) + \frac{F_0}{2m\omega_0} t \sin (\omega_0 t).\]
When $\omega_0 = \omega$, the solution contains a term that grows arbitrarily large as $t \to \infty$. If we included damping, then the solution would be bounded but large for small $\gamma$ and $\omega$ close to $\omega_0$.
Consider a physical spring-mass system. Equation \eqref{Forced_harm_osc} holds only for small oscillations; this is where Hooke's law is applicable. However, the fact that the equation predicts large oscillations suggests the spring-mass system could fall apart as a result of the external force. This mechanical resonance has been known to cause failure of bridges, buildings, and airplanes.
\begin{problem}
Use the RK4 method to solve the damped and forced harmonic oscillator satisfying
\begin{align}
\begin{split}
&{}2y'' + \gamma y' + 2y = 2 \cos (\omega t), \quad 0 \leq t \leq 40,\\
&{}y(0) = 2, \quad y'(0) = -1.
\end{split}
\label{ivp:damped_forced_oscillator}
\end{align}
For the following values of $\gamma$ and $\omega,$ plot your numerical approximations of $y(t)$: $(\gamma, \omega) = (0.5, 1.5),$ $(0.1, 1.1),$ and $(0, 1)$. Compare your results with Figure \ref{ivp:damped_forced_oscillator_figure}.
\end{problem}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{damped_forced_oscillator.pdf}
\caption{Solutions of \eqref{ivp:damped_forced_oscillator} for several values of $\omega$ and $\gamma$.}
\label{ivp:damped_forced_oscillator_figure}
\end{figure}
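The oscillator problems above all follow the same pattern: rewrite the equation as a first-order system $\x' = f(\x,t)$ with $x_1 = y$ and $x_2 = y'$, then hand $f$ to an RK4 integrator. The Python sketch below shows one possible (not required) way to set this up for \eqref{ivp:damped_forced_oscillator}; the names \li{rk4} and \li{forced_oscillator} are illustrative choices.
\begin{lstlisting}
import numpy as np

def rk4(f, x0, t0, tf, n):
    """Fourth-order Runge-Kutta approximation of x' = f(x,t), x(t0) = x0."""
    T = np.linspace(t0, tf, n+1)
    h = (tf - t0) / n
    X = np.empty((n+1,) + np.shape(x0))
    X[0] = x0
    for i in range(n):
        K1 = f(X[i], T[i])
        K2 = f(X[i] + h/2.*K1, T[i] + h/2.)
        K3 = f(X[i] + h/2.*K2, T[i] + h/2.)
        K4 = f(X[i] + h*K3, T[i+1])
        X[i+1] = X[i] + h/6.*(K1 + 2*K2 + 2*K3 + K4)
    return T, X

# 2y'' + gamma*y' + 2y = 2cos(omega*t), written as a first-order system
# with x[0] = y and x[1] = y'.
def forced_oscillator(x, t, gamma=0.5, omega=1.5):
    return np.array([x[1], (2*np.cos(omega*t) - gamma*x[1] - 2*x[0])/2.])

T, X = rk4(forced_oscillator, np.array([2., -1.]), 0., 40., 800)
# X[:,0] approximates y(t) on [0, 40] for (gamma, omega) = (0.5, 1.5).
\end{lstlisting}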
\documentclass[11pt]{article}
\usepackage{fullpage}
\usepackage{setspace}
% load matlab package with ``framed'' and ``numbered'' option.
%\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\usepackage{listings}
\lstset{ breakatwhitespace=true, breaklines=true, frame=single, keepspaces=true, numbers=left, numbersep=5pt, stepnumber=5, tabsize=3, language=MATLAB, basicstyle=\small }
\usepackage{subfigure}
\usepackage{wrapfig}
\usepackage{graphicx}
\usepackage{cite}
%\usepackage{natbib}
\usepackage{booktabs}
\usepackage{fancyhdr}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb,amsmath,amsthm}
\usepackage{bm}
\usepackage[english]{babel}
\usepackage{multirow}
\usepackage{caption}
\usepackage{enumitem}
\usepackage{placeins}
\usepackage{adjustbox}
\usepackage[title]{appendix}
\newcommand{\vb}{\boldsymbol}
\newcommand{\vbh}[1]{\hat{\boldsymbol{#1}}}
\newcommand{\vbb}[1]{\bar{\boldsymbol{#1}}}
\newcommand{\vbt}[1]{\tilde{\boldsymbol{#1}}}
\newcommand{\vbs}[1]{{\boldsymbol{#1}}^*}
\newcommand{\vbd}[1]{\dot{{\boldsymbol{#1}}}}
\newcommand{\by}{\times}
\newcommand{\tr}{{\rm tr}}
\newcommand{\sfrac}[2]{\textstyle \frac{#1}{#2}}
\newcommand{\ba}{\begin{array}}
\newcommand{\ea}{\end{array}}
\newcommand{\sinc}{{\rm sinc}}
\renewcommand{\equiv}{\triangleq}
\newcommand{\cnr}{C/N_0}
\newcommand{\sgn}{\rm sgn}
\renewcommand{\Re}{\mathbb{R}}
\renewcommand{\Im}{\mathbb{I}}
\newcommand{\E}[1]{\mathbb{E}\left[ #1 \right]}
\title{ASE 372N \\ \huge \bfseries Extended Global Positioning System}
\author{\Large Matthew Cullen \textsc{Self}}
\date{\today}
%\title{\large ASE 372N \\ \Huge \bfseries Extended Global Positioning System}
%\author{\Large Matthew Cullen \textsc{Self}}
%\date{\vfill \hrule \vspace{1em} December 9, 2016 \pagebreak}
\begin{document}
%\onehalfspace
\maketitle
\hrule
\begin{abstract}
The feasibility of using GPS signals at altitudes high above the constellation has been evaluated. A model was synthesized and evaluated to produce carrier-to-noise ratios of observations at various places in orbit around Earth, and at various times. It has been found that, given modern tracking and acquisition systems, a receiver in High Earth Orbit will be able to determine its location based solely on GPS signals if the geometry is favorable (more likely than not). The impact of antenna and receiver design on the observability of a position has been discussed (more gain results in a higher likelihood of observability). Antenna gain patterns have been observed in carrier-to-noise maps, validating the implemented models. Confirmation of prior experiments further indicates the model is correct.
\end{abstract}
\section{Introduction}
Global Navigation Satellite System (GNSS) receivers work by evaluating the shifts and delays in signals that have been very carefully beamed from known locations to an unknown location. This commonly takes place as mobile devices discover their locations in reference to the known positions of the GPS satellites. Usually these devices are at or near the surface of the Earth - the use case for which GPS has been designed - thus GPS Satellite Vehicles (SVs) direct their information-carrying signals towards the Earth. These signals are emitted by complex antenna systems, but the net result is effectively a hemisphere of radiation around the pointing direction (towards the center of the Earth).
Because the GPS SVs are high in space ($\approx 20,200$ km altitude), the radiated signal does not fall only on Earth; some of the signal misses the Earth and is broadcast into an area surrounding the Earth. This overlap in signal coverage forms the basis of the Space Service Volume (SSV), a region in space where GPS signals are available. The basis of this report is to examine the extent of the SSV and the viability of using GNSS signals in this region.
\subsection{Motivation}
Traditional methods of locating satellites rely on sporadic radar measurements to form a position and velocity estimate for a given SV. As more and more satellites are launched, the availability of radar measurements continues to decline (unless an SV is very important). To combat this, many satellites now have a GNSS receiver to provide navigational solutions formed not from ground-based observations, but instead formed continuously from GNSS signals. This trend has been accelerated by the increasing quality and availability of GNSS receivers, making GNSS tracking of satellites cheaper and more reliable.
Furthermore, it has been proven that GPS-based navigation solutions are possible even from ``above the constellation,'' or at altitudes greater than the orbit of GPS satellites. Starting in 1990, a classified geosynchronous satellite ($\approx 36,000$ km altitude) used GPS signals for at least part of its navigation solution \cite{class}. The Radio Amateur Satellite Corporation (AMSAT) OSCAR-40 satellite was able to sporadically track satellites at an altitude of 59,000 km \cite{ao40}. NASA's Magnetospheric Multi-Scale (MMS) mission, with a recently developed high gain receiver, was able to continuously track at least four satellites (the number required for a navigation solution) at altitudes of 70,000 km \cite{ssv}. These results form the baseline for additional studies into the feasibility of GPS navigation in High Earth Orbit.
\subsection{Theory}
The governing equations that determine whether or not a signal can be tracked are presented below in Equations \ref{eq:pow} and \ref{eq:cn0} (from \cite{ao40}).
\begin{equation}
\hat{P_r} = P_{out} + G_t + L_d + G_r
\label{eq:pow}
\end{equation}
\begin{equation}
\widehat{C/N_0} = \hat{P_r} + N_T + 228.6 + L_{sys}
\label{eq:cn0}
\end{equation}
The estimated received power, $\hat{P_r}$, is a function of the emitted power, $P_{out}$, the transmitter gain, $G_t$, the receiver gain, $G_r$, and the free space attenuation $L_d < 0$. The attenuation follows the inverse-square law such that the decibel reduction is equal to
\[ 20\log(\frac{\lambda}{4\pi \rho}). \]
Once the received power has been estimated, an estimate of the carrier to noise ratio ($\cnr$) can be formed based on the thermal noise, $N_T < 0$, and the system losses due to front end noise and conversion losses, $L_{sys} < 0$. The thermal noise can be modeled as
\[ N_T = -10 \log(T_{sys})\]
where $T_{sys}$ is the equivalent system noise temperature. The resulting $\cnr$ determines whether a satellite can be tracked. If the $\cnr$ is above a threshold particular to the receiver, then the signal is deemed ``observable.''
Many of the parameters are dependent on the receiver (e.g. receiver gain, system losses, thermal noise, and $\cnr$ threshold). These values can be systematically determined, and are particular to the antenna being used, the receiver hardware and software, and other implementation details. The parameter of interest for this report is the transmitter gain, $G_t$.
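To make the link budget concrete before turning to the transmitter gain model, the short MATLAB fragment below evaluates Equations \ref{eq:pow} and \ref{eq:cn0} for one notional geometry. Every numerical value in it is a placeholder chosen only for illustration, not a parameter taken from the experiments in this report.
\begin{lstlisting}
% Illustrative link budget evaluation (all values are placeholders)
Pout = 14.3;                      % emitted power [dBW] (assumed)
Gt   = 13.0;                      % transmitter directive gain [dB] (assumed)
Gr   = 0.0;                       % receiver gain [dB], a "neutral" receiver
lam  = 299792458/1575.42e6;       % L1 carrier wavelength [m]
rho  = 25000e3;                   % transmitter-receiver range [m] (assumed)
Ld   = 20*log10(lam/(4*pi*rho));  % free space attenuation, < 0 [dB]
Pr   = Pout + Gt + Ld + Gr;       % estimated received power
Tsys = 290;                       % equivalent noise temperature [K] (assumed)
Lsys = -2.0;                      % system losses [dB] (assumed)
NT   = -10*log10(Tsys);           % thermal noise term
CN0  = Pr + NT + 228.6 + Lsys;    % estimated carrier-to-noise ratio [dB-Hz]
\end{lstlisting}
The resulting estimated $\cnr$ would then be compared against the receiver's tracking threshold to decide whether the signal is observable.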
Lockheed Martin has released the antenna gain patterns of many GPS satellites, making it possible to model the antenna gain accurately as a function of transmitter and receiver locations \cite{lockheed}. Crucially, the gain is dependent on two angles: the offbore angle, $\theta$, and the polar angle, $\varphi$. These two angles can be calculated based on the attitude of the transmitting satellite and the relative position vector. Then those angles can be used to determine the resulting directive gain of the transmission.
Figure \ref{fig:examplegain} shows an example of the gain pattern. At low values of $\theta$, the gain is extremely high - this corresponds to the portion of the satellite's signal that is broadcast towards the Earth. There are additional ``sidelobes'' that extend outside this primary range, providing additional gain at some angles of $\varphi$.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{../Images/examplegain.png}
\caption{Typical L1 Gain Pattern}
\label{fig:examplegain}
\end{figure}
To calculate $\theta$ and $\varphi$, a yaw-steering attitude model was assumed \cite{orient}. This means that the satellite maintains a constant orientation towards the Sun by varying a yaw angle around the primary pointing axis. This yaw angle, $\psi$, can be found in terms of $\beta$ and $\mu$, the angles between the Sun and the orbital plane and between the satellite and midnight, respectively (see Fig. \ref{fig:geom}).
\begin{equation}
\psi = \text{atan2}(-\tan(\beta),\sin(\mu))
\label{eq:yaw}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{../Images/geom.png}
\caption{Sun - Earth - Satellite Geometry}
\label{fig:geom}
\end{figure}
This yaw angle, combined with a local orbital frame defined by the radial, along-track, and cross-track directions, makes it possible to relate the satellite position in ECEF to the Lockheed Martin prescribed body-fixed axes.
\FloatBarrier
\subsection{Implementation}
Taken together, the elements of the previous section describe a model for transmission and reception of an SV's signal. This was coded in MATLAB as a series of functions that can be called from a master script. Core to the process is the \verb|findRotationAngle| function, which takes a structure containing a satellite's position, velocity, the relative position vector, and the time, and returns $\varphi$ and $\theta$. This is completed by first finding the position of the Sun in the ECEF frame (nontrivial, relies on \verb|findSun| \cite{astro}), then relating that to the yaw angle, and finally using the yaw angle and position to find the direction cosine matrices necessary to find the body-fixed axes in the ECEF frame. A lookup function was implemented as \verb|SatDirectivityGain|, which takes $\varphi$ and $\theta$ as inputs, then performs the necessary bilinear interpolation. These were synthesized together in several experiments that tested the effects of various parameters on the number of tracked satellites.
\section{Results and Analysis}
\subsection{Single Satellite}
A single transmitting satellite was modeled. At a given instant in GPS time, the position of the satellite was determined, and then the effects of changing several parameters were evaluated. The angles $\varphi$ and $\theta$ were calculated as a function of height above the North Pole, a region which was determined to have moderate visibility of the satellite. Figure \ref{fig:phiandtheta} shows the two plots. Note how the offbore angle $\theta$ is negative and drops below $-90\deg$.
The drop below $-90\deg$ reflects that, at a certain altitude, the receiver is now behind the transmitter, and is unable to acquire any signals.
\begin{figure}[h]
\centering
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\textwidth]{../images/phivsheight.png}
\caption{Phi}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\textwidth]{../images/thetavsheight.png}
\caption{Theta}
\end{minipage}
\caption{Angles vs Altitude}
\label{fig:phiandtheta}
\end{figure}
Figure \ref{fig:cnonorthpole} shows the modeled unbiased carrier to noise ratio. The unbiased carrier to noise ratio is found by setting the net gain and attenuation of the receiver and antenna to 0 dB, modeling a ``neutral'' receiver. The biased, or actual, received $\cnr$ can be found by adding the appropriate net gain back into the unbiased $\cnr$. At precisely the same altitude that $\theta = -90\deg$, the $\cnr$ drops to 0, indicating that the signal isn't observable under any circumstances. The $\cnr$ plot also shows the presence of two side lobes extending beyond the main center lobe.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{../Images/cn0vsheight.png}
\caption{$\cnr$ vs Altitude}
\label{fig:cnonorthpole}
\end{figure}
To better visualize the space in which a signal is observable, a mesh of locations was generated in the equatorial plane, extending to $\pm$100,000 km in both the ECEF x and ECEF y directions. The unbiased $\cnr$ was then plotted at each individual location, yielding Figure \ref{fig:cn0actual}. The plot has two main leaves, separated by the Earth's umbra, and bounded by the plane of the transmitting antenna. In each leaf, the side lobe gain patterns can be seen as certain lines of constant $\theta$ tend to have exceptionally high gains.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{../Images/cn0mapactual.png}
\caption{$\cnr$ vs Position (Real Position)}
\label{fig:cn0actual}
\end{figure}
To get an idealized plot of the side lobes, the transmitting satellite was modeled as being on the ECEF x-axis instead of in its proper 3-dimensional location. This had the effect of constraining $\varphi$ and $\theta$ to vary along the x-y plane in which the receiver is modeled. Figure \ref{fig:cn0xaxis} clearly shows the antenna gain pattern in the contour map on the floor of the plot. The highest $\cnr$ is closest to the antenna boresight, right outside the Earth's shadow. In actual mission planning, care must be taken not to optimize $\cnr$ so much that the received signal actually passes through the Earth's atmosphere.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{../Images/cn0mapxaxis.png}
\caption{$\cnr$ vs Position (Modeled Position)}
\label{fig:cn0xaxis}
\end{figure}
\FloatBarrier
\subsection{Multiple Satellites}
Using the same mesh of possible receiver locations, the $\hat{P_r}$ and unbiased $\cnr$ for the L1 signal from all the GPS Block IIR and IIR-M satellites were calculated. Because extracting each individual gain matrix involves opening and processing, by hand, an Excel spreadsheet contained in a PowerPoint presentation, the same gain characterization was assumed to be roughly valid for all satellites. By setting the tracking threshold to $25$ dB-Hz (consistent with NASA's latest GNSS receiver \cite{ssv}), the number of tracked satellites was mapped to the x-y plane in Figure \ref{fig:many}. The resulting map is slightly misleading, as it includes results from inside the Earth reported as valid.
The map is more than likely flawed in some way, as the GPS constellation is generally symmetric, which should result in a largely symmetric map as well. Instead, this figure indicates that on one side of the Earth, 18 satellites are visible, and zero satellites are visible on the other.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{../Images/many.png}
\caption{Tracked Satellites vs Position}
\label{fig:many}
\end{figure}
\FloatBarrier
\subsection{AMSAT OSCAR-40 Replication}
To coarsely verify that the results are sensible, an experiment was set up to replicate the results of AMSAT OSCAR-40's mission. The receiver parameters were set equal to those provided by Moreau et al., and the receiver was placed at an altitude equal to the apogee of the actual AO40 satellite. The number of tracked satellites was calculated every 15 minutes during a simulated 48 hour interval, and the resulting quantity vs time plot is presented below (see Fig. \ref{fig:ao40copy}). Note that, for the majority of the time, no satellites are tracked, with periodic bursts of one or two tracked satellites. This is consistent with the reported findings in \cite{ao40}.
\begin{figure}[h]
\centering
\includegraphics[width=.66\textwidth]{../Images/ao40actual.png}
\caption{Tracked Satellites vs Time}
\label{fig:ao40copy}
\end{figure}
In order to reflect a more modern version of AO40, the tracking threshold was decreased from $40$ dB-Hz to $25$ dB-Hz, while the net gains were assumed to remain the same. The changes reflect an improvement in tracking systems and a miniaturization of early 2000s components. Figure \ref{fig:ao40new} shows the resulting number of tracked satellites over the same time period. The drastic increase in the number of tracked satellites shows that simply increasing the quality of the tracking and acquisition system can make the difference between being able to compute a navigational solution and being forced to rely on other, external tracking systems. The mean number of satellites tracked is 2.5, compared to the old version's 0.25.
\begin{figure}[h]
\centering
\includegraphics[width=.66\textwidth]{../Images/ao40better.png}
\caption{Tracked Satellites vs Time (Improved System)}
\label{fig:ao40new}
\end{figure}
\FloatBarrier
\section{Conclusions}
GPS provides a system for determining navigational solutions not just on or near Earth's surface, but also at altitudes high above Earth's surface. Modern tracking and acquisition systems allow relatively weak signals to be captured, and it is possible to model the received power of GPS signals at a given location in space. The model relies on GPS gain patterns and attitude models, and varies in time and with receiver parameters. Increasing the receiver sensitivity greatly increases the number of satellites able to be tracked, which intuitively makes sense. A radio telescope pointed at Earth from Pluto would probably be able to determine a navigational solution; it's just a matter of high enough gain and low enough noise. More so than providing an upper limit on how high GPS works, this report has illuminated the design considerations that must be accounted for when developing a high-altitude GPS receiver.
More work could be done to implement more satellites' gain patterns, rather than just relying on one as a proxy for all of them. Additionally, Lockheed has provided the gain patterns for the signal broadcast at L2 - these could be included to augment the number of observable satellites.
In order to actually model what a given satellite would receive, a better model of the receiver is needed. Manufacturer data sheets do not provide a map from reception angles to gain: this would need to be empirically determined. Additionally, testing would reveal better values for other RX parameters.
Finally, there is the potential to somehow incorporate the received signal strength into the measurement model of the navigational solution. This report has defined a model for the $\cnr$ as a function of $r$. By comparing the estimated $\cnr$ to the observed $\cnr$, and using the partial derivatives of that model, the $\cnr$ could be folded into the general navigational solution. For example, if a satellite receives signals from GPS satellites $a$ and $b$ and the receiver knows $r_a$ and $r_b$, then that immediately constrains the possible solution space to those areas in front of satellites $a$ and $b$.
\pagebreak
\begin{appendices}
\section{Selected Code}
\lstinputlisting[caption={Single Satellite},label={lst:singlesat}]{../tests/singlesat.m}
%\pagebreak
\lstinputlisting[caption={Many Satellites},label={lst:many}]{../tests/many.m}
\lstinputlisting[caption={AO-40 Validation},label={lst:ao40}]{../tests/repeatAO40.m}
\lstinputlisting[caption={Determine Angles},label={lst:findrotationangle}]{../code/findRotationAngle.m}
\lstinputlisting[caption={Find Sun},label={lst:findsun}]{../code/findSun.m}
\lstinputlisting[caption={Gain Lookup},label={lst:satdirectivitygain}]{../code/SatDirectivityGain.m}
%\pagebreak
\bibliographystyle{ieeetr}
\bibliography{pangea}
\end{appendices}
\end{document}
\chapter{Void Beyond ??? AF} Leave. \newline
% $Id$ %\subsubsection{Restrictions and Future Work} \begin{enumerate} \item {\bf Limit on rank.} The values for type, kind and rank passed into the ArraySpec class are subject to the same limitations as Arrays. The maximum array rank is 7, which is the highest rank supported by Fortran. \end{enumerate}
\filetitle{resample}{Resample from a VAR object}{VAR/resample} \paragraph{Syntax}\label{syntax} \begin{verbatim} Outp = resample(V,Inp,Range,NDraw,...) Outp = resample(V,[],Range,NDraw,...) \end{verbatim} \paragraph{Input arguments}\label{input-arguments} \begin{itemize} \item \texttt{V} {[} VAR {]} - VAR object to resample from. \item \texttt{Inp} {[} struct \textbar{} tseries {]} - Input database or tseries used in bootstrap; not needed when \texttt{'method=' 'montecarlo'}. \item \texttt{Range} {[} numeric {]} - Range for which data will be returned. \end{itemize} \paragraph{Output arguments}\label{output-arguments} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \texttt{Outp} {[} struct \textbar{} tseries {]} - Resampled output database or tseries. \end{itemize} \paragraph{Options}\label{options} \begin{itemize} \item \texttt{'deviation='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Do not include the constant term in simulations. \item \texttt{'group='} {[} numeric \textbar{} \emph{\texttt{NaN}} {]} - Choose group whose parameters will be used in resampling; required in VAR objects with multiple groups when \texttt{'deviation=' false}. \item \texttt{'method='} {[} \texttt{'bootstrap'} \textbar{} \emph{\texttt{'montecarlo'}} \textbar{} function\_handle {]} - Bootstrap from estimated residuals, resample from normal distribution, or use user-supplied sampler. \item \texttt{'progress='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Display progress bar in the command window. \item \texttt{'randomise='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Randomise or fix pre-sample initial condition. \item \texttt{'wild='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Use wild bootstrap instead of standard Efron bootstrap when \texttt{'method=' 'bootstrap'}. \end{itemize} \paragraph{Description}\label{description} \paragraph{Example}\label{example}
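A hypothetical call, assuming \texttt{v} is an estimated VAR object, \texttt{d} is the input database used in estimation, and \texttt{range} is the desired output range, might look as follows; only the options documented above are used.

\begin{verbatim}
% 1,000 bootstrap draws resampled from the estimated residuals
Outp = resample(v,d,range,1000,'method=','bootstrap','wild=',true);

% 1,000 Monte Carlo draws; no input database is needed
Outp = resample(v,[],range,1000,'method=','montecarlo');
\end{verbatim}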
\subsubsection{\stid{1.14} GASNet-EX}\label{subsubsect:gasnet-ex} \paragraph{Overview} The Lightweight Communication and Global Address Space Support project (Pagoda) is developing GASNet-EX~\cite{gasnet-site}, a portable high-performance communication layer supporting multiple implementations of the Partitioned Global Address Space (PGAS) model. GASNet-EX clients include Pagoda's PGAS programming interface UPC++~\cite{upcxx-ipdps19,upcxx-site} and the Legion Programming System~\cite{bauer2012legion,legion-site} (WBS~2.3.1.08). GASNet-EX's low-overhead communication mechanisms are designed to maximize injection rate and network utilization, tolerate latency through overlap, streamline unpredictable communication events, minimize synchronization, leverage hardware support for communication involving accelerator memory, and efficiently support small- to medium-sized messages arising in ECP applications. GASNet-EX enables the ECP software stack to exploit the best-available communication mechanisms, including novel features still under development by vendors. The GASNet-EX communications library and the PGAS models built upon it offer a complementary, yet interoperable, approach to ``MPI + X'', enabling developers to focus their effort on optimizing performance-critical communication. We are co-designing GASNet-EX with the UPC++ development team with additional input from the Legion and (non-ECP) Cray Chapel~\cite{chapel-chapter,chapel-site} projects. \paragraph{Key Challenges} Exascale systems will deliver exponential growth in on-chip parallelism and reduced memory capacity per core, increasing the importance of strong scaling and finer-grained communication events. The pervasive use of accelerators introduces heterogeneous systems in which the engines providing the majority of the compute capability are not well suited to other tasks. Success at exascale demands that software minimize the overheads incurred upon lightweight cores and accelerators, especially avoiding long, branchy serial code paths; this motivates a requirement for efficient fine-grained communication. These problems are exacerbated by application trends; many of the ECP applications require adaptive meshes, sparse matrices, or dynamic load balancing. All of these characteristics favor the use of low-overhead communication mechanisms that can maximize injection rate and network utilization, tolerate latency through overlap, accommodate unpredictable communication events, minimize synchronization, leverage hardware support for communication involving accelerator memory, and efficiently support small- to medium-sized messages. The ECP software stack needs to expose the best-available communication mechanisms, including novel features being developed by the vendor community. \paragraph{Solution Strategy} The PGAS model is a powerful means of addressing these challenges and is critical in building other ECP programming systems, libraries, and applications. We use the term {\em PGAS} for models that support one-sided communication, including contiguous and non-contiguous remote memory access (RMA) operations such as put/get and atomic updates. Some of these models also include support for remote function invocation. GASNet-EX~\cite{gasnet-lcpc18} is a communications library that provides the foundation for implementing PGAS models, and is the successor to the widely-deployed GASNet library. 
We are building on over 15 years of experience with the GASNet~\cite{gasnet-site,gasnet-spec} communication layer to provide production-quality implementations that include improvements motivated by technology trends and application experience.
The goal of the GASNet-EX team is to provide a portable, high-performance PGAS communication layer for exascale and pre-exascale systems, addressing the challenges identified above.
GASNet-EX provides interfaces that efficiently match the RDMA capabilities of modern inter-node network hardware and intra-node communication between distinct address spaces. New interfaces for atomics and collectives have enabled offload to current and future network hardware with corresponding capabilities. These design choices and their implementations supply the low-overhead communication mechanisms required to address the requirements of exascale applications.
\begin{figure}[htb]
\centering
\subfloat[8-byte RMA Latencies\label{fig:rma-lat-bars}]{
\includegraphics[width=0.432\textwidth]{projects/2.3.1-PMR/2.3.1.14-UPCxx-GASNet/latency_bars.pdf}
}
\subfloat[Summit Flood Bandwidth\label{fig:summit-bw}]{
\includegraphics[width=0.504\textwidth]{projects/2.3.1-PMR/2.3.1.14-UPCxx-GASNet/Summit-slide-BW.pdf}
}
\caption{\label{fig:gasnet-ex-rma} Selected GASNet-EX vs. MPI RMA Performance Results}
\end{figure}
Figure~\ref{fig:gasnet-ex-rma} shows representative results from a paper~\cite{gasnet-lcpc18} comparing the RMA performance of GASNet-EX against MPI on multiple systems including NERSC's Cori and OLCF's Summit%
\footnote{The paper's results from Summitdev have been replaced by more recent (June 2019) results from OLCF's newer Summit system.}.
These results demonstrate the ability of a PGAS-centric runtime to deliver performance as good as MPI, and often better.
%
The paper presents experimental methodology and system descriptions, which are also available online~\cite{gasnet-site}, along with results for additional systems.
Figure~\ref{fig:rma-lat-bars} shows the latency of 8-byte RMA Put and Get operations on four systems, including two distinct network types and three distinct MPI implementations.
%
GASNet-EX's latency is 6\% to 55\% better than MPI's on Put and 5\% to 45\% better on Get.
%
Algorithms sensitive to small-transfer latency may become practical in PGAS programming models due to these improvements relative to MPI.
Figure~\ref{fig:summit-bw} shows flood bandwidth of RMA Put and Get over the dual-rail InfiniBand network of OLCF's Summit. GASNet-EX's bandwidth is seen to rise to saturation at smaller transfer sizes than IBM Spectrum MPI, with the most pronounced differences appearing between 4KiB and 32KiB.
%
Comparison to the bandwidth of MPI message-passing (dashed green series) illustrates the benefits of one-sided communication, a major feature of PGAS models.
\paragraph{Recent Progress}
The most notable work on GASNet-EX in the past year has been in two areas:
\textbf{Device (GPU) Memory Support}.
``Memory kinds'' is the GASNet-EX term for support for communication involving memory other than host memory, and in the context of ECP refers primarily to accelerator devices such as GPUs. The GASNet-EX APIs for memory kinds have been co-designed with the developers of UPC++ and the Realm runtime layer of the Legion Programming System (WBS~2.3.1.08).
Starting in October 2020, GASNet-EX can leverage the GPUDirect RDMA (GDR) capabilities of modern NVIDIA GPUs and Mellanox network adapters (such as those on Summit) to perform one-sided RMA involving GPU memory without the overheads of staging through intermediate buffers in host memory.

\textbf{Scalability}. We have devoted effort in the past year to reducing the memory footprint of the GASNet runtime as the job size grows. This has included efforts in collaboration with the ExaBiome (WBS~2.2.4.04) team to run their applications at previously unattainable scales on Summit at the OLCF and on Cori at NERSC.

\paragraph{Next Steps}

Our next efforts include:

\textbf{Device (GPU) Memory Support}. We will continue work in the area of GASNet-EX memory kinds, including the hardening and tuning of the implementation featured in the October 2020 release. As access to other ECP-relevant systems is secured, we plan to extend support to accelerators from additional vendors, including those from AMD and Intel, which are scheduled to appear in early exascale systems.

\textbf{Client-Driven Tuning}. In collaboration with authors of client runtimes using GASNet-EX (most notably UPC++ and Legion) and their users (such as ExaBiome), we will continue to identify and address any significant scalability issues or performance anomalies which are discovered.

\clearpage
{ "alphanum_fraction": 0.8150048876, "avg_line_length": 49.6, "ext": "tex", "hexsha": "bdd3fe52717f5021ea834f888709f64a92a0e3c7", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-10-07T14:40:24.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-07T14:40:24.000Z", "max_forks_repo_head_hexsha": "6ac85f302f3f5b1fbf51191f99392a5502a164fa", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "egboman/ECP-ST-CAR-PUBLIC", "max_forks_repo_path": "projects/2.3.1-PMR/2.3.1.14-UPCxx-GASNet/2.3.1.14-GASNet-EX.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "6ac85f302f3f5b1fbf51191f99392a5502a164fa", "max_issues_repo_issues_event_max_datetime": "2020-10-12T19:39:54.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-12T19:39:54.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "egboman/ECP-ST-CAR-PUBLIC", "max_issues_repo_path": "projects/2.3.1-PMR/2.3.1.14-UPCxx-GASNet/2.3.1.14-GASNet-EX.tex", "max_line_length": 106, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ac85f302f3f5b1fbf51191f99392a5502a164fa", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "egboman/ECP-ST-CAR-PUBLIC", "max_stars_repo_path": "projects/2.3.1-PMR/2.3.1.14-UPCxx-GASNet/2.3.1.14-GASNet-EX.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1881, "size": 8184 }
\section{Distributed Shared Persistent Memory}
\label{sec:hotpot:dspm}

The datacenter application and hardware trends described in Section~\ref{sec:hotpot:motivation} clearly point to one promising direction of using \nvm\ in datacenter environments --- as distributed, shared, persistent memory (\dsnvm). A \dsnvm\ system manages a distributed set of \nvm{}-equipped machines and provides the abstraction of a global virtual address space and a data persistence interface to applications. This section gives a brief discussion on the \dsnvm\ model.

\subsection{\dsnvm\ Benefits and Usage Scenarios}

\dsnvm\ offers low-latency, shared access to vast amounts of durable data in distributed \nvm, and the reliability and high availability of these data. Application developers can build in-memory data structures with the global virtual address space and decide how to name their data and when to make data persistent. Applications that fit \dsnvm\ well have two properties: accessing data with memory instructions and making data durable explicitly. We call the time when an application makes its data persistent a {\em commit point}.

There are several types of datacenter applications that meet the above two descriptions and can benefit from running on \dsnvm. First, applications that are built for single-node \nvm\ can be easily ported to \dsnvm\ and scale out to distributed environments. These applications store persistent data as in-memory data structures and already express their commit points explicitly. Similarly, storage applications that use memory-mapped files also fit \dsnvm\ well, since they operate on in-memory data and explicitly make them persistent at well-defined commit points (\ie, \msync). Finally, \dsnvm\ fits shared-memory or DSM-based applications that desire to incorporate durability. These applications do not yet have durable data commit points, but we expect that when developers want to make their applications durable, they will know when and what data they want to make durable.

\subsection{\dsnvm\ Challenges}
\label{sec:hotpot:challenges}

Building a \dsnvm\ system presents several new challenges.

First, {\em what type of abstraction should \dsnvm\ offer to support both direct memory accesses and data persistence (Section~\ref{sec:hotpot:abstraction})}? To perform native memory accesses, application processes should use virtual memory addresses. But virtual memory addresses are not a good way to {\em name} persistent data. \dsnvm\ needs a naming mechanism that applications can easily use to retrieve their in-memory data after reboot or crashes (Section~\ref{sec:hotpot:naming}). Allowing direct memory accesses to \dsnvm\ also brings another new problem: pointers need to be both persistent in \nvm\ and consistent across machines (Section~\ref{sec:hotpot:addressing}).

Second, {\em how to efficiently organize data in \dsnvm\ to deliver good application performance (Section~\ref{sec:hotpot:data})?} To make \dsnvm's interface easy to use and transparent, \dsnvm\ should manage the physical \nvm\ space for applications and handle \nvm\ allocation. \dsnvm\ needs a flexible and efficient data management mechanism to deliver good performance to different types of applications.

Finally, {\em \dsnvm\ needs to ensure both distributed cache coherence and data reliability at the same time} (Section~\ref{sec:hotpot:xact}).
The former requirement ensures the coherence of multiple cached copies at different machines under concurrent accesses and is usually enforced in a distributed memory layer. The latter provides data reliability and availability when crashes happen and is implemented in distributed storage systems or distributed databases. \dsnvm\ needs to incorporate both of these requirements in a single layer in a correct and efficient way.
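To ground the commit-point model, the sketch below shows the single-node analogue that memory-mapped storage applications already use today: data in a mapped file is updated with ordinary memory instructions and made durable at an explicit commit point via \msync. It is a minimal illustration only; the file name and sizes are placeholders, and a \dsnvm\ system would provide the equivalent operations over distributed \nvm.

\begin{verbatim}
// Single-node analogue of a commit point: mutate mapped memory with ordinary
// loads/stores, then make the update durable explicitly with msync().
// The file path and mapping size below are illustrative placeholders.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
  const size_t len = 4096;
  int fd = open("/tmp/dspm_demo.dat", O_RDWR | O_CREAT, 0644);
  if (fd < 0 || ftruncate(fd, len) != 0) { perror("open/ftruncate"); return 1; }

  // Map the file; afterwards it is accessed like any in-memory data structure.
  char *p = static_cast<char *>(
      mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
  if (p == MAP_FAILED) { perror("mmap"); return 1; }

  std::strcpy(p, "updated in place with memory instructions");

  // Commit point: flush the dirty range to durable media before proceeding.
  if (msync(p, len, MS_SYNC) != 0) { perror("msync"); return 1; }

  munmap(p, len);
  close(fd);
  return 0;
}
\end{verbatim}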
{ "alphanum_fraction": 0.8075592885, "avg_line_length": 73.6, "ext": "tex", "hexsha": "b80f8363a9a1691ce64d224987693934e571152d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lastweek/2022-UCSD-Thesis", "max_forks_repo_path": "hotpot/dspm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lastweek/2022-UCSD-Thesis", "max_issues_repo_path": "hotpot/dspm.tex", "max_line_length": 173, "max_stars_count": 12, "max_stars_repo_head_hexsha": "859886a5c8524aa73d7d0784d5d695ec60ff1634", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lastweek/2022-UCSD-Thesis", "max_stars_repo_path": "hotpot/dspm.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T16:28:32.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-14T03:09:38.000Z", "num_tokens": 838, "size": 4048 }
\section{Discussion and Conclusion}

The two vertex detector geometries show differences in tracking performance. However, these differences do not necessarily point to one geometry being an outright favorite over the other. In general, the tracking benefits achieved by the vertex barrel modification are outweighed by some other shortcoming. For instance, though the modified detector had a higher tracking efficiency for low momentum particles in both physics processes ($\ee \rightarrow \ttbar$ at $ \sqrt{s} = $ 500 GeV, figure~\ref{fig:eettbareffthetalowpt}; $\ee \rightarrow \ttbar \bbbar$ (hadronic decays only) at $ \sqrt{s} = $ 1 TeV, figure~\ref{fig:ttbbeffthetalowpt}), it also had a higher fake rate for reconstructed tracks (figure~\ref{fig:eettbarfakerate}, figure~\ref{fig:ttbbfakerate}). Therefore, even though the modified detector will reconstruct a greater number of tracks, those additional tracks will more often contain mismatched hits from other Monte Carlo particles than they would if Sidloi3 had reconstructed them, because Sidloi3 has a lower fake rate. Furthermore, the modified detector's higher efficiency for lower momentum particles is again outweighed by its lower efficiency for high momentum particles (figure~\ref{fig:eettbareffthetahighpt}, figure~\ref{fig:ttbbeffthetahighpt}) and lower efficiency with respect to the number of hits produced by the Monte Carlo charged particle (figure~\ref{fig:eettbareffhit}, figure~\ref{fig:ttbbeffhit}).

In addition, though the modified detector exhibited better $z$-axis impact parameter resolution $\sigma(z_{0})$ for 10 and 100 GeV muons at high polar angles (figure~\ref{fig:muonz0resratio}), it also demonstrated worse $z$-axis impact parameter resolution for 1 GeV muons at lower polar angles (figure~\ref{fig:muonz0resratio}) and worse transverse impact parameter resolution $\sigma(d_{0})$ for 10 and 100 GeV muons at high polar angles (figure~\ref{fig:muond0resratio}). For the physics processes, though both detectors had equal transverse impact parameter resolution over a wide range of polar angles (figure~\ref{fig:eettbard0resratio}, figure~\ref{fig:ttbbd0resratio}), the modified detector had worse $z$-axis impact parameter resolution for a wide range of polar angles (figure~\ref{fig:eettbarz0resratio}, figure~\ref{fig:ttbbz0resratio}). The modified detector's improved $z$-axis impact parameter resolution for high energy single muons (figure~\ref{fig:muonz0resratio}) was present for the two physics processes (figure~\ref{fig:eettbarz0resratio}, figure~\ref{fig:ttbbz0resratio}) but to a lesser extent.

The difference in tracking performance between the two vertex detector layouts indicates that the modified vertex barrel geometry is not optimal. However, it also suggests that the baseline Sidloi3 vertex geometry might not be optimal, since by some measures the modified detector performed better. Further studies, with for instance hybrid vertex barrel geometries that mix doublet tracking and single tracking layers or that have reduced material budgets, should be conducted to see if it is possible to achieve the improvements demonstrated by the modified detector without compromising the strong performance of the baseline Sidloi3 vertex detector.
Moreover, studies with a different geometry of the outer tracker should also be conducted, so as to pave the way for optimizing the entire tracking system for SiD.
{ "alphanum_fraction": 0.8108829021, "avg_line_length": 70.7358490566, "ext": "tex", "hexsha": "8576295bc54a3d74cd792a32b53e065d228dc851", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "28e25341c1731a2699242b4464032c6e6f58922d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sagarsetru/detectorOptimizationStudy", "max_forks_repo_path": "discussionconclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "28e25341c1731a2699242b4464032c6e6f58922d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sagarsetru/detectorOptimizationStudy", "max_issues_repo_path": "discussionconclusion.tex", "max_line_length": 136, "max_stars_count": null, "max_stars_repo_head_hexsha": "28e25341c1731a2699242b4464032c6e6f58922d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sagarsetru/detectorOptimizationStudy", "max_stars_repo_path": "discussionconclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 881, "size": 3749 }
% !TEX root = ../zeth-protocol-specification.tex

\section{Efficiency and scalability}\label{implementation:efficiency}

\newcommand{\primeF}{\ensuremath{\algostyle{prime}}\xspace}
\newcommand{\decF}{\ensuremath{\algostyle{dec}}\xspace}
\newcommand{\getsigma}{\ensuremath{\algostyle{getSigma}}\xspace}

\subsection{Importance of performance}\label{implementation:efficiency:importance-of-performance}

Poor performance and scalability have several impacts on the viability of the system. Efficiency and performance are arguably most important for the $\mixer$ contract, where gas usage directly affects the monetary cost of using $\zeth{}$ to transfer value. That is, high gas costs could make transactions very expensive, and therefore not practical for many use-cases, undermining the utility and viability of the system. High storage or compute requirements on the client would severely restrict the set of devices on which $\zeth{}$ client software can run, and long delays when sending or receiving transactions can adversely affect the user-experience, discouraging some users and undermining the privacy promises of the system.

Although the proof-of-concept implementation of $\zeth{}$ is not intended to be used in a production environment, one of its aims is to demonstrate the practicality of the protocol in terms of transaction costs. Therefore, some of the techniques described here have been included in the proof-of-concept implementation, while in some cases implementers of production software may wish to make different trade-offs.

\subsection{Cost centers}\label{implementation:efficiency:cost-centers}

One important factor, primarily affecting client performance, is the cost of zero-knowledge proof generation. This is directly related to the number of constraints used to represent the statement in \cref{zeth-protocol:statement}, which in turn depends on the specific cryptographic primitives used (see \cref{chap:instantiation}). Note that cryptographic primitives which are ``snark-friendly'' (i.e.~can be implemented using fewer gates in an arithmetic circuit) may not necessarily run efficiently on the \evm~or on standard hardware. As such, trade-offs must be made between proof generation cost and the gas costs of state transitions.

An example of this is the hash function used in the Merkle tree of commitments. This is not only used in the statement of \cref{zeth-protocol:statement} (to verify Merkle proofs, see \cref{zeth-protocol:statement}), but also on the client (to create Merkle proofs, see \cref{zeth-protocol:mix-inp}) and in the $\mixer$ contract (to compute the Merkle root, see \cref{zeth-protocol:process-tx}). Aside from the specific hash function used, implementers have some freedom in the data structures and algorithms used to maintain the Merkle tree and generate proofs. Because of this freedom, and the importance of the chosen algorithms for performance across all components of the system, the majority of this section focuses on the details of the Merkle tree.

As described in \cref{chap:zeth-protocol}, $\zeth{}$ notes are maintained and secured by the Merkle tree, whose depth $\mkTreeDepth$ must be fixed when the contract is deployed. Therefore, $\mkTreeDepth$ determines the maximum number of notes ($2^\mkTreeDepth$) that may be created over the lifetime of the deployment. To ensure the utility of \zeth, $\mkTreeDepth$ must be sufficiently large, and therefore the following includes a discussion of \emph{scalability} with respect to $\mkTreeDepth$.
Also, due to the fact that $\mkTreeDepth$ is fixed, we assume that Merkle proofs are computed as $\mkTreeDepth$-tuples, no matter how many leaves have been populated. Unpopulated leaves are assumed to take some default value (usually a string of zero bits). \subsection{Client performance}\label{implementation:efficiency:client-performance} %% As described above, the primary potential costs for Zeth clients relate to the storage and compute requirements of the Merkle tree of commitments, and generation of the zero-knowledge proof. \subsubsection{Commitment Merkle tree} The simplest possible implementation, which stores only the data items at the leaves of the tree, requires $2^\mkTreeDepth - 1$ hash invocations to compute the Merkle root or to generate a Merkle proof. The cost of this is too high to be practical for non-trivial values of $\mkTreeDepth$. An immediate improvement in compute costs can be achieved by simply storing all nodes (or all nodes whose value is not the default value) and updating only those necessary as new commitments are added. When adding $\jsout$ consecutive leaves to the tree, after $\bigO{\log_2(\jsout)}$ layers (requiring $\bigO{\jsout}$ hashes) we reach the common ancestor of all new leaves and can update the Merkle tree by proceeding along a single branch (of approximately $\mkTreeDepth - \log_2(\jsout)$ layers). Thus, the cost of updating the Merkle tree for a single transaction has a fixed bound which is linear in $\jsout$ and $\mkTreeDepth$. However, this doubles the storage cost of the tree since non-leaf nodes must also be persisted. In the case of the client, the Merkle tree will only be used to generate proofs for notes owned by the user of the client. Thereby $\zeth{}$ clients need only store nodes of the Merkle tree that are required for this purpose, and may discard all others. In particular, any full sub-tree need only contain nodes that are part of Merkle paths associated with the user's notes. Implementations that discard unnecessary nodes can achieve vast savings in storage space. \subsubsection{Zero-knowledge proof generation} As well as keeping the number of constraints as low as possible, it is important to ensure that the prover implementation is optimal and thereby that proving times are as short as possible. Proof generation should also exploit any available parallelism, to help reduce the time taken. This may require specific programming languages or frameworks to be used, necessitating that proof generation be performed by some external process (as is the case in the proof-of-concept implementation). The proof generation process can also be very memory intensive (in part due to the \fft~calculations required), and so ensuring that enough \ram~is present in the system is important to avoid long proof times. See \cref{appendix:sca-attacks:proof-generation} for a discussion of related security concerns. \subsection{Zero-knowledge proof verification (on-chain)} Verification of the joinsplit statement via a zero-knowledge proof represents a significant computation, which must be carried out on-chain (by the \mixer~contract) for each valid \zeth~transaction. As described in \cref{instantiation:zksnark}, this verification cost increases linearly with the number of primary inputs to the statement -- a scalar multiplication of a group element and a group addition operation must be performed for each primary input. A technique presented in \cite[Section 4.5.1]{GGPR13} can be applied to reduce this linear cost. 
Given a relation $\REL$, the corresponding language $\LANG$, and a collision resistant hash function $H : \LANG \to \FFx{\rCURVE}$, let \[ \REL^\prime = \left\{ (\priminputs^\prime, \auxinputs^\prime) \mid \priminputs' = H(\priminputs), \auxinputs' = (\priminputs, \auxinputs), \allowbreak \text{ for } \allowbreak (\priminputs, \auxinputs) \in \REL \right\} \] be a new relation, with corresponding language $\LANG^\prime \subset \FFx{\rCURVE}$. To (probabilistically) verify that $\priminputs \in \LANG$, a verifier can compute $H(\priminputs)$ and check that $H(\priminputs) \in \LANG^\prime$. (By construction, if $H(\priminputs) \in \LANG^\prime$, there exists $(\priminputs_0, \auxinputs) \in \REL$, i.e. $\priminputs_0 \in \LANG$ with $H(\priminputs_0) = H(\priminputs)$. By the collision-resistance of $H$ we have $\priminputs_0 = \priminputs$ with overwhelming probability.) Informally, the original circuit is transformed as follows: \begin{itemize} \item all \emph{primary} inputs $\priminputs$ become \emph{auxiliary} inputs, \item a single primary input $h$ is added, and \item the statement is extended such that $h$ is the digest of the original primary inputs. \end{itemize} This slightly increases the complexity of the statement to be proven, adding to the cost of generating proofs $\pi^\prime$ for the augmented statement, but minimizes the linear component of the verification cost (since the verifier must now only process a single primary input). Note that this technique does not require any change to the initial statement itself (in this case the joinsplit statement described in \cref{zeth-protocol:statement}), or the data upon which it operates. The \mixer~contract must perform this hash step before the zk-SNARK verification, although we note that the parameters are also unchanged. In the proof-of-concept implementation of Zeth, this technique is employed using a snark-friendly hash function constructed as follows. The Merkle-Damgård construction (see~\cite[Chapter 9]{menezes1996handbook}) can be applied to a collision-resistant compression function to yield a collision-resistant hash function, accepting an arbitrary length input. We apply this to the compression function described in \cref{instantiation:mkhash}, which is chosen to be collision resistant over domain $\FFx{\rCURVE}$, and efficiently implementable as arithmetic constraints. Thereby, the resulting hash function, in common with the underlying compression function, can also be efficiently implemented to hash lists of elements in $\FFx{\rCURVE}$ (and this is exactly the form of the original primary inputs). \subsection{Merkle tree updates (on-chain)}\label{implementation:efficiency:merkle-tree-on-chain} For most components of the contract, the set of operations to be performed is strictly defined and the set of possible algorithmic optimizations that can be made is limited. In these cases, it is important to ensure that code is benchmarked and optimized to a reasonable degree, to minimize gas costs. We note that apart from the number and type of compute instructions executed, store and lookup operations have a significant impact on the gas used. In particular, storing new values is more expensive than overwriting existing values, and a gas rebate is made when contracts release stored values. See~\cite[Appendix H.1]{ethyellowpaper} for further details. The primary component in which algorithmic optimizations can be made is the Merkle tree of note commitments. 
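As a preview of the approach developed in the rest of this subsection, the following sketch shows one standard way of obtaining an update cost that is linear in the tree depth: an incremental tree that stores only one pending left sibling per level, plus the precomputed hashes of all-default subtrees. This is an illustration only (with a stand-in hash function), not the Zeth contract code.

\begin{verbatim}
// Sketch (stand-in hash, not the Zeth contract): appending one commitment to a
// fixed-depth incremental Merkle tree in O(depth) hashes.  "frontier" holds,
// for each level, the hash of the completed left sibling (if any); "empty"
// holds the precomputed all-default subtree hash for each level.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct IncrementalTree {
  int depth;
  std::uint64_t next_index = 0;
  std::vector<std::string> frontier;   // pending left siblings, one per level
  std::vector<std::string> empty;      // default subtree hash per level

  // Placeholder 2-to-1 hash; a deployment uses the circuit-friendly hash.
  static std::string h(const std::string &l, const std::string &r) {
    return std::to_string(std::hash<std::string>{}(l + "|" + r));
  }

  explicit IncrementalTree(int d) : depth(d), frontier(d), empty(d + 1) {
    empty[0] = "0";                                 // default leaf value
    for (int i = 1; i <= d; ++i) empty[i] = h(empty[i - 1], empty[i - 1]);
  }

  // Append a leaf and return the new root, touching one node per level.
  std::string append(const std::string &node) {
    std::uint64_t index = next_index++;
    std::string root = node;
    for (int lvl = 0; lvl < depth; ++lvl, index >>= 1) {
      if ((index & 1) == 0) {          // left child: right sibling still empty
        frontier[lvl] = root;
        root = h(root, empty[lvl]);
      } else {                         // right child: combine with stored left
        root = h(frontier[lvl], root);
      }
    }
    return root;
  }
};

int main() {
  IncrementalTree t(32);
  // Each append performs exactly `depth` hash invocations.
  std::cout << t.append("cm_1") << "\n" << t.append("cm_2") << std::endl;
  return 0;
}
\end{verbatim}

The remainder of this subsection discusses the strategies the contract can use to keep this update cheap in terms of gas.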
The $\mixer$ contract must compute (and store) the new Merkle root after adding the $\jsout$ new commitments as leaves. As in \cref{implementation:efficiency:client-performance}, the simplest possible implementation, which stores only the data items at the leaves of the tree, requires the full root to be recomputed, involving $2^\mkTreeDepth - 1$ hash invocations. This quickly becomes impractical for non-trivial values of $\mkTreeDepth$. The first-pass optimization (also described in \cref{implementation:efficiency:client-performance}) can be used to ensure that the cost of updating the Merkle tree (number of hash computations, stores and loads) is bounded by a quantity that is linear in the Merkle tree depth. This is the strategy used in the proof-of-concept implementation of $\mixer$.

It may be possible to gain further improvements in gas costs by discarding nodes from the Merkle tree that are not required. Unlike clients, $\mixer$ is only required to compute the new Merkle root, and does not need to create or validate Merkle proofs (as these are checked as part of the zero-knowledge proof). Consequently, \emph{all} nodes in a sub-tree can be discarded when the sub-tree is full, and the optimization is much simpler to implement than on the client.

Another possible strategy for decreasing the gas costs associated with Merkle trees is \emph{Merkle Shrubs}, described in~\cite[Section 2.2]{merkle-shrubs}. Under this scheme, the contract maintains a ``frontier'' of sub-tree roots, and Merkle proofs provided by clients (as auxiliary inputs to the $\RELCIRC$ circuit) contain a path from the leaf to one of the nodes in the frontier. The gas savings in this scheme are due to the fact that, for new commitments, the contract need only recompute the value of nodes from the leaf to the ``frontier'' (not all the way to the root of the tree). However, this comes at the cost of complexity in the arithmetic circuit, which must verify a Merkle path to one of several frontier nodes.

When choosing cryptographic primitives to be used on the \evm~(and considering the trade-off with other platforms, described in \cref{implementation:efficiency:importance-of-performance}), it may be valuable to note that the \evm~supports so-called ``pre-compiled contracts''. These behave like built-in functions providing very gas-efficient access to certain algorithms, such as \keccak. However, pre-compiled contracts exist only for a limited set of algorithms. Others must be implemented using \evm~instructions.

\subsection{Optimizing the Blake2 circuit}\label{implementation:efficiency:blake}

After presenting the \blake{2s}{} circuit and its components, which operate on little-endian variables, we show a few optimizations.

\subsubsection{Helper circuits}\label{implementation:efficiency:blake:helper-circuits}

We first define the following helper circuits needed in the \blake{2s}{} routine, operating on $w$-bit long words.
\paragraph{$\xortxt$ circuits}

The following $\xortxt$ circuits on $w$-bit long variables have been implemented; we assume the inputs are boolean (this is not checked in these circuits):
\begin{itemize}
\item ``Classic'' $\xortxt$ circuit, which xors 2 variables,\\
$a \xor b = c$;
\item $\xortxt$ with constant, which xors two variables and a constant,\\
$a \xor b \xor c = d$, with $c$ constant;
\item $\xortxt$ with rotation, which xors two variables and rotates the result,\\
$a \xor b \ggg r = c$, with $r$ constant, and $\ggg$ the rightward rotation~\cite[Section 2.3]{blakecompietf}; i.e.~for any constant $r < w$ we have $a_i \xor b_i = c_{i+r \pmod w}$, for $i = 0, \ldots, w-1$.
\end{itemize}

Each of these circuits presents $w$ constraints. Assuming that the inputs are boolean, the output is automatically boolean. To ascertain that both inputs are boolean ($a$ and $b$), we would need $2\cdot w$ more gates per circuit.\footnote{Making sure that no gates are duplicated in the circuit is very important to keep the proving time as small as possible. One challenge of writing R1CS programs is to make sure that the statement is correctly represented, without redundancy, in order to keep the constraint system as small as possible.}

\paragraph{Modular addition}\label{implementation:efficiency:blake:helper-circuits:modular-addition}

We present here two circuits to verify modular arithmetic.

\subparagraph{Double modular addition: {\boldmath $a + b = c \pmod {2^w}$}.}

This circuit checks that the sum of two $w$-bit long variables in little endian format modulo ${2^w}$ is equal to a $w$-bit long variable. More precisely, it checks the equality of the modular addition $a+b \pmod {2^w}$ and $c$, and the booleaness of the latter. We assume the inputs are boolean (this is not checked in this circuit). As the addition of two $w$-bit long integers results in at most a $(w + 1)$-bit integer, we consider $c$ to be $(w + 1)$-bit long. We do not care about the last bit value, $c_w$, but have to ensure its booleaness.

The circuit presents the following $w+2$ constraints, for $a$ and $b$ of size $w$ (where $w=32$ in practice) and a variable $c$ of size $w+1$:
\begin{equation}
\label{implementation:eq:modular_sum}
\sum_{i=0}^{w - 1} \left( a_i + b_i \right ) \cdot 2^i = \sum_{j=0}^{w} c_j \cdot 2^j\\
\end{equation}
\begin{equation}
\label{implementation:eq:modular_bool}
\forall j \in \range{0}{w},\ (c_j - 0) \cdot (c_j - 1) = 0
\end{equation}

\subparagraph{Triple modular addition: {\boldmath $a + b + c = d \pmod {2^w}$}.}

This circuit checks the equality of a $w$-bit long variable $d$ with the sum of three $w$-bit long variables in little endian format modulo ${2^w}$. More precisely, it checks the equality of the modular addition $a+b+c \pmod {2^w}$ and $d$, and the booleaness of the latter. We assume the inputs are boolean (this is not checked in this circuit). As the addition of three $w$-bit long integers results in at most a $(w + 2)$-bit integer, we consider $d$ to be $(w + 2)$-bit long. We do not care about the values of the last two bits ($d_w$ and $d_{w+1}$), but have to ensure their booleaness.
The circuit presents the following $w+3$ constraints, for $a$, $b$ and $c$ of size $w$ (where $w=32$ in practice) and a variable $d$ of size $w+2$:
\begin{align}
\sum_{i=0}^{w - 1} \left( a_i + b_i + c_i \right ) \cdot 2^i = \sum_{j=0}^{w+1} d_j \cdot 2^j \label{implementation:eq:triple_modular_sum} \\
\forall j \in \range{0}{w + 1},\ (d_j - 0) \cdot (d_j - 1) = 0 \label{implementation:eq:triple_modular_bool}
\end{align}

\subsubsection{\blake{2s}{} routine circuit}\label{implementation:efficiency:blake:g-circuit}

We define in this section the circuit of the \blake{2}{} routine (see~\cite[Section 3.1]{blakecompietf} and~\cref{implementation:alg:g}) known as the ``$\blakeG$ function''~\cite[Section 2.4]{aumasson2013blake2}. $\blakeG$ is based on the $\chacha{}$ stream cipher~\cite{bernstein2008chacha}. It works on $w$-bit long words and presents $8 \cdot w+10$ constraints. The function mixes a state ($a$, $b$, $c$ and $d$) with the inputs ($x$ and $y$) and returns the updated state. This circuit does not check the booleaness of the inputs or state. However, given that the state is boolean, the output is automatically boolean due to the use of the modular addition circuits. For \blake{2s}{}, we have $w=32$, $r_1=16$, $r_2 = 12$, $r_3=8$ and $r_4=7$.

\begin{figure*}
\begin{minipage}[t]{.4\textwidth}
\centering
\procedure[linenumbering]{$\blakeG({a}, {b}, {c}, {d}; {x}, {y}) \mapsto ({a_2}, {b_2}, {c_2}, {d_2})$}{%
a_{1} \gets a + b + x \pmod {2^w} \\
d_{1} \gets d \xor a_{1} \ggg r_1 \\
c_{1} \gets c + d_{1} \pmod {2^w} \\
b_{1} \gets b \xor c_{1} \ggg r_2 \\
a_{2} \gets a_{1} + b_{1} + {y} \pmod {2^w} \\
d_{2} \gets d_{1} \xor a_{2} \ggg r_3 \\
c_{2} \gets c_{1} + d_{2} \pmod {2^w} \\
b_{2} \gets b_{1} \xor c_{2} \ggg r_4 \\
\pcreturn a_2, b_2, c_2, d_2
}
\caption{$\blakeG$ primitive~\cite[Section 3.1]{blakecompietf}}\label{implementation:alg:g}
\end{minipage}%
\begin{minipage}[t]{.6\textwidth}
\centering
\procedure[linenumbering]{$\getsigma()$}{%
\blakePermutation{} \in (\NN^{16})^{10} \\
\blakePermutation{0} \gets (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15) \\
\blakePermutation{1} \gets (14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3) \\
\blakePermutation{2} \gets (11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4) \\
\blakePermutation{3} \gets (7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8) \\
\blakePermutation{4} \gets (9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13) \\
\blakePermutation{5} \gets (2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9) \\
\blakePermutation{6} \gets (12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11) \\
\blakePermutation{7} \gets (13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10) \\
\blakePermutation{8} \gets (6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5) \\
\blakePermutation{9} \gets (10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0) \\
\pcreturn \blakePermutation{}
}
\caption{$\blake{2}{}$ permutation table~\cite[Section 2.7]{blakecompietf}}\label{implementation:alg:blake2-perm}
\end{minipage}
\end{figure*}

\subsubsection{\blake{2s}{} compression function circuit}\label{implementation:efficiency:blake:comp-circuit}

The compression function is defined as follows (for more details see~\cref{implementation:alg:blake2s_comp}),
\[
\blake{2sC}{} : \BB^n \times \BB^{2n} \times \BB^{n/4} \times \BB^{n/4} \to \BB^n\,.
\]
\blake{2C}{} takes as input a state $h \in \BB^n$, which is used as the chaining value when hashing; a message to compress $x \in \BB^{2n}$; a message length written in binary $t \in \BB^{n/4}$, which is incremented when hashing; and a binary flag $f \in \BB^{n/4}$ indicating whether the current block is the last to be compressed (to prevent length extension attacks). \blake{2C}{} uses the \blakeG function iteratively over \blakeRound number of rounds on a state and message. The constant initialization vector \blakeIV{} and the permutation table \blakePermutation{} are hard-coded.

\blake{2sC}{} works in little endian (see~\cite[Section 2.4]{blakecompietf}) on $n$-bit long variables ($n = 256$), $w$-bit long words ($w = 32$), and with the rotation constants specified in~\cref{implementation:efficiency:blake:g-circuit} (see~\cite[Section 2.1]{blakecompietf}).

We have the following constants (see~specifications~\cite{aumasson2013blake2} and~\cite[Section 2.2]{blakecompietf}),
\begin{itemize}
\item $\blakeIV{}$ is the $(8 \cdot w)$-bit long initialization vector; it corresponds to the first $w$ bits of the fractional parts of the square roots of the first eight prime numbers ($2, 3, 5, 7, \ldots$) (see~\cite[Section 2.6]{blakecompietf});
\item $\blakePermutation{}$ are the $10 \cdot 16$ permutation constants of $\blake{2}{}$ (see~\cref{implementation:alg:blake2-perm} and~\cite[Section 2.7]{blakecompietf});
\item $\blakeRound$, the number of rounds: $10$ for $\blake{2sC}{}$, $12$ for $\blake{2bC}{}$.
\end{itemize}

We have the following variables (see~specifications~\cite{aumasson2013blake2} and~\cite[Section 2.2]{blakecompietf}),
\begin{itemize}
\item $\blakeInitState{}$ is the $(8 \cdot w)$-bit long initial state, while $\blakeState{}$ is the $(16 \cdot w)$-bit long working state;
\item $\blakeDigestLength{i}$ are two $w$-bit long counters encoding the block length;
\item $\blakeFlag{i}$ are two $w$-bit long finalization flags. We set the first one, $\blakeFlag{0}$, to $2^w-1$ to signal that the input block is the last one to be hashed. The second, $\blakeFlag{1}=0$, is only used in tree hashing mode (which is not our case) and is therefore unused.
\end{itemize}

We introduce the following functions to write \blake{2C}{} (see~specifications~\cite{aumasson2013blake2} and~\cite[Section 2.6]{blakecompietf}):
\begin{itemize}
\item The function $\primeF$ takes a positive integer $i$ as input and outputs the $i$-th prime number;
\item The function $\decF$ takes a real number $x$ as input and outputs its fractional (decimal) part.
\end{itemize}

\begin{figure*}[h!]
\centering \procedure[linenumbering]{$\blake{2C}{h,m,t,f}$}{% \blakeDigestLength{}, \blakeFlag{}, \blakeInitState{}, \blakeIV{}, \blakeState{} \in (\BB^w)^2 \times (\BB^w)^2 \times (\BB^w)^8 \times (\BB^w)^8 \times (\BB^w)^{16} \\ \indexedset{\blakeIV{i}}{i \in [8]} \gets \indexedset{\floor{2^w \cdot \decF(\sqrt{\primeF(i+1})}}{i \in [8]} \\ \blakePermutation{} \gets \getsigma() \\ \indexedset{\blakeInitState{i}}{i \in [8]} \gets \indexedset{\slice{h}{i \cdot w}{(i+1) \cdot w}}{i \in [8]} \\ \indexedset{m[i]}{i \in [8]} \gets \indexedset{\slice{x}{i\cdot w}{(i+1) \cdot w}}{i \in [8]} \\ \blakeDigestLength{0},\ \blakeDigestLength{1} \gets \slice{t}{w}{2w},\ \slice{t}{0}{w} \\ \blakeFlag{0},\ \blakeFlag{1} \gets \slice{f}{w}{2w},\ \slice{f}{0}{w} \\ \indexedset{\blakeState{i}}{i \in [8]} \gets \indexedset{\blakeInitState{i}}{i \in [8]} \\ \indexedset{\blakeState{i+8}}{i \in [8]} \gets \indexedset{\blakeIV{i}}{i \in [8]} \\ \blakeState{12},\ \blakeState{13} \gets \blakeState{12} \xor \blakeDigestLength{0},\ \blakeState{13} \xor \blakeDigestLength{1} \\ \blakeState{14},\ \blakeState{15} \gets \blakeState{14} \xor \blakeFlag{0},\ \blakeState{15} \xor \blakeFlag{1} \\ \pcforeach r \in [\blakeRound] \pcdo \\ \t \tau \gets \blakePermutation{r \pmod{15}} \\ \t \blakeState{0}, \blakeState{4}, \hphantom{1} \blakeState{8}, \blakeState{12} \gets \blakeG(\blakeState{0}, \blakeState{4},\hphantom{1} \blakeState{8}, \blakeState{12}, \hphantom{1} m[\tau[0]], \hphantom{1} m[\tau[1]]) \\ \t \blakeState{1}, \blakeState{5}, \hphantom{1} \blakeState{9}, \blakeState{13} \gets \blakeG(\blakeState{1}, \blakeState{5},\hphantom{1} \blakeState{9}, \blakeState{13}, \hphantom{1} m[\tau[2]], \hphantom{1} m[\tau[3]]) \\ \t \blakeState{2}, \blakeState{6}, \blakeState{10}, \blakeState{14} \gets \blakeG(\blakeState{2}, \blakeState{6}, \blakeState{10}, \blakeState{14}, \hphantom{1} m[\tau[4]], \hphantom{1} m[\tau[5]]) \\ \t \blakeState{3}, \blakeState{7}, \blakeState{11}, \blakeState{15} \gets \blakeG(\blakeState{3}, \blakeState{7}, \blakeState{11}, \blakeState{15}, \hphantom{1} m[\tau[6]], \hphantom{1} m[\tau[7]]) \\ \t \blakeState{0}, \blakeState{5}, \blakeState{10}, \blakeState{15} \gets \blakeG(\blakeState{0}, \blakeState{5}, \blakeState{10}, \blakeState{15}, \hphantom{1} m[\tau[8]], \hphantom{1} m[\tau[9]]) \\ \t \blakeState{1}, \blakeState{6}, \blakeState{11}, \blakeState{12} \gets \blakeG(\blakeState{1}, \blakeState{6}, \blakeState{11}, \blakeState{12}, m[\tau[10]], m[\tau[11]]) \\ \t \blakeState{2}, \blakeState{7}, \hphantom{1} \blakeState{8}, \blakeState{13} \gets \blakeG(\blakeState{2}, \blakeState{7}, \hphantom{1} \blakeState{8}, \blakeState{13}, m[\tau[12]], m[\tau[13]]) \\ \t \blakeState{3}, \blakeState{4}, \hphantom{1} \blakeState{9}, \blakeState{14} \gets \blakeG(\blakeState{3}, \blakeState{4}, \hphantom{1} \blakeState{9}, \blakeState{14}, m[\tau[14]], m[\tau[15]]) \\ \pcreturn \concat_{i=0}^8 \blakeInitState{i} \xor \blakeState{i} \xor \blakeState{i+8} } \caption{\blake{2}{} compression function~\cite[Section 3.2]{blakecompietf}. Set $n$, $w$ and $\blakeG$'s constants to obtain \blake{2sC}{}.}\label{implementation:alg:blake2s_comp} \end{figure*} This circuit presents $((64 \cdot \blakeRound + 8) \cdot w + 8 \cdot \blakeRound + 10)$ constraints. For \blake{2sC}{}, as $w=32$ and $\blakeRound=10$, we have 21536 constraints. We do not check the input block booleaness in this circuit. Given that the initial state is boolean, the output is automatically boolean. 
This can be proved iteratively from the booleaness of the $\blakeG$ primitive's output.

\paragraph*{Security requirement.}

The inputs to \blake{2sC}{} \MUST~be boolean.

\subsubsection{\blake{2s}{} hash function}\label{implementation:efficiency:blake:hash-circuit}

The hash function is defined as follows (for more details see~\cref{implementation:alg:blake2s_hash}),
\[
\blake{2s}{} : \BB^{\leq 2n} \times \BB^{*} \to \BB^n
\]
\blake{2}{} takes as input a hash key $k \in \BB^n$ and the message to hash $x \in \BB^{*}$.
%
\blake{2}{} uses the \blake{2C}{} function iteratively over each $2n$-bit long chunk of the padded message. If the key is non-empty, it is used as the first block to be hashed. The constant initialization vector \blakeIV{} and part of the parameter block $\blakePB{}$ are hard-coded.

We have the following constants (see~specifications~\cite{aumasson2013blake2} and~\cite[Section 2.2]{blakecompietf}),
\begin{itemize}
\item $\blakeIV{}$ is the $(8 \cdot w)$-bit long initialization vector; it corresponds to the first $w$ bits of the fractional parts of the square roots of the first eight prime numbers ($2, 3, 5, 7, \ldots$) (see~\cite[Section 2.6]{blakecompietf}).
\end{itemize}

We have the following variables (see~specifications~\cite{aumasson2013blake2} and~\cite[Section 2.2]{blakecompietf}),
\begin{itemize}
\item $\blakePB{}$ is the $(16 \cdot w)$-bit long parameter block used to initialize the state (see~\cite[Section 2.5]{blakecompietf}). In big endian encoding, the first byte corresponds to the digest length (fixed to 32 bytes), the second byte to the key length, and the third and fourth bytes to the use of the serial mode;
\item $\blakeInitState{} \in \BB^\blakeCompLen$, the chaining value.
\end{itemize}

\begin{figure*}[h!]
\centering
\procedure[linenumbering]{$\blake{2}{k, x}$}{%
\blakeInitState{}, \blakeIV{}, \blakePB{} \in \BB^{8w} \times \BB^{8w} \times \BB^{8w} \\
\blakePB{} \gets \pad{\encode{0\text{x}0101}{\NN} \concat \pad{\encode{\ceil{\len{k}/\byteLen}}{\NN}}{w} \concat \encode{0\text{x}20}{\NN}}{8 \cdot w} \\
\blakeIV{} \gets \concat_{i=0}^8 \floor{2^w \cdot \decF(\sqrt{\primeF(i+1)})} \\
\blakeInitState{} \gets \blakePB{} \xor \blakeIV{}\\
y \gets x \\
\pcif \len{k} \neq 0 \pcdo \\
\t y \gets \pad{k}{2n} \concat y \\
z \gets \pad{y}{2n \cdot \ceil{\len{y} / 2n}} \\
\pcfor i \in [\ceil{\len{z}/2n}] \pcdo \\
\t \pcif i = \ceil{\len{z}/2n} - 1 \pcdo \\
\t \t \blakeInitState{} \gets \blake{2C}{\blakeInitState{}, \slice{z}{i \cdot 2n}{(i+1) \cdot 2n}, \pad{\encode{\ceil{\len{y}/\byteLen}}{\NN}}{2w}, \pad{\encode{2^w-1}{\NN}}{2w}} \\
\t \pcelse \\
\t \t \blakeInitState{} \gets \blake{2C}{\blakeInitState{}, \slice{z}{i \cdot 2n}{(i+1) \cdot 2n}, \pad{\encode{(i+1) \cdot 2n / \byteLen}{\NN}}{2w}, \pad{0}{2w}} \\
\pcreturn \blakeInitState{}
}
\caption{\blake{2}{} hash function~\cite[Section 3.3]{blakecompietf}. Set $n=16w$ and $\blakeG$'s constants accordingly to obtain \blake{2s}{}.}\label{implementation:alg:blake2s_hash}
\end{figure*}

We do not check the input block booleaness in this circuit. Given that the initial state is boolean, the output is automatically boolean. This can be proved iteratively from the booleaness of the $\blake{2C}{}$ primitive's output.

\paragraph*{Security requirement}

To ensure the correct use of \blake{2s}{}, \blake{2s}{}'s inputs \MUST~be boolean.

\subsubsection{Optimizing the circuits}\label{implementation:efficiency:blake:optimization}

The above helper circuits form the building blocks of the \blake{2s}{} compression function.
We show here two alternative (mutually exclusive) methods to optimize these circuits.

\paragraph{Optimizing the modular additions}\label{implementation:efficiency:blake:optimization:mod-circuits}

\subparagraph{Double modular addition: {\boldmath $a + b = c \pmod {2^w}$}.}

We present here an optimization of the circuit which saves one constraint by merging the modular constraint with a boolean constraint. The optimized circuit presents the following constraints:
\begin{equation}
\label{implementation:eq:modular_to_prove}
\left ( \sum_{i=0}^{w - 1} ( a_i + b_i - c_i ) \cdot 2^i \right ) \cdot \left ( \sum_{i=0}^{w - 1} ( a_i + b_i - c_i ) \cdot 2^i - 2^{w} \right ) = 0
\end{equation}
\begin{equation}
\label{implementation:eq:modular_bool_to_prove}
\forall j \in \range{0}{w - 1},\ (c_j - 0) \cdot (c_j - 1) = 0
\end{equation}
with $\sum_{i=0}^{w - 1} x_i \cdot 2^i$ a binary encoding of $x$ ($x_i$ is the $i^{th}$ bit of $x$). These equations describe $w+1$ constraints to prove the equality $a + b = c \pmod {2^w}$ (note that an additional $2\cdot w$ constraints would be required to prove the booleaness of the input variables $a$ and $b$). We now explain how we obtained them.

\begin{proof}
The most straightforward way to prove that $a+b=c \pmod{2^w}$ and the booleaness of $c$ is with the set of constraints illustrated in~\cref{implementation:eq:modular_sum} and in~\cref{implementation:eq:modular_bool}. As we perform arithmetic modulo $2^w$, we do not care about the value of $c_w$ but would like to ensure its booleaness. As one may notice, the summing constraint~\cref{implementation:eq:modular_sum} is an equality of two linear combinations with no multiplication by a variable. Hence, we can combine it with the boolean constraint of $c_w$ to remove any reference to $c_w$ and still have a bilinear gate. To do so, we first rewrite~\cref{implementation:eq:modular_sum} as an equality check over $c_{w}\cdot 2^{w}$ and multiply~\cref{implementation:eq:modular_bool} for $j=w$ by $2^{2\cdot w}$.
\begin{equation}
\label{implementation:eq:modular_sum_proof}
\sum_{i=0}^{w - 1} (a_i + b_i - c_i) \cdot 2^i = c_{w} \cdot 2^w
\end{equation}
\begin{equation}
\label{implementation:eq:modular_bool_proof}
2^w \cdot (c_w - 0) \cdot 2^w \cdot (c_w - 1)= 0
\end{equation}
We finally replace $c_w \cdot 2^w$ in~\cref{implementation:eq:modular_bool_proof} by the value from~\cref{implementation:eq:modular_sum_proof}.
\begin{multline*}
0 = 2^w \cdot (c_w - 0) \cdot 2^w \cdot (c_w - 1) = 2^w \cdot c_w \cdot (2^w \cdot c_w - 2^w)\\
= \left ( \sum_{i=0}^{w - 1} ( a_i + b_i - c_i ) \cdot 2^i \right ) \cdot \left( \left (\sum_{i=0}^{w - 1} ( a_i + b_i - c_i ) \cdot 2^i \right ) - 2^w \right)
\end{multline*}
This results in~\cref{implementation:eq:modular_to_prove} and~\cref{implementation:eq:modular_bool_to_prove}. All references to $c_w$ have disappeared and, with a single multiplication by a variable, we still have bilinear gates.
\end{proof}

\subparagraph{Triple modular addition: {\boldmath $a + b + c = d \pmod {2^w}$}.}

To optimize, we use the above circuit twice. We define a temporary variable $d'$ such that $a+b = d' \pmod {2^w}$. As such, we have $c+d'= d \pmod {2^w}$. As $d'$ is the addition of two $w$-bit long variables, it is $(w + 1)$-bit long. However, as we evaluate the sum modulo $2^w$, we discard the last bit of $d'$. We proceed similarly for $d$.
To ensure that $d$ is boolean, we check the booleaness of the $w+1$ bits of $d$ as well as the booleaness of the last bit of $d'$ (to account for $d$'s $(w+2)^{th}$ bit in the original expression $a + b + c = d \pmod {2^w}$). We thus obtain the following circuit with $w+2$ constraints,
\begin{align*}
\left ( \sum_{i=0}^{w - 1} ( a_i + b_i - d'_i ) \cdot 2^i \right ) \cdot \left ( \sum_{i=0}^{w - 1} ( a_i + b_i - d'_i ) \cdot 2^i - 2^w \right ) &= 0 \\
\left ( \sum_{i=0}^{w - 1} ( c_i + d'_i - d_i ) \cdot 2^i \right ) \cdot \left ( \sum_{i=0}^{w - 1} ( c_i + d'_i - d_i ) \cdot 2^i - 2^w \right ) &= 0 \\
\forall j \in \range{0}{w - 1},\ (d_j - 0) \cdot (d_j - 1 ) &= 0
\end{align*}
These optimizations lead to a gain of 320 constraints ($=4 \cdot 8 \cdot \rounds$).

\paragraph{Optimizing the Blake2s routine's circuit}\label{implementation:efficiency:blake:optimization:batch-constraints}

As seen in~\cref{implementation:alg:g}, our routine presents 2 double and 2 triple modular additions. Each of these circuits comprises at least one modular constraint which packs several $w$-bit long variables. The circuit is, however, processed in $\FFx{\rCURVE}$, that is to say integers of up to $\fieldBitCap$ bits can be represented in a single field element. We can thus batch the modular constraints. As the $\blakeG$ primitive performs 2 double and 2 triple modular additions, we have in total 6 modular checks per iteration. We can batch up to $\fieldBitCap / w$ constraints together. For $w=32$ and $\fieldBitCap \geq 224$ (which holds for \BNCurve and \BLSCurve), we can encode up to 7 words per field element, that is to say we can include all the modular constraints into a single one. This optimization leads to a gain of $274$ constraints ($= 4\cdot8\cdot10-\ceil{\frac{4\cdot8\cdot10}{7}}$).

\paragraph{Optimization conclusion}\label{implementation:efficiency:blake:optimization:conclusion}

Using the more efficient optimization on the modular additions, the \blake{2s}{} compression function comprises $21216$ constraints.

\subsubsection{Increasing the PRF security with Blake}\label{implementation:efficiency:blake-prf}

As \blake{2}{} comprises a personalization tag in its parameter block $\blakePB{}$, one could ensure the independence of the \prf{}s by writing different tags for each of them (we would be able to consider up to $2^{30}$ inputs and outputs). We chose not to include this enhancement in the instantiation, in order to keep a general tagging method in case of a change of hash function.
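To help readers cross-check the circuits in this section against the \blake{2s}{} specification, the following plain C++ sketch of the $\blakeG$ mixing routine (with the RFC~7693 rotation constants $16$, $12$, $8$, $7$) shows the word-level behaviour that the gadgets encode. It is ordinary software, given only as a reference point; it is not the R1CS encoding itself.

\begin{verbatim}
// Reference-style sketch of the G mixing routine on w = 32-bit words with the
// BLAKE2s rotation constants (16, 12, 8, 7).  Shown only to make the bit-level
// behaviour of the circuit easier to follow.
#include <cstdint>
#include <cstdio>

static inline std::uint32_t rotr32(std::uint32_t v, unsigned r) {
  // Rightward rotation, as in RFC 7693 (r is always in {7, 8, 12, 16} here).
  return (v >> r) | (v << (32 - r));
}

// Mixes the state words (a, b, c, d) with the message words (x, y) in place.
static void G(std::uint32_t &a, std::uint32_t &b, std::uint32_t &c,
              std::uint32_t &d, std::uint32_t x, std::uint32_t y) {
  a = a + b + x;          // additions are implicitly mod 2^32
  d = rotr32(d ^ a, 16);
  c = c + d;
  b = rotr32(b ^ c, 12);
  a = a + b + y;
  d = rotr32(d ^ a, 8);
  c = c + d;
  b = rotr32(b ^ c, 7);
}

int main() {
  // Sample state words (the first four BLAKE2s IV words) and message words.
  std::uint32_t a = 0x6a09e667u, b = 0xbb67ae85u, c = 0x3c6ef372u, d = 0xa54ff53au;
  G(a, b, c, d, 0x00000001u, 0x00000002u);
  std::printf("%08x %08x %08x %08x\n", a, b, c, d);
  return 0;
}
\end{verbatim}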
{ "alphanum_fraction": 0.7226248885, "avg_line_length": 96.4301075269, "ext": "tex", "hexsha": "8921c0874a871b850664a4dbc50bbc4382f81255", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-07-26T04:51:29.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-26T04:51:29.000Z", "max_forks_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "clearmatics/zeth-specifications", "max_forks_repo_path": "chapters/chap04-sec03.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_issues_repo_issues_event_max_datetime": "2021-04-16T10:57:05.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-27T10:41:50.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "clearmatics/zeth-specifications", "max_issues_repo_path": "chapters/chap04-sec03.tex", "max_line_length": 732, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "clearmatics/zeth-specifications", "max_stars_repo_path": "chapters/chap04-sec03.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-29T18:22:00.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-29T18:22:00.000Z", "num_tokens": 11105, "size": 35872 }
\section{Binning optimization} \label{appendix:Binningopt}
{ "alphanum_fraction": 0.8166666667, "avg_line_length": 15, "ext": "tex", "hexsha": "0429af2dad57b58204cbde4e83771d0c988d6a47", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "phreborn/vbfcp_INT", "max_forks_repo_path": "tex/appendices/App_BinningOpt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "phreborn/vbfcp_INT", "max_issues_repo_path": "tex/appendices/App_BinningOpt.tex", "max_line_length": 30, "max_stars_count": null, "max_stars_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "phreborn/vbfcp_INT", "max_stars_repo_path": "tex/appendices/App_BinningOpt.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 16, "size": 60 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ english, man,floatsintext]{apa6} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Discriminatory Experiences, Chronic Strain, Social Connectedness, and Psychological Wellbeing Among Individuals With Marginalized Sexual Orientations}, pdfauthor={Maggie Head1, Sarah Spafford1, \& Heather Terral1}, pdfkeywords={minority stress, sexual minorities, LGBQ, health, social connectedness}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{longtable,booktabs} % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering % Make \paragraph and \subparagraph free-standing \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi % Manuscript styling \usepackage{upgreek} \captionsetup{font=singlespacing,justification=justified} % Table formatting \usepackage{longtable} \usepackage{lscape} % \usepackage[counterclockwise]{rotating} % Landscape page setup for large tables \usepackage{multirow} % Table styling \usepackage{tabularx} % Control Column width \usepackage[flushleft]{threeparttable} % Allows for three part tables with a specified notes 
section \usepackage{threeparttablex} % Lets threeparttable work with longtable % Create new environments so endfloat can handle them % \newenvironment{ltable} % {\begin{landscape}\begin{center}\begin{threeparttable}} % {\end{threeparttable}\end{center}\end{landscape}} \newenvironment{lltable}{\begin{landscape}\begin{center}\begin{ThreePartTable}}{\end{ThreePartTable}\end{center}\end{landscape}} % Enables adjusting longtable caption width to table width % Solution found at http://golatex.de/longtable-mit-caption-so-breit-wie-die-tabelle-t15767.html \makeatletter \newcommand\LastLTentrywidth{1em} \newlength\longtablewidth \setlength{\longtablewidth}{1in} \newcommand{\getlongtablewidth}{\begingroup \ifcsname LT@\roman{LT@tables}\endcsname \global\longtablewidth=0pt \renewcommand{\LT@entry}[2]{\global\advance\longtablewidth by ##2\relax\gdef\LastLTentrywidth{##2}}\@nameuse{LT@\roman{LT@tables}} \fi \endgroup} % \setlength{\parindent}{0.5in} % \setlength{\parskip}{0pt plus 0pt minus 0pt} % \usepackage{etoolbox} \makeatletter \patchcmd{\HyOrg@maketitle} {\section{\normalfont\normalsize\abstractname}} {\section*{\normalfont\normalsize\abstractname}} {}{\typeout{Failed to patch abstract.}} \patchcmd{\HyOrg@maketitle} {\section{\protect\normalfont{\@title}}} {\section*{\protect\normalfont{\@title}}} {}{\typeout{Failed to patch title.}} \makeatother \shorttitle{EDLD 651 Final Project} \keywords{minority stress, sexual minorities, LGBQ, health, social connectedness} \usepackage{csquotes} \raggedbottom \setlength{\parskip}{0pt} \ifxetex % Load polyglossia as late as possible: uses bidi with RTL langages (e.g. Hebrew, Arabic) \usepackage{polyglossia} \setmainlanguage[]{english} \else \usepackage[shorthands=off,main=english]{babel} \fi \title{Discriminatory Experiences, Chronic Strain, Social Connectedness, and Psychological Wellbeing Among Individuals With Marginalized Sexual Orientations} \author{Maggie Head\textsuperscript{1}, Sarah Spafford\textsuperscript{1}, \& Heather Terral\textsuperscript{1}} \date{} \authornote{ This study utilized data from Project STRIDE: Stress, Identity and Mental Health, which was funded by the National Institutes of Health/National Institute of Mental Health (Grant\#: 5R01MH066058-03). Correspondence concerning this article should be addressed to Maggie Head, 1215 University of Oregon, Eugene, OR 97403-1215. E-mail: \href{mailto:[email protected]}{\nolinkurl{[email protected]}} } \affiliation{\vspace{0.5cm}\textsuperscript{1} University of Oregon} \abstract{ Individuals with marginalized sexual orientations experience higher rates of physical and psychiatric comorbidities compared to their heterosexual counterparts. These disparities are considered the result of minority stress, such that the stress attached to navigating pervasive prejudice and discrimination precipitates deleterious mental health outcomes. Less is known about factors that are related to positive mental health outcomes in individuals with marginalized sexual orientations. Using data from 360 men and women with marginalized sexual orientations (i.e., lesbian, gay, bisexual, queer, or other LGB orientation) who participated in a three year longitudinal study in New York City, we examined the links between discriminatory experiences, chronic strain, social connectedness to the gay community, and psychological wellbeing. Results from a multiple regression analysis revealed discriminatory experiences and chronic strain were significantly negatively associated with psychological wellbeing. 
Consistent with hypotheses, social connectedness was significantly positively associated with psychological wellbeing. These findings provide further evidence for the relationship between minority stress and mental health and highlight the importance of social connectedness in promoting psychological wellbeing among LGBQ individuals. } \begin{document} \maketitle Inherent to living with a marginalized identity is the excess stress that accompanies stigma-related experiences and discriminatory conditions (Frost, Lehavot, \& Meyer, 2015). An extensive body of literature demonstrates that chronic exposure to stress compromises physical and mental health (see Thoits, 2010), and ultimately elevates susceptibility to a myriad of physiological and psychiatric disorders (Salleh, 2008). It is not surprising, then, that individuals who identify as gay, bisexual, lesbian, and queer (LGBQ) experience higher rates of psychopathology than their heterosexual counterparts, including substance use disorders (Green \& Feinstein, 2012), eating disorders (Parker \& Harriger, 2020), deliberate self-injury (King et al., 2008), suicidality, and suicide attempts (Haas et al., 2010). The term \enquote{minority stress} has been used to describe the phenomenon of elevated mental health concerns resulting from the societal stigmatization of LGBQ sexual orientation status (Meyer, 1995). The link between minority stress and poor health outcomes may be direct, such that discriminatory experiences lead to increased cortisol (Korous, Causadias, \& Casper, 2017) and cardiovascular reactivity (Panza et al., 2019). However, minority stress may also impact health indirectly through the cognitive burden, strain, and behavioral coping strategies that are required to navigate marginalization (Frost et al., 2015). Given that morbidity and mortality are intimately tied to social and interpersonal conditions, researchers have come to recognize the importance of relationships and support (Cohen, 2004 ; Pescosolido, 2011). Social connectedness, which refers to the sense of subjective belonging that people feel in relation to individuals and groups of others, is considered a pivotal factor in individual and population-level health (Haslam, Cruwys, Haslam, \& Jetten, 2015). Burgeoning evidence indicates that, among individuals with marginalized identities, connection with others who are marginalized for the same characteristic may mitigate detrimental stress responses (Austin \& Goodman, 2017). Indeed, social connectedness is associated with positive health outcomes and has been found to buffer the negative effects of discrimination and perceived stress among many groups of marginalized individuals (Kim \& Fredriksen-Goldsen, 2017; Liao, Weng, \& West, 2016; Liu, Li, Wang, Wei, \& Ko, 2020). Yet, social connectedness is markedly overlooked in research examining the health of LGBQ individuals. Thus, the purpose of the current study was to examine the relationships between discriminatory experiences, chronic strain, social connectedness, and psychological wellbeing among LGBQ individuals. \hypertarget{methods}{% \section{Methods}\label{methods}} \hypertarget{participants}{% \subsection{Participants}\label{participants}} Project STRIDE (Meyer, Dohrenwend, Schwartz, Hunter, \& Kertzner, 2016) participants included individuals who had been residing in New York City for a minimum of two years, self-identified as lesbian, gay, bisexual (LGB), or straight, and self-identified as White, Black, or Latino (Meyer et al., 2016). 
Participants were excluded from the present study if they identified as heterosexual or did not complete the main study measures, yielding a final analytic sample of \emph{N} = 360. Participants were aged 18-59 years (\emph{M} = 32.41, \emph{SD} = 9.25) and were predominantly White (34\%), followed by Black/African-American (33\%) and Latino/Hispanic (32\%). The distribution of sexual orientations in the study sample can be seen in Table 1.

\newpage

\textbf{Table 1.} \emph{Distribution of self-identified sexual orientations}

\begin{tabular}{l|r} \hline Sexual Orientation & Count\\ \hline Gay & 160\\ \hline Lesbian & 104\\ \hline Queer & 12\\ \hline Bisexual & 63\\ \hline Homosexual & 16\\ \hline Other - LGB & 5\\ \hline \end{tabular}

\hypertarget{measures}{%
\subsection{Measures}\label{measures}}

\hypertarget{discriminatory-experiences}{%
\subsubsection{Discriminatory experiences}\label{discriminatory-experiences}}

The 8-item discriminatory experiences measure was adapted from Williams, Yu, Jackson, and Anderson (1997) to be inclusive of all minority groups (e.g.~gender minorities). This scale assessed how often discriminatory experiences (e.g.~being treated with less respect, being threatened or harassed) occurred throughout participants' lifetimes. Each question was rated on a 4-point scale (1 = \emph{\enquote{often}} through 4 = \emph{\enquote{never}}) and coded so that higher scores represented more discriminatory experiences (Meyer, Frost, Narvaez, \& Dietrich, 2006). For these analyses, the total number of types (0-8) of everyday discrimination experiences was used.

\hypertarget{chronic-strain}{%
\subsubsection{Chronic strain}\label{chronic-strain}}

The chronic strain measure was adapted from a scale by Wheaton (1999), which measures strain in nine areas of life, including general problems, financial issues, work relationships, parenting, family, social life, residence, and health. Responses were coded such that higher scores indicated higher levels of chronic strain (Meyer et al., 2006).

\hypertarget{social-connectedness}{%
\subsubsection{Social connectedness}\label{social-connectedness}}

Social connectedness was contextualized as connectedness to the gay community, as measured by an 8-item scale adapted from Mills et al. (2001) to be more relevant to the geographic area. Each response was rated from 1 (\emph{\enquote{agree strongly}}) to 4 (\emph{\enquote{disagree strongly}}) and coded so that higher scores indicated a greater level of connectedness to the gay community (Meyer et al., 2006).

\hypertarget{psychological-wellbeing}{%
\subsubsection{Psychological wellbeing}\label{psychological-wellbeing}}

Psychological wellbeing was assessed using an 18-item measure adapted from Ryff (1989) and Ryff and Keyes (1995). This measure assessed psychological wellbeing on six dimensions: self-acceptance, purpose in life, environmental mastery, positive relations with others, personal growth, and autonomy. All responses were coded such that higher scores indicated higher levels of wellbeing (Meyer et al., 2006).

\hypertarget{procedure}{%
\subsection{Procedure}\label{procedure}}

For additional details on data collection procedures for Project STRIDE, please see Meyer et al. (2006).

\hypertarget{data-analytic-strategy-and-hypotheses}{%
\subsection{Data Analytic Strategy and Hypotheses}\label{data-analytic-strategy-and-hypotheses}}

Prior to the main analysis, data were screened for missingness. Pearson bivariate correlations were conducted among discriminatory experiences, chronic strain, social connectedness, and psychological wellbeing.
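As a minimal, illustrative sketch of this preliminary step (the data frame \texttt{stride\_data} and the variable names \texttt{discrim}, \texttt{strain}, \texttt{connect}, and \texttt{wellbeing} are placeholders, not the actual Project STRIDE column names), the bivariate correlations could be computed in R roughly as follows:

\begin{verbatim}
# Sketch only: Pearson correlations among the four study variables.
# The data frame and variable names are illustrative placeholders.
library(dplyr)

study_vars <- stride_data %>%
  select(discrim, strain, connect, wellbeing)

# Pairwise Pearson correlations, dropping incomplete pairs
cor(study_vars, use = "pairwise.complete.obs", method = "pearson")
\end{verbatim}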
To examine the proposed model, a multivariate regression analysis was conducted. Discriminatory experiences, chronic strain, and social connectedness were entered as the predictor variables. Psychological wellbeing was entered as the outcome variable. We expected a negative association between discriminatory experiences and psychological wellbeing (Hypothesis 1). We also expected a negative association between chronic strain and psychological wellbeing (Hypothesis 2). In contrast, we expected a positive association between social connectedness and psychological wellbeing (Hypothesis 3). We used R (Version 3.6.2; R Core Team, 2020) and the R-packages \emph{apaTables} (Version 2.0.5; Stanley, 2018), \emph{dplyr} (Version 1.0.2; Wickham et al., 2020), \emph{forcats} (Version 0.4.0; Wickham, 2019a), \emph{gdtools} (Version 0.2.2; Gohel, Wickham, Henry, \& Ooms, 2020), \emph{ggiraphExtra} (Version 0.3.0; Moon, 2020), \emph{ggplot2} (Version 3.3.2; Wickham, 2016), \emph{haven} (Version 2.2.0; Wickham \& Miller, 2020), \emph{janitor} (Version 2.0.1; Firke, 2020), \emph{knitr} (Version 1.28; Xie, 2015), \emph{lavaan} (Version 0.6.7; Rosseel, 2012; Lishinski, 2018), \emph{lavaanPlot} (Version 0.5.1; Lishinski, 2018), \emph{lm.beta} (Version 1.5.1; Behrendt, 2014), \emph{magick} (Version 2.5.2; Ooms, 2020, 2020), \emph{papaja} (Version 0.1.0.9997; Aust \& Barth, 2020), \emph{probemod} (Version 0.2.1; Tan, 2015), \emph{psych} (Version 2.0.9; Revelle, 2020), \emph{purrr} (Version 0.3.3; Henry \& Wickham, 2019), \emph{qwraps2} (Version 0.5.0; DeWitt, 2020), \emph{readr} (Version 1.3.1; Wickham, Hester, \& Francois, 2018), \emph{rio} (Version 0.5.16; Chan, Chan, Leeper, \& Becker, 2018), \emph{rockchalk} (Version 1.8.144; Johnson, 2019), \emph{stringr} (Version 1.4.0; Wickham, 2019b), \emph{tibble} (Version 3.0.3; Müller \& Wickham, 2020), \emph{tidyr} (Version 1.0.2; Wickham \& Henry, 2020), and \emph{tidyverse} (Version 1.3.0; Wickham, Averick, et al., 2019) for all our analyses. \hypertarget{results}{% \section{Results}\label{results}} \hypertarget{preliminary-analyses}{% \subsection{Preliminary Analyses}\label{preliminary-analyses}} Missing data were minimal; thus, listwise deletion was employed. Means, standard deviations, minimum and maximum values of the main study measures for the total sample can be seen in Table 2. Means, standard deviations, minimum and maximum values of the main study variables according to sexual orientation can be seen in Table 3. Of particular concern is the substantial number of discriminatory experiences reported by participants. Figure 1 displays the average number of everyday discriminatory experiences according to sexual orientation. Pearson bivariate correlations revealed small to moderate correlations among the main study variables (see Figure 2). 
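As a hedged sketch of the primary model described in the Data Analytic Strategy section (again using the placeholder names from the correlation sketch, not the actual column names), one possible specification uses \texttt{lm()} together with the \texttt{lm.beta} package (Behrendt, 2014) listed above to obtain standardized coefficients:

\begin{verbatim}
# Sketch only: multiple regression of psychological wellbeing on the
# three predictors (placeholder data frame and variable names).
library(lm.beta)

fit <- lm(wellbeing ~ discrim + strain + connect, data = stride_data)
summary(fit)   # coefficients, t-tests, and R-squared
lm.beta(fit)   # standardized (beta) coefficients
\end{verbatim}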
\newpage \textbf{Table 2.} \emph{Descriptive statistics for main study variables.} \begin{longtable}[]{@{}ll@{}} \toprule & stridy (N = 360)\tabularnewline \midrule \endhead \textbf{Everyday Discrmination} & ~~\tabularnewline ~~ min & 0\tabularnewline ~~ median & 7\tabularnewline ~~ max & 8\tabularnewline ~~ mean (sd) & 6.59 ± 1.86\tabularnewline \textbf{Chronic Strain} & ~~\tabularnewline ~~ min & 1\tabularnewline ~~ median & 1.67\tabularnewline ~~ max & 3\tabularnewline ~~ mean (sd) & 1.71 ± 0.55\tabularnewline \textbf{Psychological Wellbeing} & ~~\tabularnewline ~~ min & 3\tabularnewline ~~ median & 5.56\tabularnewline ~~ max & 7\tabularnewline ~~ mean (sd) & 5.47 ± 0.79\tabularnewline \textbf{Social Connectedness} & ~~\tabularnewline ~~ min & 1.38\tabularnewline ~~ median & 3.38\tabularnewline ~~ max & 4\tabularnewline ~~ mean (sd) & 3.29 ± 0.51\tabularnewline \bottomrule \end{longtable} \newpage \textbf{Table 3.} \emph{Descriptive statistics for main study variables by sexual orientation} \begin{longtable}[]{@{}lllllll@{}} \toprule \begin{minipage}[b]{0.16\columnwidth}\raggedright \strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Gay (N = 160)\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Lesbian (N = 104)\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Queer (N = 12)\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Bisexual (N = 63)\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Homosexual (N = 16)\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Other - LGB (N = 5)\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.16\columnwidth}\raggedright \textbf{Everyday Discrmination}\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ min\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 0\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 0\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 0\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 0\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ median\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ max\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & 
\begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 8\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ mean (sd)\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.63 ± 1.72\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.52 ± 1.99\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7.25 ± 0.87\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.43 ± 2.13\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.88 ± 2.03\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.40 ± 1.14\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright \textbf{Chronic Strain}\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ min\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.33\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ median\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.67\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.67\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.5\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.33\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ max\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.67\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.67\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.67\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ mean (sd)\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.65 ± 0.53\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.77 ± 0.58\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.64 ± 0.61\strut 
\end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.88 ± 0.51\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.35 ± 0.26\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.87 ± 0.56\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright \textbf{Psychological Wellbeing}\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ min\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.41\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4.29\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.18\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.12\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.88\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ median\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.62\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.53\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.03\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.24\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.74\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.12\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ max\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.82\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 7\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.82\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 6.59\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.76\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ mean (sd)\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.51 ± 0.79\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.53 ± 0.70\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.75 ± 0.78\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.24 ± 0.85\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 5.47 ± 1.01\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4.95 ± 0.72\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright \textbf{Social Connectedness}\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & 
\begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright ~~\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ min\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.38\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.12\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.25\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 1.88\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.62\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.12\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ median\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.25\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.38\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.44\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.12\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.5\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.75\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ max\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 4\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.88\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.75\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.16\columnwidth}\raggedright ~~ mean (sd)\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.26 ± 0.54\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.41 ± 0.45\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.51 ± 0.25\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.14 ± 0.51\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 3.38 ± 0.40\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright 2.95 ± 0.71\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \textbf{Figure 1.} \emph{Experiences of everyday discrimination according to sexual orientation.} \includegraphics{prep_script_files/figure-latex/meanplot-1.pdf} \newpage \textbf{Figure 2.} \emph{Correlation Panels and Distributions For All Variables Included in the Model.} \includegraphics{prep_script_files/figure-latex/correlation panels-1.pdf} \hypertarget{primary-analyses}{% \subsection{Primary Analyses}\label{primary-analyses}} A multiple regression analysis was conducted to examine the effects of discriminatory experiences, chronic strain, social connectedness on psychological wellbeing among LGBQ individuals. Consistent with Hypothesis 1, discriminatory experiences were negatively associated with psychological wellbeing, \(\hat{\beta_{1}}=-0.05, SE(\hat{\beta_{1}})=-0.11, t(356)=-2.14, p=.03\). Likewise, consistent with Hypothesis 2, chronic strain was significantly negatively associated with psychological wellbeing, \(\hat{\beta_{2}}=-0.29, SE(\hat{\beta_{2}})=-0.20, t(356)=-3.91, p < .001\). 
Consistent with Hypothesis 3, social connectedness was significantly positively associated with psychological wellbeing, \(\hat{\beta_{3}}=0.24, SE(\hat{\beta_{3}})=0.15, t(356)=2.99, p < .001\). Taken together, all three predictors explained approximately 7.7\% of the variance in psychological wellbeing, \(F(3,356)=9.90, p<.001, R^{2}=.077\). Figure 3 displays the relationship between everyday discrimination and psychological wellbeing. Figure 4 displays the path model with the corresponding beta coefficients.

\textbf{Figure 3.} \emph{Linear regression results demonstrating the effect of discrimination on psychological wellbeing.}

\includegraphics{prep_script_files/figure-latex/regression plot-1.pdf}

\newpage

\textbf{Figure 4.} \emph{Path model for the effect of discrimination, chronic strain, and social connectedness on psychological wellbeing.}

\hypertarget{htmlwidget-d9d800e45ebbfe8e0658}{}

\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}

The finding that social connectedness was positively associated with psychological wellbeing is both intuitive and consistent with the literature. For members of the LGB community, physical and virtual spaces in which members can create and maintain meaningful relationships with each other may be one way to increase resilience within these groups. Although this analysis did not examine how these variables may vary by additional identities, such as race/ethnicity and gender, intersectionality theory suggests that members of the LGB community who simultaneously hold additional marginalized identities would experience and report higher levels of discrimination and strain than their relatively more privileged peers. In considering interventions to improve health and wellbeing among LGB communities, it is important not to place the burden on these groups; a preventative approach in line with a social ecological perspective must also include mechanisms designed to reduce both implicit and explicit bias in the general population. Approval of LGBT+ individuals has decreased among young people in the United States ages 18-34 (Miller, 2019). Furthermore, the presidential administration of 2016-2020 took several actions during its tenure that further marginalized this population (Acosta, 2020). Examples include the removal of the LGBT issues page from the White House website within hours of the administration's commencement and the attempted exclusion of trans people from United States military service. Although this study did not measure social connectedness outside of the LGB community, future research examining perceived acceptance by the general population may better inform health promotion interventions for this group.

\hypertarget{strengths-and-limitations}{%
\subsection{Strengths and Limitations}\label{strengths-and-limitations}}

The availability and tidiness of the data from this robust sample gathered by Meyer et al. (2016) are a clear strength of the present work. However, it is unclear whether these results generalize to LGB individuals outside of New York City, particularly those in rural areas. Future work may benefit from examining whether LGB individuals who relocate to large cities experience similar rates of discrimination or are systematically different from community members who have not relocated. Additionally, the intentional exclusion of trans people from this sample limits the richness and interpretability of this dataset.
\newpage \hypertarget{references}{% \section{References}\label{references}} \begingroup \setlength{\parindent}{-0.5in} \setlength{\leftskip}{0.5in} \hypertarget{refs}{} \leavevmode\hypertarget{ref-Acosta2020}{}% Acosta, L. (2020). A list of trump's "unprecedented steps" for the lgbtq community. Retrieved from \url{https://www.hrc.org/news/the-list-of-trumps-unprecedented-steps-for-the-lgbtq-community} \leavevmode\hypertarget{ref-R-papaja}{}% Aust, F., \& Barth, M. (2020). \emph{papaja: Create APA manuscripts with R Markdown}. Retrieved from \url{https://github.com/crsh/papaja} \leavevmode\hypertarget{ref-austin2017}{}% Austin, A., \& Goodman, R. (2017). The impact of social connectedness and internalized transphobic stigma on self-esteem among transgender and gender non-conforming adults. \emph{Journal of Homosexuality}, \emph{64}(6), 825--841. \leavevmode\hypertarget{ref-R-lm.beta}{}% Behrendt, S. (2014). \emph{Lm.beta: Add standardized regression coefficients to lm-objects}. Retrieved from \url{https://CRAN.R-project.org/package=lm.beta} \leavevmode\hypertarget{ref-R-rio}{}% Chan, C.-h., Chan, G. C., Leeper, T. J., \& Becker, J. (2018). \emph{Rio: A swiss-army knife for data file i/o}. \leavevmode\hypertarget{ref-cohen2004}{}% Cohen, S. (2004). Social relationships and health. \emph{American Psychologist}, \emph{59}(8), 676. \leavevmode\hypertarget{ref-R-qwraps2}{}% DeWitt, P. (2020). \emph{Qwraps2: Quick wraps 2}. Retrieved from \url{https://CRAN.R-project.org/package=qwraps2} \leavevmode\hypertarget{ref-R-janitor}{}% Firke, S. (2020). \emph{Janitor: Simple tools for examining and cleaning dirty data}. Retrieved from \url{https://CRAN.R-project.org/package=janitor} \leavevmode\hypertarget{ref-frost2015}{}% Frost, D. M., Lehavot, K., \& Meyer, I. H. (2015). Minority stress and physical health among sexual minority individuals. \emph{Journal of Behavioral Medicine}, \emph{38}(1), 1--8. \leavevmode\hypertarget{ref-R-gdtools}{}% Gohel, D., Wickham, H., Henry, L., \& Ooms, J. (2020). \emph{Gdtools: Utilities for graphical rendering}. Retrieved from \url{https://CRAN.R-project.org/package=gdtools} \leavevmode\hypertarget{ref-green2012}{}% Green, K. E., \& Feinstein, B. A. (2012). Substance use in lesbian, gay, and bisexual populations: An update on empirical research and implications for treatment. \emph{Psychology of Addictive Behaviors}, \emph{26}(2), 265. \leavevmode\hypertarget{ref-haas2010}{}% Haas, A. P., Eliason, M., Mays, V. M., Mathy, R. M., Cochran, S. D., D'Augelli, A. R., \ldots{} others. (2010). Suicide and suicide risk in lesbian, gay, bisexual, and transgender populations: Review and recommendations. \emph{Journal of Homosexuality}, \emph{58}(1), 10--51. \leavevmode\hypertarget{ref-haslam2015}{}% Haslam, C., Cruwys, T., Haslam, S. A., \& Jetten, J. (2015). Social connectedness and health. \emph{Encyclopaedia of Geropsychology}, \emph{2015}, 46--41. \leavevmode\hypertarget{ref-R-purrr}{}% Henry, L., \& Wickham, H. (2019). \emph{Purrr: Functional programming tools}. Retrieved from \url{https://CRAN.R-project.org/package=purrr} \leavevmode\hypertarget{ref-R-rockchalk}{}% Johnson, P. E. (2019). \emph{Rockchalk: Regression estimation and presentation}. Retrieved from \url{https://CRAN.R-project.org/package=rockchalk} \leavevmode\hypertarget{ref-kim2017}{}% Kim, H.-J., \& Fredriksen-Goldsen, K. I. (2017). 
Disparities in mental health quality of life between hispanic and non-hispanic white lgb midlife and older adults and the influence of lifetime discrimination, social connectedness, socioeconomic status, and perceived stress. \emph{Research on Aging}, \emph{39}(9), 991--1012. \leavevmode\hypertarget{ref-king2008}{}% King, M., Semlyen, J., Tai, S. S., Killaspy, H., Osborn, D., Popelyuk, D., \& Nazareth, I. (2008). A systematic review of mental disorder, suicide, and deliberate self harm in lesbian, gay and bisexual people. \emph{BMC Psychiatry}, \emph{8}(1), 70. \leavevmode\hypertarget{ref-korous2017}{}% Korous, K. M., Causadias, J. M., \& Casper, D. M. (2017). Racial discrimination and cortisol output: A meta-analysis. \emph{Social Science \& Medicine}, \emph{193}, 90--100. \leavevmode\hypertarget{ref-liao2016}{}% Liao, K. Y.-H., Weng, C.-Y., \& West, L. M. (2016). Social connectedness and intolerance of uncertainty as moderators between racial microaggressions and anxiety among black individuals. \emph{Journal of Counseling Psychology}, \emph{63}(2), 240. \leavevmode\hypertarget{ref-R-lavaanPlot}{}% Lishinski, A. (2018). \emph{LavaanPlot: Path diagrams for lavaan models via diagrammer}. Retrieved from \url{https://CRAN.R-project.org/package=lavaanPlot} \leavevmode\hypertarget{ref-liu2020}{}% Liu, S., Li, C.-I., Wang, C., Wei, M., \& Ko, S. (2020). Self-compassion and social connectedness buffering racial discrimination on depression among asian americans. \emph{Mindfulness}, \emph{11}(3), 672--682. \leavevmode\hypertarget{ref-meyer1995}{}% Meyer, I. H. (1995). Minority stress and mental health in gay men. \emph{Journal of Health and Social Behavior}, 38--56. \leavevmode\hypertarget{ref-projectstride}{}% Meyer, I. H., Dohrenwend, B. P., Schwartz, S., Hunter, J., \& Kertzner, R. M. (2016). Project stride: Stress, identity, and mental health, new york city, 2004-2005: Version 2. Inter-University Consortium for Political; Social Research. \url{https://doi.org/10.3886/ICPSR35525.V2} \leavevmode\hypertarget{ref-projectstridemethod}{}% Meyer, I. H., Frost, D. M., Narvaez, R., \& Dietrich, J. H. (2006). Project stride methodology and technical notes. \emph{Unpublished Manuscript}. \leavevmode\hypertarget{ref-Miller2019}{}% Miller, S. (2019). The young are regarded as the most tolerant generation. That's why results of this lgbtq survey are 'alarming'. Retrieved from \url{https://www.usatoday.com/story/news/nation/2019/06/24/lgbtq-acceptance-millennials-decline-glaad-survey/1503758001/} \leavevmode\hypertarget{ref-mills2001}{}% Mills, T. C., Stall, R., Pollack, L., Paul, J. P., Binson, D., Canchola, J., \& Catania, J. A. (2001). Health-related characteristics of men who have sex with men: A comparison of those living in" gay ghettos" with those living elsewhere. \emph{American Journal of Public Health}, \emph{91}(6), 980. \leavevmode\hypertarget{ref-R-ggiraphExtra}{}% Moon, K.-W. (2020). \emph{GgiraphExtra: Make interactive 'ggplot2'. Extension to 'ggplot2' and 'ggiraph'}. Retrieved from \url{https://CRAN.R-project.org/package=ggiraphExtra} \leavevmode\hypertarget{ref-R-tibble}{}% Müller, K., \& Wickham, H. (2020). \emph{Tibble: Simple data frames}. Retrieved from \url{https://CRAN.R-project.org/package=tibble} \leavevmode\hypertarget{ref-R-magick}{}% Ooms, J. (2020). \emph{Magick: Advanced graphics and image-processing in r}. Retrieved from \url{https://CRAN.R-project.org/package=magick} \leavevmode\hypertarget{ref-panza2019}{}% Panza, G. A., Puhl, R. M., Taylor, B. A., Zaleski, A. 
L., Livingston, J., \& Pescatello, L. S. (2019). Links between discrimination and cardiovascular health among socially stigmatized groups: A systematic review. \emph{PloS One}, \emph{14}(6), e0217623. \leavevmode\hypertarget{ref-parker2020}{}% Parker, L. L., \& Harriger, J. A. (2020). Eating disorders and disordered eating behaviors in the lgbt population: A review of the literature. \emph{Journal of Eating Disorders}, \emph{8}(1), 1--20. \leavevmode\hypertarget{ref-pescosolido2011}{}% Pescosolido, B. (2011). Social connectedness in health, morbidity and mortality, and health care-the contributions, limits and further potential of health and retirement study. In \emph{Forum for health economics \& policy} (Vol. 14). De Gruyter. \leavevmode\hypertarget{ref-R-base}{}% R Core Team. (2020). \emph{R: A language and environment for statistical computing}. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from \url{https://www.R-project.org/} \leavevmode\hypertarget{ref-R-psych}{}% Revelle, W. (2020). \emph{Psych: Procedures for psychological, psychometric, and personality research}. Evanston, Illinois: Northwestern University. Retrieved from \url{https://CRAN.R-project.org/package=psych} \leavevmode\hypertarget{ref-R-lavaan}{}% Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. \emph{Journal of Statistical Software}, \emph{48}(2), 1--36. Retrieved from \url{http://www.jstatsoft.org/v48/i02/} \leavevmode\hypertarget{ref-ryff1989}{}% Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. \emph{Journal of Personality and Social Psychology}, \emph{57}(6), 1069. \leavevmode\hypertarget{ref-ryffkeyes1995}{}% Ryff, C. D., \& Keyes, C. L. M. (1995). The structure of psychological well-being revisited. \emph{Journal of Personality and Social Psychology}, \emph{69}(4), 719. \leavevmode\hypertarget{ref-salleh2008}{}% Salleh, M. R. (2008). Life event, stress and illness. \emph{The Malaysian Journal of Medical Sciences: MJMS}, \emph{15}(4), 9. \leavevmode\hypertarget{ref-R-apaTables}{}% Stanley, D. (2018). \emph{ApaTables: Create american psychological association (apa) style tables}. Retrieved from \url{https://CRAN.R-project.org/package=apaTables} \leavevmode\hypertarget{ref-R-probemod}{}% Tan, J. C. (2015). \emph{Probemod: Statistical tools for probing moderation effects}. Retrieved from \url{https://CRAN.R-project.org/package=probemod} \leavevmode\hypertarget{ref-thotis2010}{}% Thoits, P. A. (2010). Stress and health: Major findings and policy implications. \emph{Journal of Health and Social Behavior}, \emph{51}(1\_suppl), S41--S53. \url{https://doi.org/10.1177/0022146510383499} \leavevmode\hypertarget{ref-wheaton1999}{}% Wheaton, B. (1999). The nature of stressors. In A. F. Horwitz \& T. L. Scheid (Eds.), \emph{A handbook for the study of mental health: Social contexts, theories, and systems} (pp. 176--197). New York: Oxford University Press. \leavevmode\hypertarget{ref-R-ggplot2}{}% Wickham, H. (2016). \emph{Ggplot2: Elegant graphics for data analysis}. Springer-Verlag New York. Retrieved from \url{https://ggplot2.tidyverse.org} \leavevmode\hypertarget{ref-R-forcats}{}% Wickham, H. (2019a). \emph{Forcats: Tools for working with categorical variables (factors)}. Retrieved from \url{https://CRAN.R-project.org/package=forcats} \leavevmode\hypertarget{ref-R-stringr}{}% Wickham, H. (2019b). \emph{Stringr: Simple, consistent wrappers for common string operations}. 
Retrieved from \url{https://CRAN.R-project.org/package=stringr} \leavevmode\hypertarget{ref-R-tidyverse}{}% Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., \ldots{} Yutani, H. (2019). Welcome to the tidyverse. \emph{Journal of Open Source Software}, \emph{4}(43), 1686. \url{https://doi.org/10.21105/joss.01686} \leavevmode\hypertarget{ref-R-dplyr}{}% Wickham, H., François, R., Henry, L., \& Müller, K. (2020). \emph{Dplyr: A grammar of data manipulation}. Retrieved from \url{https://CRAN.R-project.org/package=dplyr} \leavevmode\hypertarget{ref-R-tidyr}{}% Wickham, H., \& Henry, L. (2020). \emph{Tidyr: Tidy messy data}. Retrieved from \url{https://CRAN.R-project.org/package=tidyr} \leavevmode\hypertarget{ref-R-readr}{}% Wickham, H., Hester, J., \& Francois, R. (2018). \emph{Readr: Read rectangular text data}. Retrieved from \url{https://CRAN.R-project.org/package=readr} \leavevmode\hypertarget{ref-R-haven}{}% Wickham, H., \& Miller, E. (2020). \emph{Haven: Import and export 'spss', 'stata' and 'sas' files}. Retrieved from \url{https://CRAN.R-project.org/package=haven} \leavevmode\hypertarget{ref-williams1997}{}% Williams, D. R., Yu, Y., Jackson, J. S., \& Anderson, N. B. (1997). Racial differences in physical and mental health: Socio-economic status, stress and discrimination. \emph{Journal of Health Psychology}, \emph{2}(3), 335--351. \leavevmode\hypertarget{ref-R-knitr}{}% Xie, Y. (2015). \emph{Dynamic documents with R and knitr} (2nd ed.). Boca Raton, Florida: Chapman; Hall/CRC. Retrieved from \url{https://yihui.org/knitr/} \endgroup \end{document}
{ "alphanum_fraction": 0.7632152007, "avg_line_length": 57.8271604938, "ext": "tex", "hexsha": "67a708271622c49acecca74a187d56abd730d51f", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2020-11-18T18:36:01.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-17T14:41:18.000Z", "max_forks_repo_head_hexsha": "5e96b705983c75b32bdaf421e76a5781c7d82fb4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sarahgspafford/edld651_finalproj", "max_forks_repo_path": "prep_script.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5e96b705983c75b32bdaf421e76a5781c7d82fb4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sarahgspafford/edld651_finalproj", "max_issues_repo_path": "prep_script.tex", "max_line_length": 1438, "max_stars_count": null, "max_stars_repo_head_hexsha": "5e96b705983c75b32bdaf421e76a5781c7d82fb4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sarahgspafford/edld651_finalproj", "max_stars_repo_path": "prep_script.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 15231, "size": 46840 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{iftex} \ifPDFTeX \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={SCMA 470 : Risk Analysis and Credibility Tutorial 3}, pdfauthor={Pairote Satiracoo}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} 
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs,array} \usepackage{calc} % for calculating minipage widths % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} \usepackage{booktabs} \usepackage{amsthm} \usepackage{LectureNoteMacro} \usepackage{bbm} \usepackage{mathtools} \makeatletter \def\thm@space@setup{% \thm@preskip=8pt plus 2pt minus 4pt \thm@postskip=\thm@preskip } \makeatother \ifLuaTeX \usepackage{selnolig} % disable illegal ligatures \fi \usepackage[]{natbib} \bibliographystyle{apalike} \title{\textbf{SCMA 470 : Risk Analysis and Credibility} \textbf{Tutorial 3}} \author{Pairote Satiracoo} \date{2021-12-06} \usepackage{amsthm} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{lemma}{Lemma}[chapter] \newtheorem{corollary}{Corollary}[chapter] \newtheorem{proposition}{Proposition}[chapter] \newtheorem{conjecture}{Conjecture}[chapter] \theoremstyle{definition} \newtheorem{definition}{Definition}[chapter] \theoremstyle{definition} \newtheorem{example}{Example}[chapter] \theoremstyle{definition} \newtheorem{exercise}{Exercise}[chapter] \theoremstyle{definition} \newtheorem{hypothesis}{Hypothesis}[chapter] \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{solution}{Solution} \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \hypertarget{basic-probability-concepts}{% \chapter{Basic Probability Concepts}\label{basic-probability-concepts}} \hypertarget{random-variables}{% \section{Random Variables}\label{random-variables}} \begin{definition} \protect\hypertarget{def:unlabeled-div-1}{}\label{def:unlabeled-div-1} \emph{Let \(S\) be the sample space of an experiment. A real-valued function \(X : S \rightarrow \mathbb{R}\) is called a \textbf{random variable} of the experiment if, for each interval \(I \subset \mathbb{R}, \, \{s : X(s) \in I \}\) is an event. } \end{definition} Random variables are often used for the calculation of the probabilities of events. The real-valued function \(P(X \le t)\) characterizes \(X\), it tells us almost everything about \(X\). This function is called the \textbf{cumulative distribution function} of \(X\). The cumulative distribution function describes how the probabilities accumulate. 
\begin{definition} \protect\hypertarget{def:unlabeled-div-2}{}\label{def:unlabeled-div-2} \emph{If \(X\) is a random variable, then the function \(F\) defined on \(\mathbb{R}\) by \[F(x) = P(X \le x)\] is called the \textbf{cumulative distribution function} or simply \textbf{distribution function (c.d.f)} of \(X\).} \end{definition}

The functions that define the probability measure for discrete and continuous random variables are the probability mass function and the probability density function, respectively.

\begin{definition} \protect\hypertarget{def:unlabeled-div-3}{}\label{def:unlabeled-div-3} \emph{Suppose \(X\) is a discrete random variable. Then the function \[f(x) = P(X = x)\] that is defined for each \(x\) in the range of \(X\) is called the \textbf{probability mass function} (p.m.f) of the random variable \(X\).} \end{definition}

\begin{definition} \protect\hypertarget{def:unlabeled-div-4}{}\label{def:unlabeled-div-4} \emph{Suppose \(X\) is a continuous random variable with c.d.f \(F\) and there exists a nonnegative, integrable function \(f\), \(f: \mathbb{R} \rightarrow [0, \infty)\), such that \[F(x) = \int_{-\infty}^x f(y)\, dy.\] Then the function \(f\) is called the \textbf{probability density function} (p.d.f) of the random variable \(X\).} \end{definition}

\hypertarget{r-functions-for-probability-distributions}{%
\subsection{R Functions for Probability Distributions}\label{r-functions-for-probability-distributions}}

In R, the density, distribution, quantile, and random generation functions for the Poisson distribution with parameter \(\lambda\) are as follows:

\begin{longtable}[]{@{}
  >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.12}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.21}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.22}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.21}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.22}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright Distribution \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Density function: \(P(X = x)\) \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Distribution function: \(P(X \le x)\) \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Quantile function (inverse c.d.f.) \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Random generation \end{minipage} \\
\midrule
\endhead
Poisson & \texttt{dpois(x,\ lambda,\ log\ =\ FALSE)} & \texttt{ppois(q,\ lambda,\ lower.tail\ =\ TRUE,\ log.p\ =\ FALSE)} & \texttt{qpois(p,\ lambda,\ lower.tail\ =\ TRUE,\ log.p\ =\ FALSE)} & \texttt{rpois(n,\ lambda)} \\
\bottomrule
\end{longtable}

For the binomial distribution, these functions are \texttt{pbinom}, \texttt{qbinom}, \texttt{dbinom}, and \texttt{rbinom}; for the normal distribution, they are \texttt{pnorm}, \texttt{qnorm}, \texttt{dnorm}, and \texttt{rnorm}; and so on for the other distributions.
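For example, the following short session (a minimal sketch; the value \(\lambda = 2\) is chosen purely for illustration) exercises all four Poisson functions:

\begin{verbatim}
lambda <- 2

dpois(3, lambda)      # P(X = 3)
ppois(3, lambda)      # P(X <= 3)
qpois(0.95, lambda)   # smallest x such that P(X <= x) >= 0.95
set.seed(1)           # for reproducible random draws
rpois(5, lambda)      # five simulated values from Poisson(2)
\end{verbatim}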
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(ggplot2)} \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{0}\SpecialCharTok{:}\DecValTok{20} \NormalTok{myData }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{( }\AttributeTok{k =} \FunctionTok{factor}\NormalTok{(x), }\AttributeTok{pK =} \FunctionTok{dbinom}\NormalTok{(x, }\DecValTok{20}\NormalTok{, .}\DecValTok{5}\NormalTok{))} \FunctionTok{ggplot}\NormalTok{(myData,}\FunctionTok{aes}\NormalTok{(k,}\AttributeTok{ymin=}\DecValTok{0}\NormalTok{,}\AttributeTok{ymax=}\NormalTok{pK)) }\SpecialCharTok{+} \FunctionTok{geom\_linerange}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{ylab}\NormalTok{(}\StringTok{"p(k)"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{breaks=}\FunctionTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{20}\NormalTok{,}\DecValTok{5}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"p.m.f of binomial distribution"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-1-1.pdf} To plot continuous probability distribution in R, we use stat\_function to add the density function as its arguement. To specify a different mean or standard deviation, we use the \texttt{args} parameter to supply new values. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(ggplot2)} \NormalTok{df }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{seq}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{10}\NormalTok{,}\DecValTok{10}\NormalTok{,}\AttributeTok{by=}\FloatTok{0.1}\NormalTok{))} \FunctionTok{ggplot}\NormalTok{(df) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\FunctionTok{aes}\NormalTok{(x),}\AttributeTok{fun=}\NormalTok{dnorm, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{mean =} \DecValTok{0}\NormalTok{, }\AttributeTok{sd =} \DecValTok{1}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{x =} \StringTok{"x"}\NormalTok{, }\AttributeTok{y =} \StringTok{"f(x)"}\NormalTok{, } \AttributeTok{title =} \StringTok{"Normal Distribution With Mean = 0 \& SD = 1"}\NormalTok{) } \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-2-1.pdf} \hypertarget{expectation}{% \section{Expectation}\label{expectation}} \begin{definition} \protect\hypertarget{def:unlabeled-div-5}{}\label{def:unlabeled-div-5} \emph{The \textbf{expected value} of a discrete random variable \(X\) with the set of possible values \(A\) and probability mass function \(f(x)\) is defined by \[\mathrm{E}(X) = \sum_{x \in A} x f(x)\]} \end{definition} The \textbf{expected value} of a random variable \(X\) is also called the mean, or the mathematical expectation, or simply the expectation of \(X\). It is also occasionally denoted by \(\mathrm{E}[X]\), \(\mu_X\), or \(\mu\). Note that if each value \(x\) of \(X\) is weighted by \(f(x) = P(X = x)\), then \(\displaystyle \sum_{x \in A} x f(x)\) is nothing but the weighted average of \(X\). \begin{theorem} \protect\hypertarget{thm:unlabeled-div-6}{}\label{thm:unlabeled-div-6} \emph{Let \(X\) be a discrete random variable with set of possible values \(A\) and probability mass function \(f(x)\), and let \(g\) be a real-valued function. 
Then \(g(X)\) is a random variable with \[\mathrm{E}[g(X)] = \sum_{x \in A} g(x) f(x).\] } \end{theorem}

\begin{definition} \protect\hypertarget{def:unlabeled-div-7}{}\label{def:unlabeled-div-7} \emph{If \(X\) is a continuous random variable with probability density function \(f\), the \textbf{expected value} of \(X\) is defined by \[\mathrm{E}(X) = \int_{-\infty}^\infty x f(x)\, dx.\] } \end{definition}

\begin{theorem} \protect\hypertarget{thm:unlabeled-div-8}{}\label{thm:unlabeled-div-8} \emph{Let \(X\) be a continuous random variable with probability density function \(f(x)\); then for any function \(h: \mathbb{R} \rightarrow \mathbb{R}\), \[\mathrm{E}[h(X)] = \int_{-\infty}^\infty h(x)\, f(x)\, dx.\] } \end{theorem}

\begin{theorem} \protect\hypertarget{thm:unlabeled-div-9}{}\label{thm:unlabeled-div-9} \emph{Let \(X\) be a random variable. Let \(h_1, h_2, \ldots, h_n\) be real-valued functions, and \(a_1, a_2, \ldots, a_n\) be real numbers. Then \[\mathrm{E}[a_1 h_1(X) + a_2 h_2(X) + \cdots + a_n h_n(X)] = a_1 \mathrm{E}[h_1(X)] + a_2 \mathrm{E}[h_2(X)] + \cdots + a_n \mathrm{E}[h_n(X)].\]} \end{theorem}

Moreover, if \(a\) and \(b\) are constants, then \[\mathrm{E}(aX + b) = a\,\mathrm{E}(X) + b.\]

\hypertarget{variances-of-random-variables}{%
\section{Variances of Random Variables}\label{variances-of-random-variables}}

\begin{definition} \protect\hypertarget{def:unlabeled-div-10}{}\label{def:unlabeled-div-10} \emph{Let \(X\) be a discrete random variable with a set of possible values \(A\), probability mass function \(f(x)\), and \(\mathrm{E}(X) = \mu\). Then \(\mathrm{Var}(X)\) and \(\sigma_X\), called the \textbf{variance} and \textbf{standard deviation} of \(X\), respectively, are defined by \[\mathrm{Var}(X) = \mathrm{E}[(X- \mu)^2] = \sum_{x \in A} (x - \mu)^2 f(x),\] \[\sigma_X = \sqrt{\mathrm{E}[(X- \mu)^2]}.\]} \end{definition}

\begin{definition} \protect\hypertarget{def:unlabeled-div-11}{}\label{def:unlabeled-div-11} \emph{If \(X\) is a continuous random variable with \(\mathrm{E}(X) = \mu\), then \(\mathrm{Var}(X)\) and \(\sigma_X\), called the \textbf{variance} and \textbf{standard deviation} of \(X\), respectively, are defined by \[\mathrm{Var}(X) = \mathrm{E}[(X- \mu)^2] = \int_{-\infty}^\infty (x - \mu)^2\, f(x)\, dx,\] \[\sigma_X = \sqrt{\mathrm{E}[(X- \mu)^2]}.\]} \end{definition}

We have the following important relations: \[\mathrm{Var}(X) = \mathrm{E}(X^2) - (\mathrm{E}(X))^2,\] \[\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X), \quad \sigma_{aX + b}= |a|\,\sigma_X,\] where \(a\) and \(b\) are constants.

\hypertarget{moments-and-moment-generating-function}{%
\section{Moments and Moment Generating Function}\label{moments-and-moment-generating-function}}

\begin{definition} \protect\hypertarget{def:unlabeled-div-12}{}\label{def:unlabeled-div-12} \emph{For \(r > 0\), the \(r\)th moment of \(X\) (the \(r\)th moment about the origin) is \(\mathrm{E}[X^r]\), when it is defined.
The \(r\)th central moment of a random variable \(X\) (the \(r\)th moment about the mean) is \(\mathrm{E}[(X - \mathrm{E}[X])^r].\) } \end{definition} \begin{definition} \protect\hypertarget{def:unlabeled-div-13}{}\label{def:unlabeled-div-13} \emph{The skewness of \(X\) is defined to be the third central moment, \[\mathrm{E}[(X - \mathrm{E}[X])^3],\] and the coefficient of skewness to be given by \[\frac{\mathrm{E}[(X - \mathrm{E}[X])^3]}{(\mathrm{Var}[X])^{3/2}}.\] } \end{definition} \begin{definition} \protect\hypertarget{def:unlabeled-div-14}{}\label{def:unlabeled-div-14} \emph{The coefficient of kurtosis of \(X\) is defined by \[\frac{\mathrm{E}[(X - \mathrm{E}[X])^4]}{(\mathrm{Var}[X])^{4/2}}.\] } \end{definition} \textbf{Note} In these formulas, subtracting the mean and dividing by the standard deviation centres and scales the variable to standard units. Odd-order moments are increased if there is a long tail to the right and decreased if there is a long tail to the left, while even-order moments are increased if either tail is long. A negative value of the coefficient of skewness indicates that the distribution is skewed to the left, or negatively skewed, meaning that the deviations above the mean tend to be smaller than the deviations below the mean, and vice versa. If the coefficient of skewness is close to zero, the distribution is approximately symmetric. \textbf{Note} The fourth central moment, which is always positive, measures the fatness of the tails. The kurtosis of the standard normal distribution is 3. Using the standard normal distribution as a benchmark, the excess kurtosis of a random variable is defined as the kurtosis minus 3. A higher kurtosis corresponds to more extreme deviations (or outliers), i.e.~a larger excess kurtosis. The following diagram compares the shapes of the normal distribution and Student's t-distribution. Note that to use the legend with the \texttt{stat\_function} in ggplot2, we use \texttt{scale\_colour\_manual} along with \texttt{colour\ =} inside the \texttt{aes()} as shown below, and give names to the specific density plots. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(ggplot2)} \NormalTok{df }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{seq}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{10}\NormalTok{,}\DecValTok{10}\NormalTok{,}\AttributeTok{by=}\FloatTok{0.1}\NormalTok{))} \FunctionTok{ggplot}\NormalTok{(df) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\FunctionTok{aes}\NormalTok{(x, }\AttributeTok{colour =} \StringTok{"dnorm"}\NormalTok{),}\AttributeTok{fun =}\NormalTok{ dnorm, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{mean =} \DecValTok{0}\NormalTok{, }\AttributeTok{sd =} \DecValTok{1}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\FunctionTok{aes}\NormalTok{(x, }\AttributeTok{colour =}\StringTok{"dt"}\NormalTok{),}\AttributeTok{fun =}\NormalTok{ dt, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{df =} \DecValTok{4}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_colour\_manual}\NormalTok{(}\StringTok{"Legend title"}\NormalTok{, }\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"black"}\NormalTok{, }\StringTok{"blue"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{x =} \StringTok{"x"}\NormalTok{, }\AttributeTok{y =} \StringTok{"f(x)"}\NormalTok{, } \AttributeTok{title =} \StringTok{"Normal Distribution With Mean = 0 \& SD = 1"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{theme}\NormalTok{(}\AttributeTok{plot.title =} \FunctionTok{element\_text}\NormalTok{(}\AttributeTok{hjust =} \FloatTok{0.5}\NormalTok{))} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-3-1.pdf} Next we will simulate 10000 samples from a normal distribution with mean 0 and standard deviation 1, then compute and interpret the skewness and kurtosis, and plot the histogram. Here we also use the function \texttt{set.seed()} to set the seed of R's random number generator; this is useful for creating simulations or random objects that can be reproduced. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{set.seed}\NormalTok{(}\DecValTok{15}\NormalTok{) }\CommentTok{\# Set the seed of R\textquotesingle{}s random number generator} \CommentTok{\#Simulation} \NormalTok{n.sample }\OtherTok{\textless{}{-}} \FunctionTok{rnorm}\NormalTok{(}\AttributeTok{n =} \DecValTok{10000}\NormalTok{, }\AttributeTok{mean =} \DecValTok{0}\NormalTok{, }\AttributeTok{sd =} \DecValTok{1}\NormalTok{)} \CommentTok{\#Skewness and Kurtosis} \FunctionTok{library}\NormalTok{(moments)} \FunctionTok{skewness}\NormalTok{(n.sample)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -0.03585812 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{kurtosis}\NormalTok{(n.sample)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 2.963189 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x =}\NormalTok{ n.sample),}\FunctionTok{aes}\NormalTok{(x)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \FloatTok{0.5}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-4-1.pdf} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{set.seed}\NormalTok{(}\DecValTok{15}\NormalTok{)} \CommentTok{\#Simulation} \NormalTok{t.sample }\OtherTok{\textless{}{-}} \FunctionTok{rt}\NormalTok{(}\AttributeTok{n =} \DecValTok{10000}\NormalTok{, }\AttributeTok{df =} \DecValTok{5}\NormalTok{)} \CommentTok{\#Skewness and Kurtosis} \FunctionTok{library}\NormalTok{(moments)} \FunctionTok{skewness}\NormalTok{(t.sample)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.06196269 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{kurtosis}\NormalTok{(t.sample)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 7.646659 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x =}\NormalTok{ t.sample),}\FunctionTok{aes}\NormalTok{(x)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \FloatTok{0.5}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-5-1.pdf} \textbf{Example} Let us count the number of samples greater than 5 from the samples of the normal and Student's t distributions. Comment on your results. (In R, these counts can be obtained with \texttt{sum(n.sample\ \textgreater{}\ 5)} and \texttt{sum(t.sample\ \textgreater{}\ 5)}.) \begin{definition} \protect\hypertarget{def:unlabeled-div-15}{}\label{def:unlabeled-div-15} \emph{The moment generating function (mgf) of a random variable \(X\) is defined to be \[M_X(t) = \mathrm{E}[e^{tX}],\] if the expectation exists.} \end{definition} \textbf{Note} The moment generating function of \(X\) may not be defined (may not be finite) for all \(t\) in \(\mathbb{R}\). If \(M_X(t)\) is finite for \(|t| < h\) for some \(h > 0\), then, for any \(k = 1, 2, \ldots,\) the function \(M_X(t)\) is \(k\)-times differentiable at \(t = 0\), with \[M^{(k)}_X (0) = \mathrm{E}[X^k],\] with \(\mathrm{E}[|X|^k]\) finite. We can obtain the moments by successive differentiation of \(M_X(t)\) and letting \(t = 0\). \begin{example} \protect\hypertarget{exm:unlabeled-div-16}{}\label{exm:unlabeled-div-16} Derive the formula for the mgf of the standard normal distribution. Hint: its mgf is \(e^{\frac{1}{2} t^2}\). 
\end{example} \hypertarget{probability-generating-function}{% \section{Probability generating function}\label{probability-generating-function}} \begin{definition} \protect\hypertarget{def:unlabeled-div-17}{}\label{def:unlabeled-div-17} \emph{For a counting variable \(N\) (a variable which assumes some or all of the values \(0, 1, 2, \ldots,\) but no others), the probability generating function of \(N\) is \[G_N(t) = \mathrm{E}[t^N],\] for those \(t\) in \(\mathbb{R}\) for which the series converges absolutely. } \end{definition} Let \(p_k = P(N = k)\). Then \[G_N(t) = \mathrm{E}[t^N] = \sum_{k=0}^\infty t^k p_k.\] It can be shown that if \(\mathrm{E}[N] < \infty\) then \[\mathrm{E}[N] = G'_N(1),\] and if \(\mathrm{E}[N^2] < \infty\) then \[\mathrm{Var}[N] = G''_N(1) + G'_N(1) - (G'_N(1))^2.\] Moreover, when both pgf and mgf of \(N\) are defined, we have \[G_N(t) = M_N(\log(t)) \quad \text{ and } M_N(t) = G_N(e^t).\] \hypertarget{multivariate-distributions}{% \section{Multivariate Distributions}\label{multivariate-distributions}} When \(X_1, X_2, \ldots, X_n\) are random variables defined on the same sample space, a multivariate probability density function or probability mass function \(f(x_1, x_2, \ldots, x_n)\) can be defined. The following definitions can be extended to more than two random variables and the case of discrete random variables. \begin{definition} \protect\hypertarget{def:unlabeled-div-18}{}\label{def:unlabeled-div-18} \emph{Two random variables \(X\) and \(Y\), defined on the same sample space, have a continuous joint distribution if there exists a nonnegative function of two variables, \(f(x, y)\) on \(\mathbb{R} \times \mathbb{R}\), such that for any region \(R\) in the \(xy\)-plane that can be formed from rectangles by a countable number of set operations, \[P((X, Y) \in R) = \iint_R f(x,y) \, dx\, dy\] } \end{definition} The function \(f (x, y)\) is called the \textbf{joint probability density function} of \(X\) and \(Y\). Let \(X\) and \(Y\) have joint probability density function \(f (x, y)\). Let \(f_Y\) be the probability density function of \(Y\) . To find \(f_Y\) in terms of \(f\) , note that, on the one hand, for any subset \(B\) of \(\mathbb{R}\), \[P(Y \in B) = \int_B f_Y(y) \, dy,\] and on the other hand, we also have \[P(Y \in B) = P(X \in (-\infty, \infty), Y \in B) = \int_B \left( \int_{-\infty}^\infty f(x,y)\, dx \right) \, dy.\] We have \begin{equation} \label{eq:label} f_Y(y) = \int_{-\infty}^\infty f(x,y)\, dx \end{equation} and \begin{equation} \label{eq:label2} f_X(x) = \int_{-\infty}^\infty f(x,y)\, dy \end{equation} \begin{definition} \protect\hypertarget{def:unlabeled-div-19}{}\label{def:unlabeled-div-19} \emph{Let \(X\) and \(Y\) have joint probability density function \(f (x, y)\); then the functions \(f_X\) and \(f_Y\) in} \eqref{eq:label} and \eqref{eq:label2} \emph{are called, respectively, the \textbf{marginal probability density functions} of \(X\) and \(Y\).} \end{definition} Let \(X\) and \(Y\) be two random variables (discrete, continuous, or mixed). The \textbf{joint probability distribution function}, or \textbf{joint cumulative probability distribution function}, or simply the joint distribution of \(X\) and \(Y\), is defined by \[F(t, u) = P(X \le t, Y \le u)\] for all \(t, u \in (-\infty, \infty)\). 
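As a quick illustration of this definition, consider two independent standard normal random variables (the values \(t = 0.5\) and \(u = 1\) below are chosen purely for illustration). The following short simulation estimates \(F(t,u)\) empirically and compares it with the exact value \texttt{pnorm(t)\ *\ pnorm(u)}, anticipating the independence results discussed in the next section.

\begin{verbatim}
# Empirical check of F(t, u) = P(X <= t, Y <= u) by simulation
set.seed(1)
x <- rnorm(100000)   # X ~ N(0, 1)
y <- rnorm(100000)   # Y ~ N(0, 1), generated independently of X

t <- 0.5
u <- 1

mean(x <= t & y <= u)   # empirical estimate of F(t, u)
pnorm(t) * pnorm(u)     # exact value for independent standard normals
\end{verbatim}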
The marginal probability distribution function of \(X\), \(F_X\), can be found from \(F\) as follows: \[F_X(t) = \lim_{u \rightarrow \infty} F(t,u) = F(t, \infty)\] and \[F_Y(u) = \lim_{t \rightarrow \infty}F(t,u) = F( \infty, u)\] The relationship between \(f(x,y)\) and \(F(t,u)\) is as follows: \[F(t,u) = \int_{-\infty}^{u}\int_{-\infty}^{t} f(x,y)\, dx\, dy.\] We also have \[\mathrm{E}(X) = \int_{-\infty}^\infty x f_X(x)\, dx , \quad \mathrm{E}(Y) = \int_{-\infty}^\infty y f_Y(y)\, dy\] \begin{theorem} \protect\hypertarget{thm:unlabeled-div-20}{}\label{thm:unlabeled-div-20} \emph{Let \(f (x, y)\) be the joint probability density function of random variables \(X\) and \(Y\). If \(h\) is a function of two variables from \(\mathbb{R}^2\) to \(\mathbb{R}\), then \(h(X, Y )\) is a random variable with the expected value given by \[\mathrm{E}[h(X,Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(x,y) \, f(x,y)\, dx\, dy\] provided that the integral is absolutely convergent.} \end{theorem} As a consequence of the above theorem, for random variables \(X\) and \(Y\), \[\mathrm{E}(X + Y) = \mathrm{E}(X) + \mathrm{E}(Y)\] \hypertarget{independent-random-variables}{% \section{Independent random variables}\label{independent-random-variables}} \begin{definition} \protect\hypertarget{def:unlabeled-div-21}{}\label{def:unlabeled-div-21} \emph{Two random variables \(X\) and \(Y\) are called independent if, for arbitrary subsets \(A\) and \(B\) of real numbers, the events \(\{X \in A\}\) and \(\{Y \in B\}\) are \textbf{independent}, that is, if \[P(X \in A, Y \in B) = P(X \in A) P(Y \in B).\]} \end{definition} \begin{theorem} \protect\hypertarget{thm:unlabeled-div-22}{}\label{thm:unlabeled-div-22} \emph{Let \(X\) and \(Y\) be two random variables defined on the same sample space. If \(F\) is the joint probability distribution function of \(X\) and \(Y\), then \(X\) and \(Y\) are independent if and only if for all real numbers \(t\) and \(u\), \[F(t,u) = F_X(t) F_Y(u).\]} \end{theorem} \begin{theorem} \protect\hypertarget{thm:unlabeled-div-23}{}\label{thm:unlabeled-div-23} \emph{Let \(X\) and \(Y\) be jointly continuous random variables with joint probability density function \(f (x, y)\). Then \(X\) and \(Y\) are independent if and only if \[f (x, y) = f_X(x) f_Y (y).\]} \end{theorem} \begin{theorem} \protect\hypertarget{thm:unlabeled-div-24}{}\label{thm:unlabeled-div-24} \emph{Let \(X\) and \(Y\) be independent random variables and \(g : \mathbb{R} \rightarrow\mathbb{R}\) and \(h : \mathbb{R} \rightarrow\mathbb{R}\) be real-valued functions; then \(g(X)\) and \(h(Y )\) are also independent random variables.} \end{theorem} As a consequence of the above theorem, we obtain \begin{theorem} \protect\hypertarget{thm:unlabeled-div-25}{}\label{thm:unlabeled-div-25} \emph{Let \(X\) and \(Y\) be independent random variables. Then for all real-valued functions \(g : \mathbb{R} \rightarrow\mathbb{R}\) and \(h : \mathbb{R} \rightarrow\mathbb{R}\), \[\mathrm{E}[g(X)h(Y)] = \mathrm{E}[g(X)]\mathrm{E}[h(Y)]\] } \end{theorem} \hypertarget{conditional-distributions}{% \section{Conditional Distributions}\label{conditional-distributions}} Let \(X\) and \(Y\) be two continuous random variables with the joint probability density function \(f (x, y)\). Note that the case of discrete random variables can be considered in the same way. 
When no information is given about the value of \(Y\), the marginal probability density function of \(X\), \(f_X(x)\), is used to calculate the probabilities of events concerning \(X\). However, when the value of \(Y\) is known, to find such probabilities, \(f_{X|Y} (x|y)\), the conditional probability density function of \(X\) given that \(Y = y\) is used and is defined as follows: \[f_{X|Y} (x|y) = \frac{f(x,y)}{f_Y(y)}\] provided that \(f_Y (y) > 0\). Note also that the conditional probability density function of \(X\) given that \(Y = y\) is itself a probability density function, i.e. \[\int_{-\infty}^\infty f_{X|Y}(x|y)\, dx = 1.\] The conditional probability distribution function of \(X\) given that \(Y = y\) and the conditional expectation of \(X\) given that \(Y = y\) are given as follows: \[F_{X|Y}(x|y) = P(X \le x | Y = y) = \int_ {-\infty}^x f_{X|Y}(t|y) \, dt\] and \[\mathrm{E}(X|Y = y) = \int_{-\infty}^{\infty} x f_{X|Y}(x|y) \, dx,\] where \(f_Y(y) > 0\). Note that if \(X\) and \(Y\) are independent, then \(f_{X|Y}\) coincides with \(f_X\) because \[f_{X|Y}(x|y) = \frac{f(x,y)}{f_Y(y)} =\frac{f_X(x)f_Y(y)}{f_Y(y)} = f_X(x).\] \hypertarget{covariance}{% \section{Covariance}\label{covariance}} The notion of the variance of a random variable \(X\), \(\mathrm{Var}(X) = \mathrm{E}[ ( X - \mathrm{E}(X))^2]\) measures the average magnitude of the fluctuations of the random variable \(X\) from its expectation, \(\mathrm{E}(X)\). This quantity measures the dispersion, or spread, of the distribution of \(X\) about its expectation. Now suppose that \(X\) and \(Y\) are two jointly distributed random variables. Covariance is a measure of how much two random variables vary together. Let us calculate \(\mathrm{Var}(aX + bY)\), the joint spread, or dispersion, of \(X\) and \(Y\) along the \((ax + by)\)-direction for arbitrary real numbers \(a\) and \(b\): \[\mathrm{Var}(aX + bY) = a^2 \mathrm{Var}(X) + b^2 \mathrm{Var}(Y) + 2 a b \mathrm{E}[(X - \mathrm{E}(X))(Y - \mathrm{E}(Y))].\] However, \(\mathrm{Var}(X)\) and \(\mathrm{Var}(Y )\) determine the dispersions of \(X\) and \(Y\) independently; therefore, \(\mathrm{E}[(X - \mathrm{E}(X))(Y - \mathrm{E}(Y))]\) is the quantity that gives information about the joint spread, or dispersion, of \(X\) and \(Y\). \begin{definition} \protect\hypertarget{def:unlabeled-div-26}{}\label{def:unlabeled-div-26} \emph{Let \(X\) and \(Y\) be jointly distributed random variables; then the \textbf{covariance} of \(X\) and \(Y\) is defined by \[\mathrm{Cov}(X,Y) = \mathrm{E}[(X - \mathrm{E}(X))(Y - \mathrm{E}(Y))].\]} \end{definition} Note that for random variables \(X, Y\) and \(Z\), and \(ab > 0\), the joint dispersion of \(X\) and \(Y\) along the \((ax + by)\)-direction is greater than the joint dispersion of \(X\) and \(Z\) along the \((ax + bz)\)-direction if and only if \(\mathrm{Cov}(X, Y) > \mathrm{Cov}(X,Z).\) Note that \[\mathrm{Cov}(X, X) = \mathrm{Var}(X).\] Moreover, \[\mathrm{Cov}(X,Y) = \mathrm{E}(XY) - \mathrm{E}(X)\mathrm{E}(Y).\] Properties of covariance are as follows: for arbitrary real numbers \(a, b, c, d\) and random variables \(X\) and \(Y\), \[\mathrm{Var}(aX + bY) = a^2 \mathrm{Var}(X) + b^2 \mathrm{Var}(Y) + 2 a b \mathrm{Cov}(X,Y).\] \[\mathrm{Cov}(aX + b, cY + d) = ac\,\mathrm{Cov}(X, Y)\] For random variables \(X_1, X_2, \ldots, X_n\) and \(Y_1, Y_2, \ldots 
, Y_m\), \[\mathrm{Cov}(\sum_{i=1}^n a_i X_i, \sum_{j=1}^m b_j Y_j) = \sum_{i=1}^n\sum_{j=1}^m a_i\,b_j\, \mathrm{Cov}(X_i,Y_j).\] If \(\mathrm{Cov}(X, Y) > 0\), we say that \(X\) and \(Y\) are positively correlated. If \(\mathrm{Cov}(X, Y) < 0\), we say that they are negatively correlated. If \(\mathrm{Cov}(X, Y) = 0\), we say that \(X\) and \(Y\) are uncorrelated. If \(X\) and \(Y\) are independent, then \[\mathrm{Cov}(X,Y) = 0.\] However, the converse of this is not true; that is, two dependent random variables might be uncorrelated. \hypertarget{correlation}{% \section{Correlation}\label{correlation}} A large covariance can mean a strong relationship between variables. However, we cannot compare covariances across data sets with different scales. A weak covariance in one data set may be a strong one in a different data set with different scales. The problem can be fixed by dividing the covariance by the product of the standard deviations to get the correlation coefficient. \begin{definition} \protect\hypertarget{def:unlabeled-div-27}{}\label{def:unlabeled-div-27} \emph{Let \(X\) and \(Y\) be two random variables with \(0< \sigma^2_X, \sigma^2_Y < \infty\). The covariance between the standardized \(X\) and the standardized \(Y\) is called the correlation coefficient between \(X\) and \(Y\) and is denoted \(\rho = \rho(X,Y)\), \[\rho(X,Y) = \frac{\mathrm{Cov}(X,Y)}{\sigma_X \sigma_Y}.\] } \end{definition} Note that \begin{itemize} \item \(\rho(X, Y ) > 0\) if and only if \(X\) and \(Y\) are positively correlated; \item \(\rho(X, Y ) < 0\) if and only if \(X\) and \(Y\) are negatively correlated; and \item \(\rho(X, Y ) = 0\) if and only if \(X\) and \(Y\) are uncorrelated. \item \(\rho(X, Y )\) roughly measures the amount and the sign of linear relationship between \(X\) and \(Y\). \end{itemize} In the case of a perfect linear relationship, we have \(\rho(X, Y ) = \pm1\). A correlation of 0, i.e.~\(\rho(X, Y ) = 0\), does not mean zero relationship between two variables; rather, it means zero linear relationship. Some important properties of correlation are \[-1 \le \rho(X, Y ) \le 1\] \[\rho(a X + b, cY +d) = \text{sign}(ac) \rho(X, Y )\] \hypertarget{model-fitting}{% \section{Model Fitting}\label{model-fitting}} The contents in this section are taken from Gray and Pitts. To fit a parametric model, we have to calculate estimates of the unknown parameters of the probability distribution. Various criteria are available, including the method of moments, the method of maximum likelihood, etc. \hypertarget{the-method-of-moments}{% \section{The method of moments}\label{the-method-of-moments}} The method of moments leads to parameter estimates by simply matching the moments of the model, \(\mathrm{E}[X], \mathrm{E}[X^2], \mathrm{E}[X^3], \ldots ,\) in turn to the required number of corresponding sample moments calculated from the data \(x_1, x_2, \ldots , x_n\), where \(n\) is the number of observations available. The sample moments are simply \[\frac{1}{n}\sum_{i=1}^n x_i, \quad \frac{1}{n}\sum_{i=1}^n x^2_i, \quad \frac{1}{n}\sum_{i=1}^n x^3_i, \ldots.\] It is often more convenient to match the mean and central moments, in particular matching \(\mathrm{E}[X]\) to the sample mean \(\bar{x}\) and \(\mathrm{Var}[X]\) to the sample variance \[s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2.\] An estimate produced using the method of moments is called an MME, and the MME of a parameter \(\theta\), say, is usually denoted \(\tilde{\theta}\). 
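As a quick illustration of moment matching, the following sketch simulates claim-like data from a gamma distribution (the parameter values \(\alpha = 2\) and \(\lambda = 0.5\) are chosen purely for illustration) and computes MMEs using the standard gamma moments \(\mathrm{E}[X] = \alpha/\lambda\) and \(\mathrm{Var}[X] = \alpha/\lambda^2\), together with an exponential fit based on \(\mathrm{E}[X] = 1/\lambda\).

\begin{verbatim}
# Moment matching with simulated data (alpha = 2, lambda = 0.5 for illustration)
set.seed(99)
x <- rgamma(1000, shape = 2, rate = 0.5)

xbar <- mean(x)      # sample mean
s2   <- var(x)       # sample variance (divisor n - 1)

# Exponential fit: E[X] = 1/lambda, so lambda_tilde = 1/xbar
lambda_exp <- 1 / xbar

# Gamma fit: E[X] = alpha/lambda, Var[X] = alpha/lambda^2,
# so alpha_tilde = xbar^2/s2 and lambda_tilde = xbar/s2
alpha_tilde  <- xbar^2 / s2
lambda_tilde <- xbar / s2

c(alpha_tilde, lambda_tilde)   # close to the true values 2 and 0.5
\end{verbatim}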
\hypertarget{the-method-of-maximum-likelihood}{% \section{The method of maximum likelihood}\label{the-method-of-maximum-likelihood}} The method of maximum likelihood is the most widely used method for parameter estimation. The estimates it produces are those values of the parameters which give the maximum value attainable by the likelihood function, denoted \(L\), which is the joint probability mass or density function for the data we have (under the chosen parametric distribution), regarded as a function of the unknown parameters. In practice, it is often easier to maximise the loglikelihood function, which is the logarithm of the likelihood function, rather than the likelihood itself. An estimate produced using the method of maximum likelihood is called an MLE, and the MLE of a parameter \(\theta\), say, is denoted \(\hat{\theta}\). MLEs have many desirable theoretical properties, especially in the case of large samples. In some simple cases we can derive MLE(s) analytically as explicit functions of summaries of the data. Thus, suppose our data consist of a random sample \(x_1, x_2, \ldots , x_n\), from a parametric distribution whose parameter(s) we want to estimate. Some straightforward cases include the following: \begin{itemize} \item the MLE of \(\lambda\) for a \(Poi(\lambda)\) distribution is the sample mean, that is \(\hat{\lambda} = \bar{x}\) \item the MLE of \(\lambda\) for an \(Exp(\lambda)\) distribution is the reciprocal of the sample mean, that is \(\hat{\lambda} = 1/\bar{x}\) \end{itemize} \hypertarget{goodness-of-fit-tests}{% \section{Goodness of fit tests}\label{goodness-of-fit-tests}} We can assess how well the fitted distributions reflect the distribution of the data in various ways. We should, of course, examine and compare the tables of frequencies and, if appropriate, plot and compare empirical distribution functions. More formally, we can perform certain statistical tests. Here we will use the Pearson chi-square goodness-of-fit criterion. \hypertarget{the-pearson-chi-square-goodness-of-fit-criterion}{% \section{The Pearson chi-square goodness-of-fit criterion}\label{the-pearson-chi-square-goodness-of-fit-criterion}} We construct the test statistic \[\chi^2 = \sum\frac{(O - E)^2}{E},\] where \(O\) is the observed frequency in a cell in the frequency table and \(E\) is the fitted or expected frequency (the frequency expected in that cell under the fitted model), and where we sum over all usable cells. \textbf{The null hypothesis} is that the sample comes from a specified distribution. The value of the test statistic is then evaluated in one of two ways. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item We convert it to a \(P\)-value, which is a measure of the strength of the evidence against the hypothesis that the data do follow the fitted distribution. \textbf{If the \(P\)-value is small enough, we conclude that the data do not follow the fitted distribution -- we say ``the fitted distribution does not provide a good fit to the data'' (and quote the \(P\)-value in support of this conclusion)}. \item We compare it with values in published tables of the distribution function of the appropriate \(\chi^2\) distribution, and if the value of the statistic is high enough to be in a tail of specified size of this reference distribution, we conclude that the fitted distribution does not provide a good fit to the data. 
\end{enumerate} \hypertarget{kolmogorov-smirnov-k-s-test.}{% \section{Kolmogorov-Smirnov (K-S) test}\label{kolmogorov-smirnov-k-s-test.}} The K-S test statistic is the maximum difference between the values of the ecdf of the sample and the cdf of the fully specified fitted distribution. The course does not place emphasis on goodness-of-fit tests. Please refer to the reference text for more details. \hypertarget{loss-distributions}{% \chapter{Loss distributions}\label{loss-distributions}} \hypertarget{introduction}{% \section{Introduction}\label{introduction}} The aim of the course is to provide a fundamental basis which applies mainly in general insurance. General insurance companies' products are short-term policies that can be purchased for a short period of time. Examples of insurance products are \begin{itemize} \item motor insurance; \item home insurance; \item health insurance; and \item travel insurance. \end{itemize} In the case of an occurrence of an insured event, two components of the financial losses which are of importance for the management of an insurance company are \begin{itemize} \item the number of claims; and \item the amounts of those claims. \end{itemize} Mathematical and statistical techniques used to model these sources of uncertainty will be discussed. This will enable insurance companies to \begin{itemize} \item calculate premium rates to charge policy holders; and \item decide how much reserve should be set aside for the future payment of incurred claims. \end{itemize} In this chapter, statistical distributions and their properties which are suitable for modelling claim sizes are reviewed. These distributions are also known as loss distributions. In practice, loss distributions are positively skewed with a long right tail. The main features of loss distributions include: \begin{itemize} \item having a few small claims; \item rising to a peak; \item tailing off gradually with a few very large claims. \end{itemize} \hypertarget{exponential-distribution}{% \section{Exponential Distribution}\label{exponential-distribution}} A random variable \(X\) has an exponential distribution with a parameter \(\lambda > 0\), denoted by \(X \sim \text{Exp}(\lambda)\) if its probability density function is given by \[f_X(x) = \lambda e^{-\lambda x}, \quad x > 0.\] \begin{example} \protect\hypertarget{exm:unlabeled-div-28}{}\label{exm:unlabeled-div-28} \emph{Let \(X \sim \text{Exp}(\lambda)\) and \(0 < a < b\).} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \emph{Find the distribution function \(F_X(x)\).} \item \emph{Express \(P(a < X < b)\) in terms of \(f_X(x)\) and \(F_X(x)\).} \item \emph{Show that the moment generating function of \(X\) is \[M_X(t) = \left(1 - \frac{t}{\lambda}\right)^{-1}, \quad t < \lambda.\]} \item \emph{Derive the \(r\)-th moment about the origin \(\mathrm{E}[X^r].\)} \item \emph{Derive the coefficient of skewness for \(X\).} \item \emph{Simulate a random sample of size n = 200 from \(X \sim \text{Exp}(0.5)\) using the command \texttt{sample\ =\ rexp(n,\ rate\ =\ lambda)} where \(n\) and \(\lambda\) are the chosen parameter values.} \item \emph{Plot a histogram of the random sample using the command \texttt{hist(sample)} (use help for available options for the \texttt{hist} function in R).} \end{enumerate} \end{example} \textbf{Solution:} The code for questions 6 and 7 is given below; the histograms can be generated from it. \begin{verbatim} # set.seed is used so that the random numbers generated from different simulations are the same. 
# The number 5353 can be set arbitrarily. set.seed(5353) nsample <- 200 data_exp <- rexp(nsample, rate = 0.5) dataset <- data_exp hist(dataset, breaks=100,probability = TRUE, xlab = "claim sizes" , ylab = "density", main = paste("Histogram of claim sizes" )) hist(dataset, breaks=100, xlab = "claim sizes" , ylab = "count", main = paste("Histogram of claim sizes" )) \end{verbatim} Copy and paste the code above and run it. \textbf{Notes} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The exponential distribution can be used to model the inter-arrival time of events. \item The exponential distribution has an important property called \textbf{lack of memory}: if \(X \sim \text{Exp}(\lambda)\), then the random variable \(X-w\) conditional on \(X > w\) has the same distribution as \(X\), i.e. \[X \sim \text{Exp}(\lambda)\Rightarrow X - w | X > w \sim \text{Exp}(\lambda).\] \end{enumerate} We can use R to plot the probability density functions (pdf) of exponential distributions with various parameters \(\lambda\), which are shown in Figure \ref{fig:FigExp}. Here we use \texttt{scale\_colour\_manual} to override the defaults from the scales package (see the ggplot2 cheat sheet for details). 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(ggplot2)} \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{10}\NormalTok{)), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{x)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{y=}\StringTok{"Probability density"}\NormalTok{, }\AttributeTok{x =} \StringTok{"x"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"Exponential distributions"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{theme}\NormalTok{(}\AttributeTok{plot.title =} \FunctionTok{element\_text}\NormalTok{(}\AttributeTok{hjust =} \FloatTok{0.5}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dexp,}\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{rate =} \FloatTok{0.5}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"0.5"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dexp,}\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{rate =} \DecValTok{1}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"1"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dexp,}\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{rate =} \FloatTok{1.5}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"1.5"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dexp,}\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{rate =} \DecValTok{2}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"2"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_colour\_manual}\NormalTok{(}\FunctionTok{expression}\NormalTok{(}\FunctionTok{paste}\NormalTok{(lambda, }\StringTok{" = "}\NormalTok{)), }\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"blue"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"orange"}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FigExp-1.pdf} \caption{\label{fig:FigExp}The probability density functions (pdf) of exponential distributions with various parameters lambda.} \end{figure} \hypertarget{gamma-distribution}{% \section{Gamma distribution}\label{gamma-distribution}} A random variable \(X\) has a gamma distribution with parameters \(\alpha > 0\) and \(\lambda > 0\), denoted by \(X \sim \mathcal{G}(\alpha, \lambda)\) or \(X \sim \text{gamma}(\alpha, \lambda)\) if its probability density function is given by \[f_X(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)} x^{\alpha -1} e^{-\lambda x}, \quad x > 0.\] The symbol \(\Gamma\) denotes the gamma function, which is defined as \[\Gamma(\alpha) = \int_{0}^\infty x^{\alpha - 1} e^{-x} \mathop{}\!dx, \quad \text{for } \alpha > 0.\] It follows that \(\Gamma(\alpha + 1) = \alpha \Gamma(\alpha)\) and that for a positive integer \(n\), \(\Gamma(n) = (n-1)!\). The properties of the gamma distribution are summarised below. 
\begin{itemize} \item The mean and variance of \(X\) are \[\mathrm{E}[X] = \frac{\alpha}{\lambda} \text{ and } \mathrm{Var}[X] =\frac{\alpha}{\lambda^2}\] \item The \(r\)-th moment about the origin is \[\mathrm{E}[X^r] = \frac{1}{\lambda^r} \frac{\Gamma(\alpha + r)}{\Gamma(\alpha )}, \quad r > 0.\] \item The moment generating function (mgf) of \(X\) is \[M_X(t) = \left(1 - \frac{t}{\lambda}\right)^{-\alpha}, \quad t < \lambda.\] \item The coefficient of skewness is \[\frac{2}{\sqrt{\alpha}}.\] \end{itemize} \textbf{Notes} 1. The exponential distribution is a special case of the gamma distribution, i.e.~\(\text{Exp}(\lambda)= \mathcal{G}(1,\lambda)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item If \(\alpha\) is a positive integer, the sum of \(\alpha\) independent random variables, each distributed as \(\text{Exp}(\lambda)\), is \(\mathcal{G}(\alpha, \lambda)\). \item If \(X_1, X_2, \ldots, X_n\) are independent, identically distributed, each with a \(\mathcal{G}(\alpha, \lambda)\) distribution, then \[\sum_{i = 1}^n X_i \sim \mathcal{G}(n\alpha, \lambda).\] \item The exponential and gamma distributions are not fat-tailed, and \textbf{may not provide a good fit} to claim amounts. \end{enumerate} \begin{example} \protect\hypertarget{exm:unlabeled-div-29}{}\label{exm:unlabeled-div-29} \emph{Using the moment generating function of a gamma distribution, show that the sum of independent gamma random variables with the same rate parameter \(\lambda\), \(X \sim \mathcal{G}(\alpha_1, \lambda)\) and \(Y \sim \mathcal{G}(\alpha_2, \lambda)\), is \(S = X+ Y \sim \mathcal{G}(\alpha_1 + \alpha_2, \lambda).\)} \end{example} \textbf{Solution:} Because \(X\) and \(Y\) are independent, \[\begin{aligned} M_S(t) &= M_{X+Y}(t) = M_X(t) \cdot M_Y(t)\\ &= (1 - \frac{t}{\lambda})^{-\alpha_1} \cdot (1 - \frac{t}{\lambda})^{-\alpha_2} \\ &= (1 - \frac{t}{\lambda})^{-(\alpha_1 + \alpha_2)}. \end{aligned}\] Hence \(S = X + Y \sim \mathcal{G}(\alpha_1 + \alpha_2, \lambda).\) The probability density functions (pdf) of gamma distributions with various shape parameters \(\alpha\) and rate parameter \(\lambda\) = 1 are shown in Figure \ref{fig:FigGamma}. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{20}\NormalTok{)), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{x)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{y=}\StringTok{"Probability density"}\NormalTok{, }\AttributeTok{x =} \StringTok{"x"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"Gamma distribution"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{theme}\NormalTok{(}\AttributeTok{plot.title =} \FunctionTok{element\_text}\NormalTok{(}\AttributeTok{hjust =} \FloatTok{0.5}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dgamma, }\AttributeTok{args=}\FunctionTok{list}\NormalTok{(}\AttributeTok{shape=}\DecValTok{2}\NormalTok{, }\AttributeTok{rate=}\DecValTok{1}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"2"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dgamma, }\AttributeTok{args=}\FunctionTok{list}\NormalTok{(}\AttributeTok{shape=}\DecValTok{6}\NormalTok{, }\AttributeTok{rate=}\DecValTok{1}\NormalTok{) , }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"6"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_colour\_manual}\NormalTok{(}\FunctionTok{expression}\NormalTok{(}\FunctionTok{paste}\NormalTok{(lambda, }\StringTok{" = 1 and "}\NormalTok{, alpha ,}\StringTok{" = "}\NormalTok{)), }\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"blue"}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FigGamma-1.pdf} \caption{\label{fig:FigGamma}The probability density functions (pdf) of gamma distributions with various shape alpha and rate parameter lambda = 1.} \end{figure} \hypertarget{lognormal-distribution}{% \section{Lognormal distribution}\label{lognormal-distribution}} A random variable \(X\) has a lognormal distribution with parameters \(\mu\) and \(\sigma^2\), denoted by \(X \sim \mathcal{LN}(\mu, \sigma^2)\) if its probability density function is given by \[f_X(x) = \frac{1}{\sigma x \sqrt{2 \pi}} \exp\left(-\frac{1}{2} \left( \frac{\log(x) - \mu}{\sigma} \right)^2 \right) , \quad x > 0.\] The following relation holds: \[X \sim \mathcal{LN}(\mu, \sigma^2)\Leftrightarrow Y = \log X \sim \mathcal{N}(\mu, \sigma^2).\] The properties of the lognormal distribution are summarised below. \begin{itemize} \item The mean and variance of \(X\) are \[\mathrm{E}[X] = \exp\left(\mu + \frac{1}{2} \sigma^2 \right) \text{ and } \mathrm{Var}[X] =\exp\left(2\mu + \sigma^2 \right) (\exp(\sigma^2) - 1).\] \item The \(r\)-th moment about the origin is \[\mathrm{E}[X^r] =\exp\left(r\mu + \frac{1}{2}r^2 \sigma^2 \right).\] \item The moment generating function (mgf) of \(X\) is not finite for any positive value of \(t\). \item The coefficient of skewness is \[(\exp(\sigma^2) + 2) \left(\exp(\sigma^2) -1 \right)^{1/2} .\] \end{itemize} The probability density functions (pdf) of lognormal distributions with \(\mu = 0\) and \(\sigma = 0.25\) or \(1\) are shown in Figure \ref{fig:FigLognormal}. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{10}\NormalTok{)), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{x)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{y=}\StringTok{"Probability density"}\NormalTok{, }\AttributeTok{x =} \StringTok{"x"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"lognormal distribution"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{theme}\NormalTok{(}\AttributeTok{plot.title =} \FunctionTok{element\_text}\NormalTok{(}\AttributeTok{hjust =} \FloatTok{0.5}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dlnorm, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{meanlog =} \DecValTok{0}\NormalTok{, }\AttributeTok{sdlog =} \FloatTok{0.25}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"0.25"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dlnorm, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{meanlog =} \DecValTok{0}\NormalTok{, }\AttributeTok{sdlog =} \DecValTok{1}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"1"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_colour\_manual}\NormalTok{(}\FunctionTok{expression}\NormalTok{(}\FunctionTok{paste}\NormalTok{(mu, }\StringTok{" = 0 and "}\NormalTok{, sigma, }\StringTok{"= "}\NormalTok{)), }\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"blue"}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FigLognormal-1.pdf} \caption{\label{fig:FigLognormal}The probability density functions (pdf) of lognormal distributions with mu = 0 and sigma = 0.25 or 1.} \end{figure} \hypertarget{pareto-distribution}{% \section{Pareto distribution}\label{pareto-distribution}} A random variable \(X\) has a Pareto distribution with parameters \(\alpha > 0\) and \(\lambda > 0\), denoted by \(X \sim \text{Pa}(\alpha, \lambda)\) if its probability density function is given by \[f_X(x) = \frac{\alpha \lambda^\alpha}{(\lambda + x)^{\alpha + 1}}, \quad x > 0.\] The distribution function is given by \[F_X(x) = 1 - \left( \frac{\lambda}{\lambda + x} \right)^\alpha, \quad x > 0.\] The properties of the Pareto distribution are summarised below. \begin{itemize} \item The mean and variance of \(X\) are \[\mathrm{E}[X] = \frac{\lambda}{\alpha - 1}, \alpha > 1 \text{ and } \mathrm{Var}[X] = \frac{\alpha \lambda^2}{(\alpha - 1)^2(\alpha - 2)}, \alpha > 2.\] \item The \(r\)-th moment about the origin is \[\mathrm{E}[X^r] =\frac{\Gamma(\alpha-r) \Gamma(1+ r)}{\Gamma(\alpha)} \lambda^r, \quad 0 < r < \alpha.\] \item The moment generating function (mgf) of \(X\) is not finite for any positive value of \(t\). \item The coefficient of skewness is \[\frac{2(\alpha + 1)}{\alpha - 3} \sqrt{\frac{\alpha-2}{\alpha}} , \quad \alpha > 3.\] \end{itemize} \textbf{Note} 1. The following conditional tail property for a Pareto distribution is useful for reinsurance calculations. Let \(X \sim \text{Pa}(\alpha, \lambda)\). Then the random variable \(X - w\) conditional on \(X > w\) has a Pareto distribution with parameters \(\alpha\) and \(\lambda + w\), i.e. 
\[X \sim \text{Pa}(\alpha, \lambda)\Rightarrow X - w | X > w \sim \text{Pa}(\alpha,\lambda + w).\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item The lognormal and Pareto distributions, in practice, provide a better fit to claim amounts than the exponential and gamma distributions. \item Other loss distributions are useful in practice, including the \textbf{Burr, Weibull and loggamma distributions}. \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(actuar)} \FunctionTok{ggplot}\NormalTok{(}\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{x=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{60}\NormalTok{)), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{x)) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{y=}\StringTok{"Probability density"}\NormalTok{, }\AttributeTok{x =} \StringTok{"x"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"Pareto distribution"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{theme}\NormalTok{(}\AttributeTok{plot.title =} \FunctionTok{element\_text}\NormalTok{(}\AttributeTok{hjust =} \FloatTok{0.5}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dpareto, }\AttributeTok{args=}\FunctionTok{list}\NormalTok{(}\AttributeTok{shape=}\DecValTok{3}\NormalTok{, }\AttributeTok{scale=}\DecValTok{20}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"alpha = 3, lambda = 20"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dpareto, }\AttributeTok{args=}\FunctionTok{list}\NormalTok{(}\AttributeTok{shape=}\DecValTok{6}\NormalTok{, }\AttributeTok{scale=}\DecValTok{50}\NormalTok{), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"alpha = 6, lambda = 50"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_colour\_manual}\NormalTok{(}\StringTok{"Parameters"}\NormalTok{, }\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"blue"}\NormalTok{), }\AttributeTok{labels =} \FunctionTok{c}\NormalTok{(}\FunctionTok{expression}\NormalTok{(}\FunctionTok{paste}\NormalTok{(alpha, }\StringTok{" = 3 and "}\NormalTok{, lambda, }\StringTok{"= 20"}\NormalTok{)), }\FunctionTok{expression}\NormalTok{(}\FunctionTok{paste}\NormalTok{(alpha, }\StringTok{" = 6 and "}\NormalTok{, lambda, }\StringTok{"= 50"}\NormalTok{)))) } \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FigPareto-1.pdf} \caption{\label{fig:FigPareto}The probability density functions (pdf) of Pareto distributions with parameters alpha = 3, lambda = 20 and alpha = 6, lambda = 50.} \end{figure} \begin{example} \protect\hypertarget{exm:exampleFittingClaimSizes}{}\label{exm:exampleFittingClaimSizes} \emph{Consider a data set consisting of 200 claim amounts in one year from a general insurance portfolio.} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \emph{Calculate the sample mean and sample standard deviation.} \item \emph{Use the method of moments to fit these data with both exponential and gamma distributions.} \item \emph{Calculate the boundaries for groups or bins so that the expected number of claims in each bin is 20 under the fitted exponential distribution.} \item \emph{Count the observed claim amounts in each bin.} \item \emph{With these bin boundaries, find the expected number of claims when the data are fitted with the 
gamma, lognormal and Pareto distributions.} \item \emph{Plot a histogram for the data set along with fitted exponential distribution and fitted gamma distribution. In addition, plot another histogram for the data set along with fitted lognormal and fitted Pareto distribution.} \item \emph{Comment on the goodness of fit of the fitted distributions.} \end{enumerate} \end{example} \textbf{Solution:} 1. Given that \(\sum_{i=1}^n x_i = 206046.4\) and \(\sum_{i=1}^n x_i^2 = 1,472,400,135\), we have \[\bar{x} = \frac{\sum_{i=1}^n x_i}{n} = \frac{206046.4}{200} = 1030.232.\] The sample variance and standard deviation are \[s^2 = \frac{1}{n-1} \left( \sum_{i=1}^n x_i^2 - \frac{(\sum_{i=1}^n x_i)^2}{n} \right) = 6332284,\] and \[s = 2516.403.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item We calculate estimates of unknown parameters of both exponential and gamma distributions by the method of moments. We simply match the mean and central moments, i.e.~matching \(\mathrm{E}[X]\) to the sample mean \(\bar{x}\) and \(\mathrm{Var}[X]\) to the sample variance. The MME (moment matching estimation) of the required distributions are as follows: \begin{itemize} \item the MME of \(\lambda\) for an \(\text{Exp}(\lambda)\) distribution is the reciprocal of the sample mean, \[\tilde{\lambda} = \frac{1}{\bar{x}} = 0.000971.\] \item the MMEs of \(\alpha\) and \(\lambda\) for a \(\mathcal{G}(\alpha, \lambda)\) distribution are \[\begin{aligned} \tilde{\alpha} &= \left(\frac{\bar{x}}{s}\right)^2 = 0.167614, \\ \tilde{\lambda} &= \frac{\tilde{\alpha}}{\bar{x}} = 0.000163.\end{aligned}\] \item the MMEs of \(\mu\) and \(\sigma\) for a \(\mathcal{LN}(\mu, \sigma^2)\) distribution are \[\begin{aligned} \tilde{\sigma} &= \sqrt{ \ln \left( \frac{s^2}{\bar{x}^2} + 1 \right) } = 1.393218, \\ \tilde{\mu} &= \ln(\bar{x}) - \frac{\tilde{\sigma}^2 }{2} = 5.967012.\end{aligned}\] \item the MMEs of \(\alpha\) and \(\lambda\) for a \(\text{Pa}(\alpha, \lambda)\) distribution are \[\begin{aligned} \tilde{\alpha} &= \displaystyle{ 2 \left( \frac{s^2}{\bar{x}^2} \right) \frac{1}{(\frac{s^2}{\bar{x}^2} - 1)} } = 2.402731,\\ \tilde{\lambda} &= \bar{x} (\tilde{\alpha} - 1) = 1445.138.\end{aligned}\] \end{itemize} \item The upper boundaries for the 10 groups or bins so that the expected number of claims in each bin is 20 under the fitted exponential distribution are determined by \[\Pr(X \le \text{upbd}_j) = \frac{j}{10}, \quad j = 1,2,3, \ldots, 9.\] With \(\tilde{\lambda}\) from the MME for an \(\text{Exp}(\lambda)\) from the previous, \[\Pr(X \le x) = 1 - \exp(-\tilde{\lambda} x).\] We obtain \[\text{upbd}_j = -\frac{1}{\tilde{\lambda}} \ln\left( 1 - \frac{j}{10}\right).\] The results are given in Table \ref{tab:tableFitted}. \item The following table shows frequency distributions for observed and fitted claims sizes for exponential, gamma, and also lognormal and Pareto fits. 
\end{enumerate} \begin{longtable}[]{@{}rrrrrr@{}} \caption{\label{tab:tableFitted} Frequency distributions for observed and fitted claims sizes.}\tabularnewline \toprule Range & Observation & Exp & Gamma & Lognormal & Pareto \\ \midrule \endfirsthead \toprule Range & Observation & Exp & Gamma & Lognormal & Pareto \\ \midrule \endhead (0,109{]} & 60 & 20 & 109.4 & 36 & 31.9 \\ (109,230{]} & 31 & 20 & 14.3 & 34.4 & 27.8 \\ (230,367{]} & 25 & 20 & 9.7 & 26 & 24.2 \\ (367,526{]} & 17 & 20 & 7.8 & 20.5 & 21.2 \\ (526,714{]} & 14 & 20 & 6.8 & 16.6 & 18.6 \\ (714,944{]} & 13 & 20 & 6.3 & 13.9 & 16.4 \\ (944,1240{]} & 6 & 20 & 6.2 & 11.9 & 14.6 \\ (1240,1658{]} & 7 & 20 & 6.5 & 10.8 & 13.2 \\ (1658,2372{]} & 10 & 20 & 7.7 & 10.4 & 12.5 \\ (2372,\(\infty\)) & 17 & 20 & 25.4 & 19.5 & 19.4 \\ \bottomrule \end{longtable} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{4} \item Let \(X\) be the claim size. \begin{itemize} \item The expected number of claims for the fitted exponential distribution in the range \((a,b]\) is \[200 \cdot \Pr( a < X \le b) = 200( e^{-\tilde{\lambda} a} - e^{-\tilde{\lambda} b} ).\] In our case, the expected frequencies under the fitted exponential distribution are given in the third column of Table \ref{tab:tableFitted}. \item (Excel) The expected number of claims for the fitted gamma distribution in the range \((a,b]\) is \[200 \cdot\left( \text{GAMMADIST}\left(b, \tilde{\alpha}, \frac{1}{\tilde{\lambda}}, \text{TRUE}\right) - \text{GAMMADIST}\left(a, \tilde{\alpha}, \frac{1}{\tilde{\lambda}}, \text{TRUE}\right) \right).\] The expected frequencies under the fitted gamma distribution are given in the fourth column of Table \ref{tab:tableFitted}. \item (Excel) For the fitted lognormal, the expected number of claims in the range \((a,b]\) can be obtained from \[200 \cdot\left( \text{NORMDIST} \left(\frac{LN(b) - \tilde{\mu}}{\tilde{\sigma}}\right) - \text{NORMDIST}\left(\frac{LN(a) - \tilde{\mu}}{\tilde{\sigma}}\right) \right).\] \item For the fitted Pareto distribution, the expected number of claims in the range \((a,b]\) can be obtained from \[200 \left[ \left(\frac{\tilde{\lambda}}{\tilde{\lambda} + a} \right)^{\tilde{\alpha}} - \left(\frac{\tilde{\lambda}}{\tilde{\lambda} + b} \right)^{\tilde{\alpha}} \right].\] \end{itemize} \item The histograms for the data set with fitted distributions are shown in Figures \ref{fig:FittedExpGamma} and \ref{fig:FittedLognormalPareto}. \item Comments: \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item The high positive skewness of the sample reflects the fact that SD is large when compared to the mean. Consequently, the exponential distribution may not fit the data well. \item Five claims (2.5\%) are greater than 10,000, which is one of the main features of the loss distribution. \item The fit is poor for the exponential distribution, as we see that the model under-fits the data for small claims up to 367 and over-fits for large claims between 944 to 2372. The gamma fit is again poor. We see that the model over-fits for small claims between 0-109 and under-fits for claims 230 and 944. \item Which one of the lognormal and Pareto distributions provides a better fit to the observed claim data? 
\end{enumerate} \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(stats)} \FunctionTok{library}\NormalTok{(MASS)} \FunctionTok{library}\NormalTok{(ggplot2)} \NormalTok{xbar }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)} \NormalTok{s }\OtherTok{\textless{}{-}} \FunctionTok{sd}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)} \CommentTok{\# MME of alpha and lambda for Gamma distribution} \NormalTok{alpha\_tilde }\OtherTok{\textless{}{-}}\NormalTok{ (xbar}\SpecialCharTok{/}\NormalTok{s)}\SpecialCharTok{\^{}}\DecValTok{2} \NormalTok{lambda\_tilde }\OtherTok{\textless{}{-}}\NormalTok{ alpha\_tilde}\SpecialCharTok{/}\NormalTok{xbar} \FunctionTok{ggplot}\NormalTok{(dat) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\FunctionTok{aes}\NormalTok{(}\AttributeTok{x =}\NormalTok{ claims, }\AttributeTok{y =}\NormalTok{ ..density..), }\AttributeTok{bins =} \DecValTok{90}\NormalTok{ , }\AttributeTok{fill =} \StringTok{"grey"}\NormalTok{, }\AttributeTok{color =} \StringTok{"black"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dexp, }\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =}\NormalTok{ (}\AttributeTok{rate =} \DecValTok{1}\SpecialCharTok{/}\FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"Exponential"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dgamma, }\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{shape =}\NormalTok{ alpha\_tilde ,}\AttributeTok{rate =}\NormalTok{ lambda\_tilde), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"Gamma"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{ylim}\NormalTok{(}\DecValTok{0}\NormalTok{, }\FloatTok{0.0015}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_color\_discrete}\NormalTok{(}\AttributeTok{name=}\StringTok{"Fitted Distributions"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FittedExpGamma-1.pdf} \caption{\label{fig:FittedExpGamma}Histogram of claim sizes with fitted exponential and gamma distributions.} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(actuar)} \CommentTok{\# MME of mu and sigma for lognormal distribution} \NormalTok{sigma\_tilda }\OtherTok{\textless{}{-}} \FunctionTok{sqrt}\NormalTok{(}\FunctionTok{log}\NormalTok{( }\FunctionTok{var}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{/}\FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{+}\DecValTok{1}\NormalTok{ )) }\CommentTok{\# gives \textbackslash{}tilde\textbackslash{}sigma} \NormalTok{mu\_tilda }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(}\FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)) }\SpecialCharTok{{-}}\NormalTok{ sigma\_tilda}\SpecialCharTok{\^{}}\DecValTok{2}\SpecialCharTok{/}\DecValTok{2} \CommentTok{\# gives \textbackslash{}tilde\textbackslash{}mu} \CommentTok{\# MME of alpha and lambda for Pareto distribution} \NormalTok{alpha\_tilda }\OtherTok{\textless{}{-}} 
\DecValTok{2}\SpecialCharTok{*}\FunctionTok{var}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{/}\FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{*} \DecValTok{1}\SpecialCharTok{/}\NormalTok{(}\FunctionTok{var}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{/}\FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{{-}} \DecValTok{1}\NormalTok{) }\CommentTok{\#/tilde/alpha} \NormalTok{lambda\_tilda }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(dat}\SpecialCharTok{$}\NormalTok{claims)}\SpecialCharTok{*}\NormalTok{(alpha\_tilda }\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{)} \FunctionTok{ggplot}\NormalTok{(dat) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\FunctionTok{aes}\NormalTok{(}\AttributeTok{x =}\NormalTok{ claims, }\AttributeTok{y =}\NormalTok{ ..density..), }\AttributeTok{bins =} \DecValTok{90}\NormalTok{ , }\AttributeTok{fill =} \StringTok{"grey"}\NormalTok{, }\AttributeTok{color =} \StringTok{"black"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dlnorm, }\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{meanlog =}\NormalTok{ mu\_tilda, }\AttributeTok{sdlog =}\NormalTok{ sigma\_tilda), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"Lognormal"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{stat\_function}\NormalTok{(}\AttributeTok{fun=}\NormalTok{dpareto, }\AttributeTok{geom =}\StringTok{"line"}\NormalTok{, }\AttributeTok{args =} \FunctionTok{list}\NormalTok{(}\AttributeTok{shape =}\NormalTok{ alpha\_tilda, }\AttributeTok{scale =}\NormalTok{ lambda\_tilda), }\FunctionTok{aes}\NormalTok{(}\AttributeTok{colour =} \StringTok{"Pareto"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{scale\_color\_discrete}\NormalTok{(}\AttributeTok{name=}\StringTok{"Fitted Distributions"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/FittedLognormalPareto-1.pdf} \caption{\label{fig:FittedLognormalPareto}Histogram of claim sizes with fitted lognormal and Pareto distributions.} \end{figure} Let us plot the histogram of claim sizes with the fitted exponential and gamma distributions. Note that the data set is stored in the variable \texttt{dat}. 
The following code can be used to obtain the expected number of claims in each class for the fitted exponential distribution and to perform a goodness-of-fit test.
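A minimal sketch, assuming the claim sizes are stored in \texttt{dat\$claims}: the sample is grouped into classes that are equiprobable under the fitted exponential distribution, the expected counts are obtained from \texttt{pexp}, and \texttt{chisq.test} compares them with the observed counts.
\begin{verbatim}
# Sketch: chi-square goodness-of-fit test for the fitted exponential distribution
# (assumes the claim sizes are stored in dat$claims)
rate_hat <- 1/mean(dat$claims)                           # fitted exponential rate
breaks   <- qexp(seq(0, 1, by = 0.1), rate = rate_hat)   # ten equiprobable classes
observed <- table(cut(dat$claims, breaks = breaks))
expected <- length(dat$claims) * diff(pexp(breaks, rate = rate_hat))
chisq.test(x = as.numeric(observed), p = expected / sum(expected))
\end{verbatim}
Strictly speaking, the degrees of freedom should be reduced by one for the estimated parameter, which \texttt{chisq.test} does not do automatically.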
\hypertarget{deductibles-and-reinsurance}{%
\chapter{Deductibles and reinsurance}\label{deductibles-and-reinsurance}}
\hypertarget{introduction-1}{%
\section{Introduction}\label{introduction-1}}
In this chapter, we will introduce the concept of risk-sharing. We will consider two types of risk-sharing: deductibles and reinsurance. The purpose of risk sharing is to spread the risk among the parties involved. For example,
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item A policyholder purchases automobile insurance with a deductible. The policyholder is responsible for some of the risk, and transfers the larger portion of the risk to the insurer. The policyholder will submit a claim when the loss exceeds the deductible.
\item A direct insurer can pass on some of the risks to another insurance company, known as a reinsurer, by purchasing insurance from the reinsurer. This protects the insurer from paying large claims.
\end{enumerate}
The main goals of the chapter include the derivation of the distribution and corresponding moments of the claim amounts paid by the policyholder, the direct insurer and the reinsurer in the presence of risk-sharing arrangements. In addition, we will see that risk-sharing arrangements reduce the mean and variability of the amount paid by the direct insurer, as well as the probability that the insurer is involved in very large claims.
\hypertarget{deductibles}{%
\section{Deductibles}\label{deductibles}}
The insurer can modify the policy so that the policyholder is responsible for some of the risk by including a deductible (also known as a policy excess). Given a financial loss of \(X\) and a deductible of \(d\),
\begin{itemize}
\item the insured agrees to bear the first amount \(d\) of any loss \(X\), and only submits a claim when \(X\) exceeds \(d\);
\item the insurer pays the remaining amount \(X - d\) if the loss \(X\) exceeds \(d\).
\end{itemize}
For example, suppose a policy has a deductible of 1000, and you incur a loss of 3000 in a car accident.
You pay the deductible of 1000 and the car insurance company pays the remaining 2000. Let \(X\) be the claim amount, and \(V\) and \(Y\) the amounts of the claim paid by the policyholder and the (direct) insurer, respectively, i.e. \[X = V + Y.\] So the amounts paid by the policyholder and the insurer are given by \[\begin{aligned} V = \begin{cases} X &\text{if } X \le d\\ d &\text{if } X > d, \end{cases} \\ Y = \begin{cases} 0 &\text{if } X \le d\\ X - d &\text{if } X > d. \end{cases}\end{aligned}\] The amounts \(V\) and \(Y\) can also be expressed as \[V = \min(X,d), \quad Y = \max(0,X-d).\] The relationship between the policyholder and insurer is similar to that between the insurer and reinsurer. Therefore, the detailed analysis of a policy with a deductible is analogous to reinsurance, which will be discussed in the following section.
\hypertarget{reinsurance}{%
\section{Reinsurance}\label{reinsurance}}
Reinsurance is insurance purchased by an insurance company in order to protect itself from large claims. There are two main types of reinsurance arrangement:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item excess of loss reinsurance; and
\item proportional reinsurance.
\end{enumerate}
\hypertarget{excess-of-loss-reinsurance}{%
\section{Excess of loss reinsurance}\label{excess-of-loss-reinsurance}}
Under an excess of loss reinsurance arrangement, the direct insurer sets a certain limit called a retention level \(M >0\). For a claim \(X\),
\begin{itemize}
\item the insurance company pays any claim in full if \(X \le M\); and
\item the reinsurer (or reinsurance company) pays the remaining amount of \(X - M\) if \(X > M\).
\end{itemize}
The position of the reinsurer under excess of loss reinsurance is the same as that of the insurer for a policy with a deductible. Let \(X\) be the claim amount, and \(V, Y\) and \(Z\) the amounts of the claim paid by the policyholder, (direct) insurer and reinsurer, respectively, i.e.~\[X = V + Y + Z.\] In what follows, unless stated otherwise, we consider the case in which there is no deductible in place, i.e.~\(V = 0\) and \[X = Y + Z.\] So the amounts paid by the direct insurer and the reinsurer are given by \[\begin{aligned} Y = \begin{cases} X &\text{if } X \le M\\ M &\text{if } X > M, \end{cases} \\ Z = \begin{cases} 0 &\text{if } X \le M\\ X - M &\text{if } X > M. \end{cases}\end{aligned}\] The amounts \(Y\) and \(Z\) can also be expressed as \[Y = \min(X,M), \quad Z = \max(0,X-M).\]
\begin{example}
\protect\hypertarget{exm:examplePayouts}{}\label{exm:examplePayouts} Suppose a policy has a deductible of 1000 and the insurer arranges excess of loss reinsurance with a retention level of 10000. A sample of loss amounts in one year consists of the following values, in units of Thai baht: \[3000, 800, 25000, 5000, 20000.\] \emph{Calculate the total amount paid by:}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \emph{the policyholder;}
\item \emph{the insurer; and}
\item \emph{the reinsurer.}
\end{enumerate}
\end{example}
\textbf{Solution:} The total amounts paid by
\begin{itemize}
\item the policyholder: \[1000 + 800 + 1000 + 1000 + 1000 = 4800.\]
\item the insurer: \[2000 + 0 + 10000 + 4000 + 10000 = 26000.\]
\item the reinsurer: \[0 + 0 + 14000 + 0 + 9000 = 23000.\]
\end{itemize}
\hypertarget{mixed-distributions}{%
\section{Mixed distributions}\label{mixed-distributions}}
In the subsequent sections, we will derive the probability distributions of the random variables \(Y\) and \(Z\), which are the insurer's and reinsurer's payouts on claims.
Their distributions are neither purely continuous nor purely discrete. We first state some important properties of such random variables. A random variable \(U\) which is partly discrete and partly continuous is said to have a mixed distribution. The distribution function of \(U\), denoted by \(F_U(x)\), is continuous and differentiable except for some values of \(x\) in a countable set \(S\). For a mixed distribution \(U\), there exists a function \(f_U(x)\) such that \[F_U(x) = \Pr(U \le x) = \int_{-\infty}^x f_U(t) \, dt + \sum_{x_i \in S, x_i \le x } \Pr(U = x_i).\] The expected value of \(g(U)\) for some function \(g\) is given by \begin{equation} \label{eq:eqnExpectationMixed} \mathrm{E}[g(U)] = \int_{-\infty}^\infty g(x) f_U(x) \mathop{}\!d{x} + \sum_{x_i \in S } g(x_i) \Pr(U = x_i). \end{equation} It is the sum of the integral over the intervals on which \(f_U(x)\) is continuous and the summation over the points in \(S\). The function \(f_U(x)\) is not the probability density function of \(U\) because \(\int_{-\infty}^\infty f_U(x) dx \neq 1\). Rather, it is the derivative of \(F_U(x)\) at the points where \(F_U(x)\) is continuous and differentiable. Recall that \(X\) denotes the claim amount and \(Y\) and \(Z\) the amounts of the claim paid by the insurer and reinsurer. The distribution function and the density function of the claim amount \(X\) are denoted by \(F_{X}\) and \(f_X(x)\), where we assume that \(X\) is continuous. In the following examples, we will derive the distribution, mean and variance of the random variables \(Y\) and \(Z\). Furthermore, both random variables \(Y\) and \(Z\) are examples of mixed distributions.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-30}{}\label{exm:unlabeled-div-30} \emph{Let \(F_Y\) denote the distribution function of \(Y = \min(X,M)\). It follows that \[F_Y(x) = \begin{cases} F_X(x) &\text{if } x < M\\ 1 &\text{if } x \ge M \end{cases}.\] Hence, \(Y\) is said to have a mixed distribution.}
\end{example}
\textbf{Solution:} From \(Y = \min(X,M)\), if \(y < M\), then \[F_Y(y) = \Pr(Y \le y) = \Pr(X \le y) = F_X(y).\] If \(y \ge M\), then \[F_Y(y) = \Pr(Y \le y) = 1,\] which follows because \(\min(X,M) \le M\). Hence, \(Y\) is mixed with a density function \(f_X(x)\), for \(0 \le x < M\), and a mass of probability at \(M\), with \(\Pr(Y = M) = 1 - F_X(M)\). The last equality follows from \[\begin{aligned} \Pr(Y = M) &= \Pr(X > M) \\ &= 1 - \Pr(X \le M) = 1 - F_X(M). \end{aligned}\]
\begin{example}
\protect\hypertarget{exm:unlabeled-div-31}{}\label{exm:unlabeled-div-31} \emph{Show that \[\mathrm{E}[Y] = \mathrm{E}[\min(X,M)] = \mathrm{E}[X] - \int_0^\infty y f_X(y+M) \mathop{}\!dy.\]}
\end{example}
\(\mathrm{E}[Y]\) is the expected payout by the insurer.
\[\begin{aligned} \mathrm{E}[Y] &= \mathrm{E}[\min(X,M)] \\ &= \int_0^\infty \min(x,M) \cdot f_X(x) \, dx \\ &= \int_0^M x \cdot f_X(x) \, dx + \int_M^\infty M \cdot f_X(x) \, dx \\ &= \int_0^M x \cdot f_X(x) \, dx + \int_M^\infty x \cdot f_X(x) \, dx + \int_M^\infty (M - x) \cdot f_X(x) \, dx \\ &= \mathrm{E}[X] + \int_M^\infty (M - x) \cdot f_X(x) \, dx \\ &= \mathrm{E}[X] + \int_0^\infty (-y) \cdot f_X(y+M) \, dy \\ &= \mathrm{E}[X] - \int_0^\infty y \cdot f_X(y+M) \, dy \end{aligned}\]
\textbf{Note} Under an excess of loss reinsurance arrangement, the mean amount paid by the insurer is reduced by an amount equal to \(\int_0^\infty y f_X(y+M) \mathop{}\!dy.\)
\begin{example}
\protect\hypertarget{exm:unlabeled-div-32}{}\label{exm:unlabeled-div-32} \emph{Let \(X\) have an exponential distribution with parameter \(\lambda\) and \(Y = \min(X,M)\). Then \[F_Y(x) = \begin{cases} 1 - e^{-\lambda x} &\text{if } x < M\\ 1 &\text{if } x \ge M \end{cases}.\] A plot of the distribution function \(F_Y\) is given in Figure~\protect\hyperlink{figMixedDist}{1}. Hence, \(Y\) has a mixed distribution with a density function \(f_Y(x) = f_X(x)\) for \(0 < x < M\) and a probability mass at \(M\) given by \(\Pr(Y = M) = 1 - F_X(M)\).} Using \eqref{eq:eqnExpectationMixed}, the expected value of \(Y\), \(\mathrm{E}[Y]\), is given by \[\begin{aligned} \mathrm{E}[Y] &= \int_{0}^M x f_X(x) \mathop{}\!d{x} + M(1 - F_X(M)). \end{aligned}\]
\end{example}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-33}{}\label{exm:unlabeled-div-33} \emph{Let \(F_Z\) denote the distribution function of \(Z = \max(0,X-M)\). It follows that \[F_Z(x) = \begin{cases} F_{X}(M) &\text{if } x = 0\\ F_{X}(x + M) &\text{if } x > 0 \end{cases}.\] Hence, \(Z\) has a mixed distribution with a mass of probability at \(0\).}
\end{example}
\textbf{Solution:} The random variable \(Z\) is the \textbf{reinsurer's payout}, which also includes \textbf{zero claims}. Later we will consider only \textbf{reinsurance claims}, which involve the reinsurer, i.e.~claims such that \(X > M\). The distribution of \(Z\) can be derived as follows:
\begin{itemize}
\item For \(x =0\), \[F_Z(0) = \Pr(Z = 0) = \Pr(X \le M) = F_X(M).\]
\item For \(x > 0\), \[\begin{aligned} F_Z(x) &= \Pr(Z \le x) = \Pr(\max(0,X-M) \le x) \\ &= \Pr(X- M \le x) = \Pr(X \le x + M) = F_X(x + M).\end{aligned}\]
\end{itemize}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-34}{}\label{exm:unlabeled-div-34} \emph{Let \(X\) have an exponential distribution with parameter \(\lambda\) and \(Z = \max(0,X-M)\).
Derive and plot the probability distribution \(F_Z\) for \(\lambda = 1\) and \(M = 2\).}
\end{example}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-35}{}\label{exm:unlabeled-div-35} \emph{Show that \[\mathrm{E}[Z] = \mathrm{E}[\max(0,X-M)] = \int_M^\infty (x- M) f_X(x) \mathop{}\!dx = \int_0^\infty y f_X(y+M) \mathop{}\!dy.\] Comment on the result.}
\end{example}
\textbf{Solution:} The expected payout on the claim by the reinsurer, \(\mathrm{E}[Z]\), can also be found directly as follows: \[\begin{aligned} \mathrm{E}[Z] &= \mathrm{E}[\max(0,X-M)] \\ &= \int_0^M 0 \cdot f_X(x) \, dx + \int_M^\infty (x-M) \cdot f_X(x) \, dx \\ &= 0 + \int_0^\infty y \cdot f_X(y + M) \, dy.\end{aligned}\] It follows from the previous results that \[\mathrm{E}[X] = \mathrm{E}[Y + Z] = \mathrm{E}[Y]+ \mathrm{E}[Z].\]
\begin{example}
\protect\hypertarget{exm:unlabeled-div-36}{}\label{exm:unlabeled-div-36} \emph{Let the claim amount \(X\) have an exponential distribution with mean \(\mu = 1/\lambda\).}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \emph{Find the proportion of claims which involve the reinsurer.}
\item \emph{Find the insurer's expected payout on a claim.}
\item \emph{Find the reinsurer's expected payout on a claim.}
\end{enumerate}
\end{example}
\textbf{Solution:} 1. The proportion of claims which involve the reinsurer is \[\Pr(X > M) = 1 - F_X(M) = e^{-\lambda M} = e^{-M/\mu}.\]
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item The insurer's expected payout on a claim can be calculated by \[\begin{aligned} \mathrm{E}[Y] &= \mathrm{E}[X] - \int_0^\infty y \cdot \lambda e^{-\lambda(y+M)} \, dy \\ &= \mathrm{E}[X] - e^{-\lambda M} \int_0^\infty y \cdot \lambda e^{-\lambda \cdot y} \, dy \\ &= \mathrm{E}[X] - e^{-\lambda M} \mathrm{E}[X] \\ &= (1 - e^{-\lambda M}) \mathrm{E}[X].\end{aligned}\]
\item It follows from the above result that the reinsurer's expected payout on a claim is \(e^{-\lambda M} \mathrm{E}[X].\)
\end{enumerate}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-37}{}\label{exm:unlabeled-div-37} \emph{An insurer covers an individual loss \(X\) with excess of loss reinsurance with retention level \(M\). Let \(f_X(x)\) and \(F_X(x)\) denote the pdf and cdf of \(X\), respectively.}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \emph{Show that the variance of the amount paid by the insurer on a single claim satisfies: \[\mathrm{Var}[\min(X,M)] = \int_0^M x^2 f_X(x) \mathop{}\!dx + M^2 (1 - F_X(M)) - (\mathrm{E}[\min(X,M)])^2.\]}
\item \emph{Show that the variance of the amount paid by the reinsurer on a single claim satisfies: \[\mathrm{Var}[\max(0,X-M)] = \int_M^\infty (x-M)^2 f_X(x) \mathop{}\!dx - (\mathrm{E}[\max(0,X-M)])^2.\]}
\end{enumerate}
\end{example}
\hypertarget{the-distribution-of-reinsurance-claims}{%
\section{The distribution of reinsurance claims}\label{the-distribution-of-reinsurance-claims}}
In practice, the reinsurer is only involved in claims which exceed the retention level, i.e.~\(X > M\). Information on claims which are less than or equal to \(M\) may not be available to the reinsurer. The claim amount \(Z\) paid by the reinsurer can be modified accordingly to take only the non-zero claim amounts into account. Recall from Example \ref{exm:examplePayouts} that only two claims exceed the retention level of 10000. The corresponding reinsurance payments of 14000 and 9000 (totalling 23000), which involve the reinsurer, are known as \textbf{reinsurance claims}.
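As a quick numerical illustration of Example \ref{exm:examplePayouts}, the payments and the resulting reinsurance claims can be computed in R with \texttt{pmin} and \texttt{pmax} (a minimal sketch; the variable names are illustrative):
\begin{verbatim}
# Losses from the example, with deductible d and retention level M
losses <- c(3000, 800, 25000, 5000, 20000)
d <- 1000
M <- 10000

policyholder <- pmin(losses, d)               # V = min(X, d)
insurer      <- pmin(pmax(losses - d, 0), M)  # insurer's payment, capped at M
reinsurer    <- pmax(losses - d - M, 0)       # reinsurer's payment

c(sum(policyholder), sum(insurer), sum(reinsurer))  # 4800, 26000, 23000
reinsurer[reinsurer > 0]                            # reinsurance claims: 14000, 9000
\end{verbatim}
Here \texttt{pmin} and \texttt{pmax} are the element-wise minimum and maximum, mirroring the expressions \(\min(X,d)\) and \(\max(0, X - d - M)\) implied by the example.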
Let \(W = Z|Z>0\) be a random variable representing the amount of a non-zero payment by the reinsurer on a reinsurance claim. The distribution and density of \(W\) can be calculated as follows: for \(x > 0\), \[\begin{aligned} \Pr[W \le x ] &= \Pr[Z \le x | Z >0] \\ &= \Pr[X - M \le x | X > M] \\ &= \frac{\Pr[M < X \le x + M]}{\Pr[X > M]}\\ &= \frac{F_X(x+M) - F_X(M)}{1-F_X(M)}.\end{aligned}\] Differentiating with respect to \(x\), we obtain the density function of \(W\) as \[f_W(x) = \frac{f_X(x+M)}{1 - F_X(M)}.\] Hence, the mean and variance can be obtained directly from the density function of \(W\).
\hypertarget{proportional-reinsurance}{%
\section{Proportional reinsurance}\label{proportional-reinsurance}}
Under a proportional reinsurance arrangement, the direct insurer pays a fixed proportion \(\alpha\) of each claim, called the proportion of the risk retained by the insurer, and the reinsurer pays the remainder of the claim. Let \(X\) be the claim amount, and \(Y\) and \(Z\) the amounts of the claim paid by the (direct) insurer and reinsurer, respectively, i.e. \[X = Y + Z.\] So the amounts paid by the direct insurer and the reinsurer are given by \[Y = \alpha X, \quad Z = (1 - \alpha) X.\] Both random variables are simply \(X\) scaled by the factors \(\alpha\) and \(1- \alpha\), respectively.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-38}{}\label{exm:unlabeled-div-38} \emph{Derive the distribution function and density function of \(Y\).}
\end{example}
\textbf{Solution:} Let \(X\) have distribution function \(F\) and density function \(f\). The distribution function of \(Y\) is given by \[\Pr(Y \le x) = \Pr(\alpha X \le x) = F(x/\alpha).\] Hence, the density function is \[f_Y(x) = \frac{1}{\alpha}f(x/\alpha).\] You can get more examples from the Tutorials.
\hypertarget{collective-risk-model}{%
\chapter{Collective Risk Model}\label{collective-risk-model}}
Mathematical models of the total amount of claims from a portfolio of policies over a short period of time will be presented in this chapter. The models are referred to as short term risk models. Two main sources of uncertainty, namely the number of claims and the claim sizes, will be taken into consideration. We will begin with the model for aggregate (total) claims, or collective risk model. We define the following random variables:
\begin{itemize}
\item \(S\) denotes the total amount of claims from a portfolio of policies in a fixed time interval, e.g.~one year,
\item \(N\) represents the number of claims, and
\item \(X_i\) denotes the amount of the \(i\)th claim.
\end{itemize}
Then the total claims \(S\) is given by \[S = X_1 + \ldots + X_N.\] The following assumptions are made for deriving the collective risk model:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \(\{X_i \}_{i=1}^\infty\) are independent and identically distributed with distribution function \(F_X\).
\item \(N\) is independent of \(\{X_i \}_{i=1}^\infty\).
\end{enumerate}
The distribution of the total claims \(S\) is said to be a compound distribution. The properties of the compound distribution will be given in Section \protect\hyperlink{sectionCompoundDistribution}{2}. \textbf{Note} The distribution of \(S\) can be derived by using the convolution technique. In general, closed form expressions for the compound distribution do not exist, so we will mainly be concerned with the moments of \(S\). For more details about convolution, see Gray and Pitts (2012).
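To illustrate the convolution technique mentioned in the note above, the following sketch computes \(\Pr(S = r)\) for a compound Poisson distribution whose claim sizes take values on the positive integers, by summing \(\Pr(N = n)\) times the \(n\)-fold convolution of the claim size distribution (the parameter values are illustrative):
\begin{verbatim}
# Sketch: Pr(S = r) for a compound Poisson distribution by direct convolution,
# assuming X takes values on the positive integers (illustrative values below)
lambda <- 1
fx     <- c(0.75, 0.25)      # fx[j] = Pr(X = j)
rmax   <- 10
fS     <- numeric(rmax + 1)  # fS[r + 1] = Pr(S = r)
fS[1]  <- dpois(0, lambda)   # Pr(S = 0) = Pr(N = 0), since X >= 1
conv   <- c(1, numeric(rmax))  # 0-fold convolution: point mass at 0
for (n in 1:rmax) {          # terms with n > rmax contribute nothing to r <= rmax
  newconv <- numeric(rmax + 1)
  for (r in n:rmax) {
    j <- 1:min(r, length(fx))
    newconv[r + 1] <- sum(fx[j] * conv[r - j + 1])
  }
  conv <- newconv
  fS   <- fS + dpois(n, lambda) * conv
}
round(fS, 5)
\end{verbatim}
Such direct calculations quickly become tedious, which is why we mainly work with the moments of \(S\) and, later, with Panjer's recursion.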
\hypertarget{conditional-expectation-and-variance-formulas}{%
\section{Conditional expectation and variance formulas}\label{conditional-expectation-and-variance-formulas}}
Some useful properties of conditional expectation and conditional variance are given below. The conditional expectation formula is \[E[ E[X|Y ]] = E[X].\] The conditional variance of \(X\) given \(Y\) is defined to be \[\begin{split} Var[X|Y] &= Var[Z] \text{ where } Z = X|Y \\ &= E[(Z - E[Z])^2] = E[Z^2] - (E[Z])^2 \\ &= E[(X - E[X|Y])^2 | Y] \\ &= E[X^2| Y] - (E[X|Y])^2. \\ \end{split}\] The conditional variance formula is \begin{equation} \label{eq:exampleVariance} Var[X] = E[ Var[X|Y ]] + Var[E[X|Y ]]. \end{equation}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-39}{}\label{exm:unlabeled-div-39} Show that \[Var[X] = E[ Var[X|Y ]] + Var[E[X|Y ]].\]
\end{example}
\textbf{Solution:} Consider the terms on the right-hand side of \eqref{eq:exampleVariance}. We have \[\begin{aligned} E[Var[X|Y]] &= E\left[ E[X^2|Y] - (E[X|Y])^2 \right] \\ &= E[X^2] - E\left[(E[X|Y])^2 \right], \end{aligned}\] and \[\begin{aligned} Var[E[X|Y ]] &= Var[Z] \text{ where } Z = E[X|Y ] \\ &= E[(E[X|Y ])^2] - (E[E[X|Y ]])^2 \\ &= E[(E[X|Y ])^2] - (E[X])^2. \\ \end{aligned}\] Adding both terms gives the required result.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-40}{}\label{exm:unlabeled-div-40} There are three coloured boxes (Red, Green and Blue), and each box contains two bags. The bags of the Red box contain 1 and 2 (in units of THB) respectively, those of the Green box contain 1 and 5, and those of the Blue box contain 1 and 10. A box is chosen at random in such a way that \(\Pr(\text{Red}) = \Pr(\text{Green}) = \Pr(\text{Blue}) = 1/3\). A fair coin is tossed to determine which bag is chosen from the selected box. Let \(X\) be the value of the contents of the chosen bag.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Find the distribution of \(X\).
\item Find \(E[X]\) and \(Var[X]\).
\item Use the conditional expectation and conditional variance formulas to verify your results.
\end{enumerate}
\end{example}
\textbf{Solution:} 1. The distribution of \(X\) can be obtained by using the law of total probability: for example \[\begin{aligned} P(X= 1) &= P(X = 1 , R) + P(X = 1 , G) + P(X = 1 , B) \\ &= P(X = 1 | R) \cdot P(R) + P(X = 1 | G) \cdot P(G) + P(X = 1 | B) \cdot P(B) \\ &= \frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{2}. \end{aligned}\] Similarly, we have \[P(X = 1 ) = \frac{1}{2}, \quad P(X = 2 ) = P(X = 5 ) = P(X = 10 ) = \frac{1}{6}.\]
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item It follows that \[E[X] = \frac{10}{3}, \quad Var[X] = \frac{98}{9}.\]
\item We first calculate \[\begin{aligned} E[X|R] &= \frac{1}{2}\cdot(1 + 2) = \frac{3}{2} \\ E[X|G] &= \frac{1}{2}\cdot(1 + 5) = 3 \\ E[X|B] &= \frac{1}{2}\cdot(1 + 10) = \frac{11}{2}. \\ \end{aligned}\] We have \[\begin{aligned} E[X] &= E[X | R] \cdot P(R) + E[X | G] \cdot P(G) + E[X | B] \cdot P(B) \\ &= \frac{1}{3}\cdot(\frac{3}{2} + 3 + \frac{11}{2}) = \frac{10}{3}. \end{aligned}\] Similarly, \(Var[X|R] = \frac{1}{4}\), \(Var[X|G] = 4\) and \(Var[X|B] = \frac{81}{4}\), so that \[E[Var[X|Y]] = \frac{1}{3}\left(\frac{1}{4} + 4 + \frac{81}{4}\right) = \frac{49}{6}, \quad Var[E[X|Y]] = \frac{1}{3}\left(\frac{9}{4} + 9 + \frac{121}{4}\right) - \left(\frac{10}{3}\right)^2 = \frac{49}{18},\] and hence \(E[Var[X|Y]] + Var[E[X|Y]] = \frac{49}{6} + \frac{49}{18} = \frac{98}{9} = Var[X]\), as required.
\end{enumerate}
\hypertarget{sectionCompoundDistribution}{%
\section{\texorpdfstring{The moments of a compound distribution \(S\)}{The moments of a compound distribution S}}\label{sectionCompoundDistribution}}
The moments and moment generating function of \(S\) can be easily derived from the conditional expectation formula.
\hypertarget{the-mean-of-s}{%
\subsection{\texorpdfstring{The mean of \(S\)}{The mean of S}}\label{the-mean-of-s}}
Let \(m_k\) be the \(k\)th moment of \(X_1\), i.e.~\(E[X_1^k] = m_k\). Conditional on \(N = n\), we have \[E[S | N = n] = E[ \sum_{i=1}^n X_i] = \sum_{i=1}^n E[ X_i] = n E[ X_1] = n \cdot m_1.\] Hence, \(E[S | N] = N m_1\) and \[E[S] = E[E[S | N]] = E[N m_1] = E[N] m_1 = E[N] \cdot E[X_1].\] It is no surprise that the mean of the total claims is the product of the mean number of claims and the mean claim size.
\hypertarget{the-variance-of-s}{%
\subsection{\texorpdfstring{The variance of \(S\)}{The variance of S}}\label{the-variance-of-s}}
Using the fact that \(\{X_i \}_{i=1}^\infty\) are independent, we have \[Var[S | N = n] = Var[ \sum_{i=1}^n X_i] = \sum_{i=1}^n Var[ X_i] = n Var[ X_1] =n (m_2 - m_1^2),\] and \(Var[S | N] = N (m_2 - m_1^2).\) It follows that \[\begin{aligned} Var[S] &= E[ Var[S | N] ] + Var[ E[S | N] ] \\ &= E[ N (m_2 - m_1^2) ] + Var[ N m_1 ] \\ &= E[ N ] (m_2 - m_1^2) + Var[ N ] m_1 ^2. \end{aligned}\]
\begin{example}
\protect\hypertarget{exm:unlabeled-div-41}{}\label{exm:unlabeled-div-41} Show that \(M_S(t) = M_N(\log(M_X(t)))\).
\end{example}
\textbf{Solution:} First, consider the following conditional expectation: \[\begin{aligned} E\left [e^{t S} | N = n \right] &= E\left[e^{t (X_1 + X_2 + \cdots + X_n)}\right] \\ &= E\left[e^{t X_1}\right] \cdot E\left[e^{t X_2}\right] \cdots E\left[e^{t X_n}\right] \text{, since } X_1, X_2, \ldots, X_n \text{ are independent} \\ &= (M_X(t))^n.\end{aligned}\] Hence \(E \left [e^{t S} | N \right] = (M_X(t))^N.\) From the definition of the moment generating function, \[\begin{aligned} M_S(t) &= E[e^{t S}] \\ &= E \left[ E[e^{t S} | N] \right ] \\ &= E \left[ (M_X(t))^N \right ]\\ &= E \left[ Exp( N \cdot \log (M_X(t)) ) \right] \\ &= M_N(\log(M_X(t))) \quad (\text{since } M_N(s) = E[e^{sN}] ).\end{aligned}\]
\hypertarget{special-compound-distributions}{%
\section{Special compound distributions}\label{special-compound-distributions}}
\hypertarget{compound-poisson-distributions}{%
\subsection{Compound Poisson distributions}\label{compound-poisson-distributions}}
Let \(N\) have a Poisson distribution with parameter \(\lambda\), i.e. \(N \sim Poisson(\lambda)\), and let \(\{X_i \}_{i=1}^\infty\) be independent and identically distributed with distribution function \(F_X\). Then \(S = X_1 + \ldots + X_N\) is said to have a compound Poisson distribution, denoted by \(\mathcal{CP}(\lambda,F_X)\). \textbf{Note} The same terminology is defined similarly for other distributions, e.g.~if \(N\) has a negative binomial distribution, then \(S\) is said to have a compound negative binomial distribution.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-42}{}\label{exm:unlabeled-div-42} Let \(S \sim \mathcal{CP}(\lambda,F_X)\). Show that
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \(E[S] = \lambda m_1\),
\item \(Var[S] = \lambda m_2\),
\item \(M_S(t) = Exp{(\lambda(M_X(t) - 1))}.\)
\item The third central moment \(E[(S- E[S])^3] = \lambda m_3\), and hence \[Sk[S] = \frac{\lambda m_3}{(\lambda m_2)^{3/2}},\]
\end{enumerate}
where \(m_k\) is the \(k\)th moment of \(X_1\).
\end{example}
\textbf{Solution:} 1.
\(E[S] = E[N] \cdot E[X] = \lambda m_1\), \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item \(Var[S] = E[ N ] (m_2 - m_1^2) + Var[ N ] m_1 ^2 = \lambda(m_2 - m_1^2) + \lambda m_1^2 = \lambda m_2\), \item From \[\begin{aligned} M_S(t) &= M_N(\log(M_X(t))) \\ &= Exp\left( \lambda \left( e^{\log(M_X(t))} - 1 \right) \right), \text{ since } M_N(t) = Exp(\lambda(e^t - 1)) \\ &= Exp{(\lambda(M_X(t) - 1))}. \end{aligned}\] \item The third central moment \(E[(S- E[S])^3] = \lambda m_3\), and hence \[Sk[S] = \frac{\lambda m_3}{(\lambda m_2)^{3/2}}.\] In particular, we have \[\begin{aligned} E[(N- E[N])^3] &= E\left [ N^3 - 3 N^2 \cdot E[N] + 3 N \cdot (E[N])^2 - (E[N])^3 \right] \\ &= E[N^3] - 3 E[N^2] \cdot E[N] + 2 (E[N])^3 \\ &= M_N'''(0) - 3 M_N''(0) \cdot M_N'(0) + 2 (M_N'(0))^3\end{aligned}\] For \(N \sim Poisson(\lambda)\), \(M_N(t) = Exp(\lambda(e^t - 1)).\) By differentiating \(M_N(t)\) and evaluating at \(t = 0\), we can show that \[M'(0) = \lambda, \quad M''(0) = \lambda (1 + \lambda), \quad M'''(0) = \lambda (1 + 3\lambda + \lambda^2).\] Hence, \(E[(N- E[N])^3] = \lambda.\) Similarly, \[\begin{aligned} E[(S- E[S])^3] &= E[S^3] - 3 E[S^2] \cdot E[S] + 2 (E[S])^3 \\ \end{aligned}\] In addition, \(M_S(t) = Exp{(\lambda(M_X(t) - 1))}.\) By differentiating \(M_S(t)\) we can show that \[\begin{aligned} M'''_S(t) &= \lambda M'''_X(t) M_S(t) + 2 \lambda M''_X(t) M'_S(t) + \lambda M'_X(t) M''_S(t).\\ \end{aligned}\] Evaluating \(M'''_S(t)\) at \(t = 0\) results in \[\begin{aligned} M'''_S(0) &= E[S^3] = \lambda m_3 + 3 E[S] \cdot E[S^2] - 2( E[S])^3,\end{aligned}\] which gives \[\begin{aligned} E[(S- E[S])^3] &= E[S^3] - 3 E[S^2] \cdot E[S] + 2 (E[S])^3 \\ &= \lambda m_3.\end{aligned}\] \end{enumerate} \begin{example} \protect\hypertarget{exm:unlabeled-div-43}{}\label{exm:unlabeled-div-43} Let \(S\) be the aggregate annual claims for a risk where \(S \sim \mathcal{CP}(10,F_X)\) and the individual claim amounts have a \(Pa(4,1)\) distribution. Calculate \(E[S], Var[S]\) and \(Sk[S]\). 
\end{example}
\textbf{Solution:} Since \(X \sim Pa(4,1)\) with \(\alpha = 4\) and \(\lambda = 1\), we have \[\begin{aligned} E[X^r] &= \frac{\Gamma(\alpha - r) \cdot \Gamma(1 + r) \cdot \lambda^r }{\Gamma(\alpha)}\\ E[X] &= \frac{\lambda}{\alpha - 1} = \frac{1}{4-1} = \frac{1}{3}\\ E[X^2] &= \frac{\Gamma(2) \cdot \Gamma(3) \cdot \lambda^2 }{\Gamma(4)} = \frac{1}{3}\\ E[X^3] &= \frac{\Gamma(1) \cdot \Gamma(4) \cdot \lambda^3 }{\Gamma(4)} = 1.\\ \end{aligned}\] We have \[\begin{aligned} E[S] &= 10 \, E[X] = \frac{10}{3} \\ Var[S] &= 10 \, E[X^2] = \frac{10}{3} \\ Sk[S] &= \frac{10 \, E[X^3]}{\left(10 \, E[X^2]\right)^{3/2}} = \frac{10}{(10/3)^{3/2}} = 1.6432.\\ \end{aligned}\]\\ In what follows, we will use R to simulate \(n\) observations from a compound Poisson distribution, where the Poisson parameter is \(\lambda\) and the individual claims have a Pareto distribution \(Pa(\alpha,\beta)\), i.e.~\(S \sim \mathcal{CP}(\lambda, Pa(\alpha,\beta))\), matching the example above.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Simulate n observations from a CP(lambda,FX) distribution}
\CommentTok{\# Assumptions:}
\CommentTok{\# N \textasciitilde{} Poisson(lambda)}
\CommentTok{\# X \textasciitilde{} Pa(alpha,beta)}
\FunctionTok{library}\NormalTok{(actuar)}
\NormalTok{n }\OtherTok{\textless{}{-}} \DecValTok{10000}
\NormalTok{lambda }\OtherTok{\textless{}{-}} \DecValTok{10}
\NormalTok{alpha }\OtherTok{\textless{}{-}} \DecValTok{4}
\NormalTok{beta }\OtherTok{\textless{}{-}} \DecValTok{1}
\NormalTok{totalClaims }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\DecValTok{0}\NormalTok{,n)}
\NormalTok{numclaims }\OtherTok{\textless{}{-}} \FunctionTok{rpois}\NormalTok{(n,lambda)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n)}
\NormalTok{ totalClaims[i] }\OtherTok{\textless{}{-}} \FunctionTok{sum}\NormalTok{(}\FunctionTok{rpareto}\NormalTok{(numclaims[i], }\AttributeTok{shape =}\NormalTok{alpha, }\AttributeTok{scale =}\NormalTok{ beta))}
\FunctionTok{hist}\NormalTok{(totalClaims)}
\end{Highlighting}
\end{Shaded}
\includegraphics{SCMA470Bookdownproj_files/figure-latex/simulationCP-1.pdf}
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{mean}\NormalTok{(totalClaims)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 3.338976
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{var}\NormalTok{(totalClaims)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 3.29011
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(moments)}
\FunctionTok{skewness}\NormalTok{(totalClaims)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 1.345385
\end{verbatim}
\textbf{Note} An important property of independent, but not necessarily identically distributed, compound Poisson random variables is that the sum of a fixed number of them is also a compound Poisson random variable.
\begin{example}
\protect\hypertarget{exm:Additivity}{}\label{exm:Additivity} Let \(S_1, \ldots, S_n\) be independent compound Poisson random variables, with parameters \(\lambda_i\) and \(F_i\). Then \(S = \sum_{i=1}^n S_i\) has a compound Poisson distribution with Poisson parameter \[\lambda = \sum_{i=1}^n \lambda_i,\] and claim size distribution function \[F = \frac{1}{\lambda}\sum_{i=1}^n \lambda_i F_i.\]
\end{example}
\textbf{Solution:} Exercise. \textbf{Note} The compound Poisson distribution is the one most often used in practice. It possesses the additivity of independent compound Poisson distributions (as shown in Example \ref{exm:Additivity}), and the expressions of the first three moments are very simple.
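One way to see the additivity in Example \ref{exm:Additivity} is via moment generating functions (a sketch, assuming the \(S_i\) are independent and writing \(M_i\) for the moment generating function of \(F_i\)):
\[M_S(t) = \prod_{i=1}^n M_{S_i}(t) = \prod_{i=1}^n Exp\big(\lambda_i (M_i(t) - 1)\big) = Exp\left( \lambda \left( \frac{1}{\lambda} \sum_{i=1}^n \lambda_i M_i(t) - 1 \right) \right),\]
which is the moment generating function of a compound Poisson distribution with Poisson parameter \(\lambda = \sum_{i=1}^n \lambda_i\) and claim size distribution \(F = \frac{1}{\lambda}\sum_{i=1}^n \lambda_i F_i\), since the moment generating function of \(F\) is exactly \(\frac{1}{\lambda}\sum_{i=1}^n \lambda_i M_i(t)\).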
\hypertarget{compound-negative-binomial-distributions}{%
\subsection{Compound negative binomial distributions}\label{compound-negative-binomial-distributions}}
A useful discrete random variable that can be used for modelling the distribution of claim numbers is the negative binomial distribution. A random variable \(N\) has a negative binomial distribution with parameters \(k\) and \(p\), denoted by \(N \sim NB(k,p)\), if its probability mass function is given by \[f_N(n) = \Pr(N = n) = \frac{\Gamma(k+n)}{\Gamma(n+1)\Gamma(k)} p^k (1- p)^n, \quad n = 0,1,2,\ldots.\] It can be interpreted as the probability of getting \(n\) failures before the \(k\)th success occurs in a sequence of independent Bernoulli trials with probability of success \(p\).
\begin{example}
\protect\hypertarget{exm:unlabeled-div-44}{}\label{exm:unlabeled-div-44} Let \(N \sim NB(k,p)\). Show that the mean, variance and moment generating function of the compound negative binomial distribution, denoted by \(\mathcal{CNB}(k,p,F_X)\), are as follows:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \(E[S] = \frac{k q}{p} m_1\),
\item \(Var[S] = \frac{k q}{p^2} (p m_2 + q m_1^2)\),
\item \(M_S(t) = \left( \frac{ p }{ 1 - q M_X(t) } \right)^k,\)
\end{enumerate}
where \(m_k\) is the \(k\)th moment of \(X_1\) and \(q = 1-p\).
\end{example}
\textbf{Solution:} The results follow from the properties of the negative binomial distribution \(N \sim NB(k,p)\): \[E[N] = \frac{kq}{p}, \quad Var[N] = \frac{kq}{p^2},\] and the moments of a compound distribution \(S\) derived in Section \ref{sectionCompoundDistribution}. \textbf{Notes} 1. The negative binomial distribution is an alternative to the Poisson distribution for \(N\), in the sense that it allows for any value of \(N = 0, 1, 2, \ldots\), unlike the binomial distribution which has an upper limit. One advantage that the negative binomial distribution has over the Poisson distribution is that its variance exceeds its mean. These two quantities are equal for the Poisson distribution. Thus, the negative binomial distribution may give a better fit to a data set which has a sample variance in excess of the sample mean.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item The compound negative binomial distribution is appropriate for modelling the heterogeneity of the numbers of claims occurring for different risks. In particular, suppose that for each policy, the number of claims in a year has a Poisson distribution \(N | \lambda \sim Poisson(\lambda)\), and that the variation in \(\lambda\) across the portfolio can be modelled using a gamma distribution \(\mathcal{G}(\alpha, \beta)\). Then the number of claims in the year for a policy chosen at random from the portfolio has a negative binomial distribution.
\end{enumerate}
\hypertarget{misture-distributions}{%
\subsubsection{Mixture distributions}\label{misture-distributions}}
Suppose we model a policyholder's claim number \(N\) using a conditional distribution \(N | \lambda\), where \(\lambda\) can be thought of as a ``risk parameter'' for that policyholder. Policyholders represent a variety of risks and have different risk parameters, and we model the variation across policyholders by regarding the various \(\lambda\)s as being independent realisations of a random variable with known probability distribution. This gives the joint density, which we can write as \(f_{N,\lambda}(k, \lambda) = f_\lambda(\lambda) f_{N|\lambda}(k | \lambda)\).
This enables us to allow for variability in the risks across a portfolio; that is, to model the heterogeneity of the numbers of claims occurring for different risks.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-45}{}\label{exm:unlabeled-div-45} A portfolio consists of a large number of individual policies. For each policy, the number of claims in a year has a Poisson distribution \(N | \lambda \sim Poisson(\lambda)\). Let us suppose that the variation in \(\lambda\) across the portfolio of risks can be modelled using a gamma \(\mathcal{G}(\alpha,\beta)\) distribution with known parameters, and let us use this to average across the risks. We are considering a \textbf{mixture} of Poissons where the mixing distribution is gamma. The resulting distribution is known as a \textbf{mixture distribution}. Derive the probability mass function of the mixture distribution.
\end{example}
\textbf{Solution:} For \(k = 0,1,2,\ldots\), we have \[ \begin{aligned} \Pr(N = k) &= \int f_\lambda(\lambda) \Pr(N = k | \lambda) \, d\lambda \\ &= \int_{0}^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha -1} e^{-\beta \lambda} e^{-\lambda} \frac{\lambda^k}{k!} \, d\lambda \\ &= \frac{\Gamma(\alpha + k)}{\Gamma(\alpha) \Gamma(1 + k)} \frac{\beta^\alpha}{(\beta+1)^{\alpha+k}} \times \int_0^\infty h(\lambda) \, d\lambda \end{aligned} \] where \(h(\lambda)\) is the probability density function of \(\lambda \sim \mathcal{G}(\alpha + k, \beta + 1)\). Hence, \[ \Pr(N =k) = \frac{\Gamma(\alpha + k)}{\Gamma(\alpha) \Gamma(1 + k)} \left(\frac{\beta}{\beta+1}\right)^\alpha \left(\frac{1}{\beta+1}\right)^k, \quad k = 0,1,2,\ldots,\] which is the probability mass function of a \(\mathcal{NB}(\alpha, \beta/(\beta + 1))\) distribution. This provides an illuminating view of the negative binomial distribution -- it arises as a mixture of Poissons where the mixing distribution is gamma.
\hypertarget{an-example-in-r}{%
\subsubsection{An example in R}\label{an-example-in-r}}
This example illustrates the concept of mixture distributions as it applies to models for claim numbers. Suppose we model a policyholder's claim number \(N\) using a conditional distribution \(N | \lambda\), where \(\lambda\) can be thought of as a \textbf{risk parameter} for that policyholder. Policyholders represent a variety of risks and have different risk parameters, and we model the variation across policyholders by regarding the various \(\lambda\)s as being independent realisations of a random variable with known probability distribution. The following R code produces the required \(n\) simulated values from this mixture distribution, where \(N | \lambda \sim Poisson(\lambda)\) with mixing distribution \(\mathcal{G}(\alpha, \beta)\), i.e.~\(\lambda \sim \mathcal{G}(\alpha, \beta)\).
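A minimal sketch, using illustrative parameter values (\(n = 5000\), \(\alpha = 4\), \(\beta = 1/3\)): each policyholder's \(\lambda\) is drawn from the gamma mixing distribution and the claim number is then drawn from the corresponding Poisson distribution.
\begin{verbatim}
# Sketch: simulate n claim numbers from the Poisson-gamma mixture
# (illustrative parameter values)
set.seed(5353)
n     <- 5000
alpha <- 4
beta  <- 1/3
lambda    <- rgamma(n, shape = alpha, rate = beta)  # risk parameters
numclaims <- rpois(n, lambda)                       # N | lambda ~ Poisson(lambda)
hist(numclaims)
mean(numclaims); var(numclaims)

# Compare with the NB(alpha, p) mixture, p = beta/(beta + 1):
p <- beta/(beta + 1)
alpha * (1 - p)/p     # theoretical mean
alpha * (1 - p)/p^2   # theoretical variance
\end{verbatim}
The sample mean and variance of \texttt{numclaims} should be close to the theoretical values of the \(\mathcal{NB}(\alpha, \beta/(\beta+1))\) distribution derived above, with the variance exceeding the mean.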
\hypertarget{compound-binomial-distributions}{%
\subsection{Compound binomial distributions}\label{compound-binomial-distributions}}
A compound binomial distribution can be used to model a portfolio of policies, each of which can give rise to at most one claim.
\begin{example}
\protect\hypertarget{exm:unlabeled-div-46}{}\label{exm:unlabeled-div-46} Consider a portfolio of \(n\) independent and identical policies where there is at most one claim on each policy in a year (e.g.~life insurance). Let \(p\) be the probability that a claim occurs. Explain why the aggregate sum \(S\) in this portfolio has a compound binomial distribution, denoted by \(\mathcal{CB}(n,p,F_X)\). Derive the mean, variance and moment generating function of \(S\).
\end{example}
\textbf{Solution:} Since the \(n\) policies (lives) are independent, each with probability \(p\) that a claim occurs, the number \(N\) of claims on the portfolio in one year has a binomial distribution, i.e.~\(N \sim \text{bi}(n,p)\). If the sizes of the claims are i.i.d. random variables, independent of \(N\), then the total amount \(S\) claimed on this portfolio in one year has a compound binomial distribution. The mean, variance and the moment generating function of \(S\) are as follows: \[\begin{aligned} E[S] &= n p m_1, \\ Var[S] &= np m_2 - n p^2 m_1^2, \\ M_S(t) &= \left( q + p M_X(t) \right)^n,\end{aligned}\] where \(m_k\) is the \(k\)th moment of \(X_1\) and \(q = 1-p\).
\hypertarget{the-effect-of-reinsurance}{%
\section{The effect of reinsurance}\label{the-effect-of-reinsurance}}
The effect of reinsurance arrangements on an aggregate claims distribution will be presented. Let \(S\) denote the total aggregate claims from a risk in a given time, and let \(S_I\) and \(S_R\) denote the insurer's and reinsurer's aggregate claims, respectively. It follows that \[S = S_I + S_R.\]
\hypertarget{proportional-reinsurance-1}{%
\subsection{Proportional reinsurance}\label{proportional-reinsurance-1}}
Recall that under a proportional reinsurance arrangement, a fixed proportion \(\alpha\) of each claim is paid by the direct insurer and the remainder of the claim is paid by the reinsurer. It follows that \[S_I = \sum_{i= 1}^N \alpha X_i = \alpha S\] and \[S_R = \sum_{i= 1}^N (1- \alpha) X_i = (1- \alpha) S,\] where \(X_i\) is the amount of the \(i\)th claim. \textbf{Notes}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Both the direct insurer and the reinsurer are involved in paying each claim.
\item Both have unlimited liability unless a cap on the claim amount is arranged.
\end{enumerate}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-47}{}\label{exm:unlabeled-div-47} Aggregate claims from a risk in a given time have a compound Poisson distribution with Poisson parameter \(\lambda = 10\) and an individual claim amount distribution that is a Pareto distribution, \(Pa(4,1)\). The insurer has effected proportional reinsurance with proportion retained \(\alpha = 0.8\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Find the distributions of \(S_I\) and \(S_R\) and their means and variances.
\item Compare the variances \(Var[S_I] +Var[S_R]\) and \(Var[S]\). Comment on the results obtained.
\end{enumerate}
\end{example}
\textbf{Solution:} 1. We have \[\begin{aligned} S_I &= \sum_{i=1}^N \left( \alpha X_i \right) = \alpha \sum_{i=1}^N X_i = \alpha \cdot S, \\ S_R &= \sum_{i=1}^N \left( (1- \alpha) X_i \right) = (1- \alpha) \sum_{i=1}^N X_i = (1- \alpha) \cdot S,\end{aligned}\] since both insurer and reinsurer are involved in paying each claim, i.e.~\(Y_i = \alpha X_i\) and \(Z_i = (1- \alpha) X_i\). It follows that \(S_I \sim \mathcal{CP}(10,F_Y)\) and \(S_R \sim \mathcal{CP}(10,F_Z)\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item We can show that if \(X \sim Pa(\beta,\lambda)\), then \(W = k X \sim Pa(\beta,k \lambda)\). For \(X \sim Pa(4,1)\) with \(\beta = 4\) and \(\lambda = 1\), we have \(Y_i \sim Pa(\beta, \alpha \cdot \lambda) = Pa(4, 0.8)\) and \[\begin{aligned} E[S_I] &= 10 \cdot E[Y_i] = 10 \cdot \frac{\alpha \cdot \lambda}{\beta - 1} \\ & = \frac{8}{3}, \\ Var[S_I] &=10 \cdot E[Y_i^2] = 10 \cdot \frac{\Gamma(\beta - 2) \cdot \Gamma(1 + 2) \cdot (\alpha \cdot \lambda)^2 }{\Gamma(\beta)} \\ &= 10 \cdot \frac{\Gamma(2) \cdot \Gamma(3) \cdot (\alpha \cdot \lambda)^2 }{\Gamma(4)} = 10 \cdot \frac{2!}{3!} \cdot (0.8)^2 \\ &= \frac{32}{15}. \end{aligned}\] Alternatively, we can calculate these by using the properties of expectation and variance as follows: \[\begin{aligned} E[S_I] &= E[\alpha S] = \alpha\cdot E[S] = \alpha\cdot \lambda \cdot E[X] = 10 \cdot 0.8 \cdot \frac{1}{3} = \frac{8}{3}, \\ Var[S_I] &= Var[\alpha S] = \alpha^2 \cdot Var[S] \\ &= \alpha^2 \cdot \lambda \cdot E[X^2] = \frac{32}{15} .\end{aligned}\]
\end{enumerate}
Similarly, \[\begin{aligned} E[S_R] &= E[(1 - \alpha) S] = (1 - \alpha)\cdot E[S] = \frac{2}{3}, \\ Var[S_R] &= Var[(1 - \alpha) S] = (1 - \alpha)^2 \cdot Var[S] = \frac{2}{15} .\end{aligned}\] Note that \(E[S_I] + E[S_R] = E[S]\), while \(Var[S_I] + Var[S_R] = \frac{34}{15} < Var[S] = \frac{10}{3}.\)
\hypertarget{excess-of-loss-reinsurance-1}{%
\subsection{Excess of loss reinsurance}\label{excess-of-loss-reinsurance-1}}
Recall that under an excess of loss reinsurance arrangement, the direct insurer sets a retention level \(M >0\). For a claim \(X\),
\begin{itemize}
\item the insurance company pays any claim in full if \(X \le M\); and
\item the reinsurer (or reinsurance company) pays the remaining amount of \(X - M\) if \(X > M\).
\end{itemize}
It follows that \begin{equation} \label{eq:eqnSI} S_I = Y_1 + Y_2 + \ldots + Y_N = \sum_{i= 1}^N \min(X_i,M) \end{equation} and \begin{equation} \label{eq:eqnSR} S_R = Z_1 + Z_2 + \ldots + Z_N = \sum_{i= 1}^N \max(0,X_i - M), \end{equation} where \(X_i\) is the amount of the \(i\)th claim. When \(N = 0\), we set \(S_I = 0\) and \(S_R = 0\). \textbf{Note} \(S_R\) can equal 0 even if \(N > 0\). This occurs when none of the claims exceeds \(M\), so that the insurer pays all claims in full. As discussed in the previous section, the reinsurer is only involved in claims which exceed the retention level (claims such that \(X > M\)). Such claims are called \textbf{reinsurance claims}. Taking into account only the non-zero claims, we can rewrite \(S_R\) as follows.
Let \(N_R\) be the number of reinsurance (non-zero) claims for the reinsurer and \(W_i\) be the amount of the \(i\)th non-zero payment by the reinsurer. The aggregate claim amount paid by the reinsurer can be written as \[S_R = \sum_{i= 1}^{N_R} W_i.\]
\begin{example}
\protect\hypertarget{exm:unlabeled-div-48}{}\label{exm:unlabeled-div-48} By using the probability generating function, show that if \(N \sim Poisson(\lambda)\), then \(N_R \sim Poisson(\lambda \pi_M)\), where \(\pi_M = \Pr(X_j > M)\).
\end{example}
\textbf{Solution:} Define the indicator random variables \(\{I_j\}_{j=1}^\infty\), where \[\begin{aligned} I_j = \begin{cases} 1 &\text{if } X_j > M\\ 0 &\text{if } X_j \le M. \end{cases}\end{aligned}\] Therefore, \[N_R = \sum_{j= 1}^{N} I_j.\] The variable \(N_R\) has a compound distribution with probability generating function \[P_{N_R}(r) = P_N[P_I(r)],\] where \(P_I\) is the probability generating function of the indicator random variable. It can be shown that \[P_I(r) = 1 - \pi_M + \pi_M r,\] where \(\pi_M = \Pr(I_j = 1) = \Pr(X_j > M) = 1 - F(M)\). Hence, using \(P_N(r) = Exp(\lambda(r-1))\) for the \(Poisson(\lambda)\) distribution, \[P_{N_R}(r) = P_N(1 - \pi_M + \pi_M r) = Exp(\lambda \pi_M (r - 1)),\] which is the probability generating function of a \(Poisson(\lambda \pi_M)\) distribution. \textbf{Note} In the above example, one can also derive the distribution of \(N_R\) by using the moment generating function: \[M_{N_R}(t) = M_N(\log M_I(t)),\] where \(M_N\) and \(M_I\) are the moment generating functions of \(N\) and \(I\). Note also that \[M_I(t) = 1 - \pi_M + \pi_M Exp(t).\]
\hypertarget{compound-poisson-distributions-under-excess-of-loss-reinsurance}{%
\subsection{Compound Poisson distributions under excess of loss reinsurance}\label{compound-poisson-distributions-under-excess-of-loss-reinsurance}}
Assume that the aggregate claim amount \(S\) has a compound Poisson distribution, \(S \sim \mathcal{CP}(\lambda,F_X)\). Under excess of loss reinsurance with retention level \(M\), it follows from \eqref{eq:eqnSI} and \eqref{eq:eqnSR} that
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \(S_I \sim \mathcal{CP}(\lambda,F_Y),\) where \(f_Y(x) = f_X(x)\) for \(0 < x < M\) and \(\Pr(Y = M) = 1 - F_X(M)\).
\item \(S_R \sim \mathcal{CP}(\lambda,F_Z),\) where \(F_Z(0) = F_X(M)\) and \(f_Z(x) = f_X(x+M), x > 0\).
\item Excluding zero claims, \(S_R \sim \mathcal{CP}(\lambda \,( 1 - F_X(M)) , F_W),\) where \(f_W(x) = \displaystyle{\frac{f_X(x+M)}{1- F_X(M)}}, x > 0\).
\end{enumerate}
\begin{example}
\protect\hypertarget{exm:unlabeled-div-49}{}\label{exm:unlabeled-div-49} Suppose that \(S\) has a compound Poisson distribution with Poisson parameter \(\lambda = 10\) and the claim sizes have the following distribution:
\begin{longtable}[]{@{}lllll@{}}
\toprule
\(x\) & 1 & 2 & 5 & 10 \\
\midrule
\endhead
\(\Pr(X = x)\) & 0.4 & 0.3 & 0.2 & 0.1 \\
\bottomrule
\end{longtable}
The insurer enters into an excess of loss reinsurance contract with retention level \(M = 4\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Show that \(S_I \sim\mathcal{CP}(\lambda,F_Y)\).
\item Show that \(S_R \sim\mathcal{CP}(\lambda,F_Z)\).
\item By excluding zero claims, show that \(S_R\) can also be expressed as \(S_R \sim \mathcal{CP}(\lambda \, p,F_W)\) where \(p = \Pr(X > M)\).
\item Find the mean and variance of the aggregate claim amount for both insurer and reinsurer.
\end{enumerate}
\end{example}
\textbf{Solution:}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item Recall that \(S_I = \sum_{i=1}^N \min\{X_i,4\}.\) The number of claims remains the same, and hence \(N \sim Poisson(10)\).
The distribution of the claim amount paid by the insurer, \(F_Y(x)\), is given by
\end{enumerate}
\begin{longtable}[]{@{}llll@{}}
\toprule
\(x\) & 1 & 2 & 4 \\
\midrule
\endhead
\(\Pr(Y = x)\) & 0.4 & 0.3 & 0.3 \\
\bottomrule
\end{longtable}
Therefore, \(S_I \sim \mathcal{CP}(10,F_Y)\) and \[\begin{aligned} \mathrm{E}[S_I] &= 10 E[Y] = 10(1(0.4) + 2(0.3) + 4(0.3)) = 22, \\ \mathrm{Var}[S_I] &= 10 E[Y^2] = 10(1^2(0.4) + 2^2(0.3) + 4^2(0.3)) = 64. \end{aligned}\]
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item We have \(S_R = \sum_{i=1}^N \max\{0, X_i - 4\}.\) When zero claims are included, \(N_R = N \sim Poisson(10)\). The distribution of the claim amount paid by the reinsurer, \(F_Z(x)\), is given by
\end{enumerate}
\begin{longtable}[]{@{}llll@{}}
\toprule
\(x\) & 0 & 1 & 6 \\
\midrule
\endhead
\(\Pr(Z = x)\) & 0.7 & 0.2 & 0.1 \\
\bottomrule
\end{longtable}
Therefore, \(S_R \sim \mathcal{CP}(10,F_Z)\) and \[\begin{aligned} \mathrm{E}[S_R] &= 10 E[Z] = 8, \\ \mathrm{Var}[S_R] &= 10 E[Z^2] = 38. \end{aligned}\] \textbf{Notes}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item \(\mathrm{E}[S] = 10 \mathrm{E}[X] = 30\) and \(\mathrm{Var}[S]= 10 \mathrm{E}[X^2] = 166.\)
\item \(\mathrm{E}[S_I + S_R] = \mathrm{E}[S]\), and \[\mathrm{Var}[S_I] + \mathrm{Var}[S_R] = 64 + 38 = 102 < 166 = \mathrm{Var}[S].\]
\end{enumerate}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item Consider the reinsurer's position when zero claims are excluded. We define \[ W = Z | Z >0 = X - 4 | X > 4.\] We first compute \(\pi_M\), the proportion of claims which involve the reinsurer, from \[ \pi_M = \mathrm{Pr}(X>4) = \mathrm{Pr}(X = 5) + \mathrm{Pr}(X = 10) = 0.3.\] Recall that \(S_R = \sum_{i=1}^{N_R} W_i.\) We have \(S_R \sim \mathcal{CP}(0.3 \times 10, F_W)\) and the distribution of \(W\), \(F_W(x)\), is given by
\end{enumerate}
\[ \begin{aligned} \mathrm{Pr}(W = 1) &= \mathrm{Pr}(X = 5 | X > 4) = \frac{ \mathrm{Pr}(X = 5 , X > 4) }{\mathrm{Pr}(X > 4)} = \frac{ \mathrm{Pr}(X = 5) }{\mathrm{Pr}(X > 4)} = \frac{2}{3} \\ \mathrm{Pr}(W = 6) &= 1- \mathrm{Pr}(W = 1) = \frac{1}{3}. \end{aligned} \] Hence, \[ \begin{aligned} \mathrm{E}[S_R] &= (10 \times 0.3)(1(2/3) + 6(1/3)) = 8, \\ \mathrm{Var}[S_R] &= (10 \times 0.3)(1^2(2/3) + 6^2(1/3)) = 38. \end{aligned} \]
\begin{example}
\protect\hypertarget{exm:unlabeled-div-50}{}\label{exm:unlabeled-div-50} Suppose that \(S\) has a compound Poisson distribution with Poisson parameter \(\lambda = 40\) and the claim sizes have a Pareto distribution \(Pa(3,4)\). The insurer has an excess of loss reinsurance contract in place with retention level \(M = 2\). Find the mean and variance of the aggregate claim amount for both insurer and reinsurer.
\end{example}
\textbf{Solution:} 1. \textbf{Zero claims included}. Recall that for \(X \sim \mathcal{Pa}(\alpha,\lambda)\), its density function is \[ f_X(x) = \frac{\alpha \lambda^\alpha}{ (x + \lambda)^{\alpha + 1}}.\] We know that \(S_R \sim \mathcal{CP}(40,F_Z)\). Moreover, \[ \begin{aligned} \mathrm{E}[Z] &= 0 F_Z(0) + \int_0^\infty x f_Z(x)\, dx \\ &= \int_0^\infty x f_X(x + M)\, dx \\ &= \int_0^\infty \frac{x \cdot 3 \cdot 4^3}{(x + 2 + 4)^{3+1}} \, dx\\ &= \frac{4^3}{6^3} \int_0^\infty \frac{x \cdot 3 \cdot 6^3}{(x + 6)^{3+1}} \, dx \\ &= \frac{4^3}{6^3} \cdot \frac{6}{3-1} = 0.88\ldots .\\ \end{aligned} \] Note that the last integral above is the mean of \(\mathcal{Pa}(3,6)\), which is equal to \(6/(3-1)\).
For \(\mathrm{E}[Z^2]\), we proceed as follows: \[ \begin{aligned} \mathrm{E}[Z^2] &= 0^2 F_Z(0) + \int_0^\infty x^2 f_Z(x)\, dx \\ &= \int_0^\infty x^2 f_X(x + M)\, dx \\ &= \int_0^\infty \frac{x^2 \cdot 3 \cdot 4^3}{(x + 2 + 4)^{3+1}} \, dx\\ &= \frac{4^3}{6^3} \int_0^\infty \frac{x^2 \cdot 3 \cdot 6^3}{(x + 6)^{3+1}} \, dx \\ &= \frac{4^3}{6^3} 6^2 = 10.66\ldots .\\ \end{aligned} \] Note that the last integral above is the second moment about the origin of \(\mathcal{Pa}(3,6)\), which is equal to \([6^2 \cdot\Gamma(3-2)\Gamma(1+2)] /\Gamma(3) = 6^2\). Therefore, \[\begin{aligned} \mathrm{E}[S_R] &= \lambda E[Z] = 320/9, \\ \mathrm{Var}[S_R] &= \lambda E[Z^2] = 1280/3. \end{aligned}\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item \textbf{Zero claims excluded} We define \[ W = Z | Z >0 = X - M | X > M.\] \end{enumerate} \[ \begin{aligned} \mathrm{E}[W] &= \int_0^\infty x f_W(x)\, dx \\ &= \int_0^\infty \frac{x f_X(x + 2)}{(1 - F_X(2))} \, dx \\ &= \frac{1}{(1 - F_X(2))} \int_0^\infty x f_X(x + 2) \, dx \\ &= \frac{1}{(1 - F_X(2))}\cdot \mathrm{E}[Z]. \end{aligned} \] It follows that \[ \mathrm{E}[S_R] = \lambda\cdot \mathrm{Pr}(X > M) \cdot \mathrm{E}[W] = 40 ( 1 - F_X(2)) \mathrm{E}[W] = 40\mathrm{E}[Z] = 320/9.\] Similarly, one can show that \[ \begin{aligned} \mathrm{E}[W^2] &= \int_0^\infty x^2 f_W(x)\, dx \\ &= \frac{1}{(1 - F_X(2))}\cdot \mathrm{E}[Z^2]. \end{aligned} \] This results in \[\mathrm{Var}[S_R] = \lambda \cdot \mathrm{Pr}(X > M) \cdot \mathrm{E}[W^2] = 40 ( 1 - F_X(2)) \mathrm{E}[W^2] = 40\mathrm{E}[Z^2] = 1280/3.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Note that \(S = S_I + S_R\) and \[ \mathrm{E}[S] = \lambda \mathrm{E}[X] = 40 \frac{4}{3-1} = 80. \] Therefore,\\ \[ \mathrm{E}[S_I] = 80 - \frac{320}{9} = \frac{400}{9}. \] \end{enumerate} \hypertarget{approximation-of-the-collective-risk-model}{% \section{Approximation of the collective risk model}\label{approximation-of-the-collective-risk-model}} \hypertarget{the-normal-approximation}{% \subsection{The normal approximation}\label{the-normal-approximation}} According to the Central Limit Theorem, if the mean number of claims is large, then the distribution of aggregate claims \(S\) can be approximated by a normal distribution, i.e.~\(S \sim \mathcal{N}(\mathrm{E}[S], \mathrm{Var}[S])\). \textbf{Notes} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The normal approximation may not provide a good approximation to the distribution of \(S\) because the true distribution of \(S\) is skew. However, the normal approximation is symmetric. \item The normal approximation is likely to underestimate tail probabilities which are the most interest quantities of insurers. \end{enumerate} \begin{example} \protect\hypertarget{exm:approximation}{}\label{exm:approximation} Aggregate claims from a risk in a given time have a compound Poisson distribution with Poisson parameter \(\lambda\) and an individual claim amount distribution that is a lognormal distribution with mean 1 and variance 2.5. \end{example} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Approximate the distribution of \(S\) using the normal distribution when (a) \(\lambda = 10\) and (b) \(\lambda = 100\). \item Find \(x\) such that \(\Pr(S \le x) = 0.95\) in both cases. \item Comment on the obtained results. 
\end{enumerate} \hypertarget{the-translated-gamma-approximation}{% \subsection{The translated gamma approximation}\label{the-translated-gamma-approximation}} The translated gamma approximation makes use of the first three moments of \(S\) and provides an improvement of the approximation over the normal approximation. We assume that \(S\) can be approximated by \(Y + k\) where \(Y \sim \mathcal{G}(\alpha, \lambda)\) and \(k\) is a constant. This distribution \(Y + k\) is said to have a translated gamma distribution. By matching the moments of the two distribution, the parameters \(\alpha, \lambda\) and \(k\) can be found from \[\begin{aligned} Sk[S] &= \frac{2}{\sqrt{\alpha}}, \\ Var[S] &= \frac{\alpha}{\lambda^2},\\ E[S] &= \frac{\alpha}{\lambda} + k.\end{aligned}\] \begin{example} \protect\hypertarget{exm:unlabeled-div-51}{}\label{exm:unlabeled-div-51} Show that the parameters \(\alpha, \lambda\) and \(k\) satisfy \[\begin{aligned} \alpha &= \frac{4}{Sk[S]^2}, \\ \lambda &= \sqrt{\frac{\alpha}{Var[S]}} \\ k &= E[S] - \frac{\alpha}{\lambda}.\end{aligned}\] \end{example} \begin{example} \protect\hypertarget{exm:unlabeled-div-52}{}\label{exm:unlabeled-div-52} The aggregate claims \(S\) have the compound Poisson distribution as given in Example \ref{exm:approximation}. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Use the translated gamma approximation to find \(x\) such that \(\Pr(S \le x) = 0.95\) when (a) \(\lambda = 10\) and (b) \(\lambda = 100\). \item Comment on the obtained results. \end{enumerate} \end{example} \hypertarget{recursive-calculation-of-the-collective-risk-model}{% \section{Recursive calculation of the collective risk model}\label{recursive-calculation-of-the-collective-risk-model}} The Panjer recursion formula provides recursive calculation of the collective risk model. The algorithm can be numerically computed on a computer provided that distribution of claim numbers \(N\) satisfy Panjer's recursion formula, \[p_n = \left( a + \frac{b}{n} \right) p_{n-1}, \quad n = 1,2, \ldots,\] where \(a\) and \(b\) are constants. \begin{example} \protect\hypertarget{exm:unlabeled-div-53}{}\label{exm:unlabeled-div-53} Show that a Poisson distribution \(N \sim Poisson(\lambda)\) satisfies Panjer's recursion formula, i.e.~find the constants \(a\) and \(b\). \end{example} Assume that the claim size variable \(X\) takes only \textbf{positive integers} and the distribution of claim numbers satisfies the Panjer's recursion formula. We define \begin{itemize} \item \(f_k = \Pr(X = k), \quad k = 1,2, \ldots,\) \item \(g_r = \Pr(S = r), \quad r = 0,1,2, \ldots.\) \end{itemize} Then the unknown \(g_r\) can be recursively calculated by \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \(g_0 = p_0\), \item \(g_r = \sum_{j=1}^r \left( a + \frac{bj}{r} \right) f_j g_{r-j}, \quad r = 1,2 \ldots.\) \end{enumerate} \textbf{Note} If \(X\) is not a discrete random variable, then we first approximate it by a discrete distribution and then apply the Panjer's recursion algorithm. \begin{example} \protect\hypertarget{exm:unlabeled-div-54}{}\label{exm:unlabeled-div-54} Aggregate claims \(S\) have a compound Poisson distribution \(\mathcal{CP}(\lambda,F_X)\) where \(\lambda = 1\) and an individual claim amount \(X\) is either 1 or 2 with probability 3/4 and 1/4, respectively. Calculate \(g_r\) for \(r = 0,1,2,3,4,5\). 
\end{example} \hypertarget{premium-calculation}{% \section{Premium calculation}\label{premium-calculation}} In this section, rules for setting premium to be charged to cover a risk \(S\) (aggregate claims) are presented. The expected (mean) risk \(E[S]\) is referred to as the \textbf{pure premium}. In practice, the premium must be set to cover the expected risk, i.e.~\(P > E[S]\). Some premium calculation rules are as follows: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \textbf{The expected value principle (EVP)} The premium is given by a simple formula: \[P = E[S] + \theta\, E[S] = ( 1 + \theta)E[S],\] for some \(\theta > 0\), which is called the \textbf{relative security loading} on the pure premium \(E[S]\). The premium is increased by a percentage of the mean of the risk. \item \textbf{The standard deviation principle (SVP)} The premium is increased by a percentage of the standard deviation of the risk. \[P = E[S] + \theta\, \text{SD}[S].\] \item \textbf{The variance principle (VP)} The premium is increased by a percentage of the variance of the risk. \[P = E[S] + \theta\, Var[S].\] \end{enumerate} \begin{example} \protect\hypertarget{exm:unlabeled-div-55}{}\label{exm:unlabeled-div-55} Suppose that \(S\) has a compound Poisson distribution with Poisson parameters \(\lambda = 10\) and the claim sizes have a Pareto distribution \(Pa(4,3)\). \end{example} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Use the normal approximation and the translated gamma approximation to calculate the relative security loading such that the probability of a profit in the year is 0.95. \item Repeat the same question as above assumed that the SVP is applied. \end{enumerate} \textbf{Solution:} The loss distribution has Pareto distribution \(X \sim Pa(4,3)\). Therefore, \(\mathrm{E}[X] = 1\) and \(\mathrm{E}[X^2] = 3\). The mean and variance of the aggregate claim amounts are \(\mathrm{E}[S] = \lambda \mathrm{E}[X] = 10\) and \(\mathrm{Var}[S] = \lambda \mathrm{E}[X^2] = 30\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Use the normal approximation \(S \sim \mathcal{N}(10, 30)\). Let \(P\) denote the premium charged. We need to find the security loading \(\theta\) such that \[\Pr( P - S > 0) = 0.95.\] Assume that the premium charged to cover the risk follows the expected value principle (EVP). So we set \[P = E[S] + \theta\, E[S] = ( 1 + \theta)E[S] = ( 1 + \theta)(10).\] \begin{align} 0.95 &= \Pr( P - S > 0) \\ &= \Pr( ( 1 + \theta)(10) - S > 0) \\ &= \Pr( S < ( 1 + \theta)(10) ) \\ &= \Pr( Z < \frac{( 1 + \theta)(10) - 10 }{\sqrt{30}} ) \\ &= \Pr( Z < \frac{( \theta)(10) }{\sqrt{30}} ). \end{align} This gives \(\frac{( \theta)(10) }{\sqrt{30}} = 1.6448536\) and \[\theta = 0.9009234, \quad P = 19.009.\] Instead of using the normal approximation to aggregate claims \(S\), we now assume that \(S\) is approximated by \(Y + k\) where \(Y \sim \mathcal{G}(1.4814815, 0.2222222)\) and \(k = 3.3333333\) is a constant. 
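These translated gamma parameter values are quoted without derivation; as a side check (not in the original text), they can be reproduced in R from the first three moments of \(S\), using the Pareto moment formulas from earlier sections:

\begin{verbatim}
# Translated gamma parameters for S ~ CP(10, Pa(4, 3))
lambdaN <- 10; a <- 4; l <- 3
m1 <- l / (a - 1)                              # E[X]   = 1
m2 <- 2 * l^2 / ((a - 1) * (a - 2))            # E[X^2] = 3
m3 <- 6 * l^3 / ((a - 1) * (a - 2) * (a - 3))  # E[X^3] = 27
ES  <- lambdaN * m1                            # E[S]   = 10
VS  <- lambdaN * m2                            # Var[S] = 30
SkS <- lambdaN * m3 / (lambdaN * m2)^(3/2)     # Sk[S], about 1.643
alpha <- 4 / SkS^2                             # about 1.4815
rate  <- sqrt(alpha / VS)                      # about 0.2222
k     <- ES - alpha / rate                     # about 3.3333
# premium with Pr(S < P) = 0.95 under the translated gamma approximation
P <- k + qgamma(0.95, shape = alpha, rate = rate)   # about 20.77
\end{verbatim}

Up to rounding, the premium at the end of the sketch agrees with the value derived below.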
It follows that \begin{align} 0.95 &= \Pr( P - S > 0) \\ &= \Pr( ( 1 + \theta)(10) - S > 0) \\ &= \Pr( S < ( 1 + \theta)(10) ) \\ &= \Pr( Y < ( 1 + \theta)(10) - 3.3333333 ). \\ \end{align} \end{enumerate} Therefore, \(( 1 + \theta)(10) - 3.3333333 = 17.43845\), where 17.43845 is the 95\% quantile of the \(\mathcal{G}(1.4814815, 0.2222222)\) distribution (obtained in R from \texttt{qgamma(0.95, shape = 1.481481, rate = 0.222222)}), which results in \[\theta = 1.0771784, \quad P = 20.772.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Using the standard deviation principle (SVP), we set \[P = E[S] + \theta\, \text{SD}[S] = 10 + \theta \cdot \sqrt{30}.\] For the normal approximation of the aggregate claims, we obtain \begin{align} 0.95 &= \Pr( P - S > 0) \\ &= \Pr( 10 + \theta \cdot \sqrt{30} - S > 0) \\ &= \Pr( S < 10 + \theta \cdot \sqrt{30} ) \\ &= \Pr( Z < \frac{10 + \theta \cdot \sqrt{30} - 10 }{\sqrt{30}} ) \\ &= \Pr( Z < \theta). \end{align} \end{enumerate} Hence, \(\theta = 1.644854\) and \(P = 19.009\). \textbf{Comment} The loading factors (for the normal approximation) in the two cases are different because they are applied to different quantities (i.e.~\(\mathrm{E}[S]\) and \(\mathrm{SD}[S]\)). But they give the same premium. \hypertarget{ruin-theory}{% \chapter{Ruin Theory}\label{ruin-theory}} \hypertarget{the-classical-risk-process}{% \section{The classical risk process}\label{the-classical-risk-process}} Short term risk models for a fixed time period have been studied in the previous sections. In this section, risk models that evolve over time will be presented. Suppose that an insurer \begin{itemize} \item begins with an initial capital \(u\), called an initial surplus, \item collects premiums at a constant rate \(c\) per unit time, \item and pays claims when losses occur. \end{itemize} The insurer is in ruin if the insurer's capital becomes negative at some point in time, i.e.~the insurer's surplus falls below zero. \textbf{Note} A surplus is an excess of income or assets over expenditure or liabilities in a given period, typically a financial year. \begin{example} \protect\hypertarget{exm:ExampleSurplus}{}\label{exm:ExampleSurplus} \emph{An insurer has initial surplus \(u\) of 1 (in suitable units) and receives premium payments at a rate of 1 per year. Suppose claims from a portfolio of insurance over the first two years are as follows:} \begin{longtable}[]{@{}llll@{}} \toprule \emph{Time (years)} & \emph{0.4} & \emph{0.9} & \emph{1.5} \\ \midrule \endhead \emph{Amount} & \emph{0.8} & \emph{0.7} & \emph{1.2} \\ \bottomrule \end{longtable} \emph{Plot a surplus process and determine whether ruin occurs within the first three years.} \end{example} \textbf{Solution:} The insurer's surplus (or cash flow) at any future time \(t > 0\) is a random variable, since its value depends on the claims experience up to time \(t\). The insurer's surplus at time \(t\) is denoted \(U(t)\) and is given by \begin{equation} U(t) = u + ct - S(t), \end{equation} where the \textbf{aggregate claim amount up to time} \(t\), \(S(t)\), is \begin{equation} S(t) = \sum_{i = 1}^{N(t)} X_i . \end{equation} The following table summarises the values of the surplus function at the times when claims occur. 
\begin{longtable}[]{@{}ccc@{}} \toprule Time & Surplus (before claim) & Surplus (after claim) \\ \midrule \endhead 0 & 1 & 1 \\ 0.4 & 1.4 & 0.6 \\ 0.9 & 1.1 & 0.4 \\ 1.5 & 1 & -0.2 \\ \bottomrule \end{longtable} The surplus function increases at a constant rate \(c\) until there is a claim, at which point the surplus drops by the amount of the claim. The surplus then increases again at the same rate \(c\), and further drops occur as claims arise. In this example, ruin occurs at time 1.5. The plot of the surplus process is given in the following figure. \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-19-1.pdf} \caption{\label{fig:unnamed-chunk-19}The surplus process before the reinsurance arrangement.} \end{figure} \begin{example} \protect\hypertarget{exm:unlabeled-div-56}{}\label{exm:unlabeled-div-56} \emph{As given in Example \ref{exm:ExampleSurplus}, suppose that the insurer has effected proportional reinsurance with retained proportion of 0.7. The reinsurance premium is 0.4 per year to be paid continuously. Plot a surplus process and determine whether ruin occurs within the first three years. Comment on the results.} \end{example} \textbf{Solution:} The insurer's net premium income is 0.6 per year. The insurer's cash flow or surplus process is now given by \begin{equation} U_I(t) = u + (c - c_r)t - \alpha \cdot S(t), \end{equation} where \(c_r\) is the reinsurance premium rate and \(\alpha\) is the retained proportion. The following table summarises the values of the surplus function at the times when claims occur. \begin{longtable}[]{@{}ccc@{}} \toprule Time & Surplus (before claim) & Surplus (after claim) \\ \midrule \endhead 0 & 1 & 1 \\ 0.4 & 1.24 & 0.68 \\ 0.9 & 0.98 & 0.49 \\ 1.5 & 0.85 & 0.01 \\ \bottomrule \end{longtable} \begin{figure} \centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-21-1.pdf} \caption{\label{fig:unnamed-chunk-21}The surplus process under a proportional reinsurance arrangement.} \end{figure} It should be emphasised that under this proportional reinsurance arrangement, ruin does not occur within the first three years: the retained claims are small enough that the surplus never falls below zero. \hypertarget{classical-risk-process}{% \subsection{Classical risk process}\label{classical-risk-process}} The following assumptions are made for the study of the evolution of the insurer's surplus over time. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The insurer's initial capital is \(u\). \item The premium rate per unit of time received continuously is \(c\), i.e. the total amount of premiums received by time \(t\) is \(ct\). \item The counting process \(\{N(t) \}_{t \ge 0}\) for the number of claims occurring in the time interval \([0,t]\) is a Poisson process with parameter \(\lambda\). \item The claim sizes (or individual claim amounts) \(X_1, X_2, \ldots\) are independent and identically distributed random variables. \item The claim sizes \(X_1, X_2, \ldots\) are independent of the counting process \(N(t)\). \end{enumerate} The \textbf{surplus process} \(\{U(t) \}_{t \ge 0}\) is then given by \begin{equation} \label{eq:surplus} U(t) = u + ct - S(t), \end{equation} where the \textbf{aggregate claim amount up to time} \(t\), \(S(t)\), is \begin{equation} \label{eq:St} S(t) = \sum_{i = 1}^{N(t)} X_i . \end{equation} The evolution of the insurer's surplus defined in \eqref{eq:surplus} is also known as the \textbf{classical risk process}. The only random and uncertain quantity in \eqref{eq:surplus} is the aggregate claims \(S(t)\). 
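To make the definition concrete, the following minimal sketch (not taken from these notes; all parameter values are made up for illustration) simulates a single path of \eqref{eq:surplus} with \(\text{Exp}(1)\) claim sizes and checks the surplus immediately after each claim, which is the only place ruin can first occur:

\begin{verbatim}
# One simulated path of the classical risk process U(t) = u + c*t - S(t)
set.seed(1)
u <- 10; prem <- 12; lambda <- 10; Tmax <- 5        # hypothetical values
claim_times <- cumsum(rexp(200, rate = lambda))     # more arrivals than needed
claim_times <- claim_times[claim_times <= Tmax]     # keep those in (0, Tmax]
claims <- rexp(length(claim_times), rate = 1)       # Exp(1) claim sizes
surplus_after_claim <- u + prem * claim_times - cumsum(claims)
any(surplus_after_claim < 0)                        # TRUE if ruin occurs by Tmax
\end{verbatim}

Between claims the surplus only increases, so checking it just after each claim is sufficient to detect ruin on the whole path.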
\textbf{Notes} The classical risk model contains many simplifications. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The claim-arrival rate \(\lambda\) remains constant over time. \item No interest is paid on the surplus. \item There is no inflation. \item The premium income is received continuously in time. \item Claims are paid out \textbf{immediately}. \item There are assumptions of independence. \end{enumerate} \hypertarget{poisson-processes}{% \subsection{Poisson processes}\label{poisson-processes}} A \textbf{Poisson process} is a special type of counting process. It can be represented by a continuous time stochastic process \(\{N(t)\}_{t \ge 0}\) which takes values in the non-negative integers. It can be used to model the occurrence or arrival of events over a continuous time interval. The state space is discrete but the time set is continuous. Here \(N(t)\) represents the number of events in the interval \((0,t]\). The following examples can also be modelled by a Poisson process: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Claim arrivals at an insurance company, \item Accidents occurring on the highway, and \item Telephone calls to a call centre. \end{enumerate} \hypertarget{counting-process}{% \subsubsection*{Counting Process}\label{counting-process}} \addcontentsline{toc}{subsubsection}{Counting Process} A counting process \(\{N(t) \}_{t \ge 0}\) is a collection of non-negative, integer-valued random variables such that if \(0 \le s \le t\), then \(N(s) \le N(t)\). The following figure illustrates a trajectory of the Poisson process. The sample path of a Poisson process is a right-continuous step function. There are jumps occurring at times \(t_1, t_2, t_3, \ldots\). \begin{Shaded} \begin{Highlighting}[] \NormalTok{lambda }\OtherTok{\textless{}{-}} \DecValTok{17} \CommentTok{\# the length of time horizon for the simulation T\_length \textless{}{-} 31} \NormalTok{last\_arrival }\OtherTok{\textless{}{-}} \DecValTok{0} \NormalTok{arrival\_time }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{()} \NormalTok{inter\_arrival }\OtherTok{\textless{}{-}} \FunctionTok{rexp}\NormalTok{(}\DecValTok{1}\NormalTok{, }\AttributeTok{rate =}\NormalTok{ lambda)} \NormalTok{T\_length }\OtherTok{\textless{}{-}} \DecValTok{1} \ControlFlowTok{while}\NormalTok{ (inter\_arrival }\SpecialCharTok{+}\NormalTok{ last\_arrival }\SpecialCharTok{\textless{}}\NormalTok{ T\_length) \{ } \NormalTok{ last\_arrival }\OtherTok{\textless{}{-}}\NormalTok{ inter\_arrival }\SpecialCharTok{+}\NormalTok{ last\_arrival } \NormalTok{ arrival\_time }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(arrival\_time,last\_arrival) } \NormalTok{ inter\_arrival }\OtherTok{\textless{}{-}} \FunctionTok{rexp}\NormalTok{(}\DecValTok{1}\NormalTok{, }\AttributeTok{rate =}\NormalTok{ lambda)} \NormalTok{\}} \NormalTok{n }\OtherTok{\textless{}{-}} \FunctionTok{length}\NormalTok{(arrival\_time)} \NormalTok{counts }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n} \FunctionTok{plot}\NormalTok{(arrival\_time, counts, }\AttributeTok{pch=}\DecValTok{16}\NormalTok{, }\AttributeTok{ylim=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, n))} \FunctionTok{points}\NormalTok{(arrival\_time, }\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, counts[}\SpecialCharTok{{-}}\NormalTok{n]))} \FunctionTok{segments}\NormalTok{(} \AttributeTok{x0 =} \FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, arrival\_time[}\SpecialCharTok{{-}}\NormalTok{n]),} \AttributeTok{y0 =} \FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, counts[}\SpecialCharTok{{-}}\NormalTok{n]),} \AttributeTok{x1 =}\NormalTok{ arrival\_time,} \AttributeTok{y1 =} \FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, counts[}\SpecialCharTok{{-}}\NormalTok{n])} \NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/PlotPoissonProcess-1.pdf} Recall that a stochastic process \(\{N(t) \}_{t \ge 0}\) is a Poisson process with parameter \(\lambda\) if the process satisfies the following properties: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \(N(0) = 0\). \item \textbf{Independent increments} For \(0 < s < t \le u < v\), the increment \(N(t) - N(s)\) is independent of the increment \(N(v) - N(u)\), i.e. the number of events in \((s,t]\) is independent of the number of events in \((u,v]\). \item \textbf{Stationary increments} For \(0 < s <t\), the distribution of \(N(t) - N(s)\) depends only on \(t -s\) and not on the values \(s\) and \(t\), i.e.~the increments of the process over time have a distribution that depends only on the time difference \(t - s\), the length of the time interval. \item \textbf{Poisson distribution} For \(t \ge 0\), the random variable \(N(t)\) has a Poisson distribution with mean \(\lambda t\). \end{enumerate} It follows from the \textbf{Stationary Increments} and \textbf{Poisson Distribution} properties that \[\Pr(N(t) - N(s) = n) = \Pr(N(t-s) - N(0) = n) = \frac{ ( \lambda(t-s))^n e^{-\lambda(t-s)} }{n!}, \quad \quad s < t, \, n = 0,1,2, \ldots\] \textbf{Notes} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The sample paths of \(\{N(t) \}_{t \ge 0}\) are non-decreasing step functions, i.e.~the process is a counting process. \item A process with stationary and independent increments can be thought of as \textbf{starting over} at any point in time in a probabilistic sense. The `starting over' property follows from the fact that \begin{itemize} \item the exponential distribution has the memoryless property, and \item the times between successive events (or interarrival times) are independent and identically distributed exponential random variables with mean \(1/\lambda\). \end{itemize} \item For more details about Poisson processes, please refer to the contents of the course ``SCMA 469 Actuarial Statistics''. \end{enumerate} \hypertarget{compound-poisson-processes}{% \subsection{Compound Poisson processes}\label{compound-poisson-processes}} The aggregate claims process \(S(t)\) defined in \eqref{eq:St} of the classical risk process is said to be a \textbf{compound Poisson process} with Poisson parameter \(\lambda\). The compound Poisson process has the following important properties: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item For each \(t\), the random variable \(S(t)\) has a compound Poisson distribution with parameter \(\lambda t\), i.e. \[S(t) \sim \mathcal{CP}(\lambda t, F_X(x)).\] Thus, the mean and variance of the compound Poisson distribution are \[\mathrm{E}[S(t)] = \lambda t \mathrm{E}[X], \quad \mathrm{Var}[S(t)] =\lambda t \mathrm{E}[X^2].\] The moment generating function of \(S(t)\) is \[M_{S(t)}(r) = \exp(\lambda t(M_X(r) - 1)).\] \item It has stationary and independent increments, i.e.~for disjoint time intervals \(0 < s < t \le u < v\), the random variables \(N(t) - N(s)\) and \(N(v) - N(u)\) are independent and \(N(t) - N(s)\) depends only on \(t -s\) and not on the values \(s\) and \(t\). 
Hence, the random variables \(S(t) - S(s)\) and \(S(v) - S(u)\) are \textbf{independent} and have \(\mathcal{CP}(\lambda (t -s), F_X(x))\) and \(\mathcal{CP}(\lambda (v - u), F_X(x))\) distributions, respectively. \end{enumerate} \textbf{Notes} Various properties of the aggregate claims process \(S(t)\) can be summarised as follows: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \(S(1) \sim \mathcal{CP}(\lambda, F_X(x))\) is the aggregate claims in the first year. \item \(S(n) - S(n-1) \sim \mathcal{CP}(\lambda, F_X(x))\) is the aggregate claims in the \(n\)th year, for \(n = 1,2, \ldots\). \item The process \(\{ S(n) - S(n-1) \}_{n=1}^\infty\) is a sequence of \textbf{independent and identically distributed} random variables representing the aggregate claims in successive years. \end{enumerate} \hypertarget{the-relative-safety-loading}{% \subsection{The relative safety loading}\label{the-relative-safety-loading}} According to the expected value principle, the premium rate \(c\) per unit time is defined by \[c = (1 + \theta) \mathrm{E}[S(1)] = (1 + \theta) \lambda \mu_X.\] Hence the \textbf{relative safety loading} (or \textbf{premium loading factor} or \textbf{relative security loading}) \(\theta\) is given by \[\theta = \frac{c - \lambda \mu_X}{\lambda \mu_X}.\] In addition, the insurer should load the premium for profit so that \(c > \lambda \mu_X\). This finding follows from the following example. Let \(\mu_X\) and \(\sigma^2_X\) denote the mean and the variance of claim sizes \(X_i\) (in one period). \begin{example} \protect\hypertarget{exm:unlabeled-div-57}{}\label{exm:unlabeled-div-57} Consider the following questions. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Calculate the expected surplus and the variance surplus at time \(t\). \item Calculate the expected profit per unit time in \((0, t]\). \end{enumerate} \end{example} \textbf{Solution:} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item From \(U(t) = u + ct - S(t),\) the expected surplus at time \(t\) is \begin{align} \mathrm{E}[U(t)] &= u + ct - \mathrm{E}[S(t)] \\ &= u + ct - (\lambda t)\mathrm{E}[X] \\ &= u + ct - (\lambda t)\mu_X \\ &= u + (c - \lambda \mu_X)\cdot t, \end{align} and \[ \mathrm{Var}[U(t)] = \mathrm{Var}[S(t)] = (\lambda t)\mathrm{E}[X^2].\] \end{enumerate} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item The expected profit per unit time in \((0, t]\) can be calculated from \[\frac{\mathrm{E}[U(t) - U(0)]}{t} = c - \lambda \mu_X.\] This motivates the \textbf{net profit condition}: \[c > \lambda \mu_X. \] Given \(\lambda\) and \(\mu_X\), we aim to set the premium rate \(c\) that satisfies the net profit condition. \end{enumerate} \textbf{Notes} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The insurer can make a profit provided that \(c > \lambda \mu_X\) or the relative safety loading \(\theta\) is positive. In this case, the surplus will drift to \(\infty\), but ruin could still occur. The rate at which premium income comes in is greater than the rate at which claims are paid out. \item On the other hand, if \(c < \lambda \mu_X\), then the surplus will drift to \(-\infty\), but ruin is certain. \item If \(c = \lambda \mu_X\), the surplus will drift to \(\infty\) and \(-\infty\), but ruin is certain (eventually). \end{enumerate} \hypertarget{ruin-probabilities}{% \subsection{Ruin probabilities}\label{ruin-probabilities}} Various definitions of ruin probabilities are given. 
\begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The \textbf{probability of ruin in infinite time} (or the \textbf{ultimate ruin probability}) is defined by \[\psi(u) = \Pr(U(t) < 0 \quad \text{ for some } t > 0).\] \item The \textbf{finite-time ruin probability} (or the \textbf{probability of ruin by time \(t\)}) is defined by \[\psi(u,t) = \Pr(U(s) < 0 \quad \text{ for some } s \in (0,t]).\] \item The \textbf{discrete time ultimate ruin probability} is defined by \[\psi_h(u) = \Pr(U(t) < 0 \quad \text{ for some } t \in \{h, 2h, 3h, \ldots \}).\] \item The \textbf{discrete time ruin probability in finite time} is defined by \[\psi_h(u,t) = \Pr(U(s) < 0 \quad \text{ for some } s \in \{h, 2h, 3h, \ldots, t\}).\] \end{enumerate} \textbf{Notes} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item For \(0 \le u_1 \le u_2\), \[\psi(u_1) \ge \psi(u_2),\] and \[\psi(u_1,t) \ge \psi(u_2,t),\] i.e.~the ultimate ruin probability and finite-time ruin probability are non-increasing in \(u\). Intuitively, the larger the initial surplus, the less likely it is that ruin will occur either in a finite time period or an unlimited time period. \item If ruin occurs under the discrete time, it must occur under the continuous time, i.e.~\[\psi_h(u) < \psi(u).\] Similarly, \[\psi_h(u,t) < \psi(u,t).\] \item For a given initial surplus \(u\) and \(0 < t_1 < t_2\), \[\psi(u,t_1) < \psi(u,t_2).\] Intuitively, the longer the period considered when checking for ruin, the more likely it is that ruin will occur. \item The discrete time ultimate ruin probability \(\psi_h(u)\) could be used as an approximation of \(\psi(u)\) provided \(h\) is sufficiently small. \item The discrete time ruin probability in finite time \(\psi_h(u,t)\) could be used as an approximation of \(\psi(u,t)\) provided \(h\) is sufficiently small. \end{enumerate} \begin{example} \protect\hypertarget{exm:unlabeled-div-58}{}\label{exm:unlabeled-div-58} \emph{Suppose the annual aggregate claims for a portfolio of policies is approximately normal.} \begin{itemize} \item \emph{The insurer's initial surplus is 1000 (in suitable units) and the premium rate is 1500 per year.} \item \emph{The number of claims per year has a Poisson distribution with parameter 50.} \item \emph{The distribution of claim sizes is lognormal with parameters \(\mu = 3\) and \(\sigma^2 = 0.9\).} \end{itemize} \emph{Calculate the probability that the insurer's surplus at time 2 will be negative.} \end{example} \textbf{Solution:} Using the normal approximation, the total claims \(S\) can be approximated by \(S \sim \mathcal{N}(\mathrm{E}[S], \mathrm{Var}[S])\). We have \begin{align} \mathrm{E}[X] &= e^{\mu + \sigma^2/2} = 31.500392 \\ \mathrm{E}[X^2] &= e^{2\mu + 2\sigma^2} = 2440.601978. \end{align} Therefore, \begin{align} \mathrm{E}[S(2)] &= 2(50)\mathrm{E}[X] = 3150.039231 \\ \mathrm{Var}[S(2)] &= 2(50)\mathrm{E}[X2] = \ensuremath{2.440602\times 10^{5}} . \end{align} Hence, ruin will occur if \(S(2)\) is greater than the initial surplus plus premiums received. Therefore, the probability of ruin is \begin{align} \Pr(S(2) > u + 2c) &= \Pr(S(2) > 1000 + 2(1500)) \\ &= \Pr(Z > \frac{1000 + 2(1500) - 3150.0392309}{\sqrt{\ensuremath{2.440602\times 10^{5}}}}) \\ &= \Pr(Z > 1.720483) = 0.04267. \end{align} the probability of ruin is approximately 4.267\%. \hypertarget{simulation-of-ruin-probabilities}{% \section{Simulation of ruin probabilities}\label{simulation-of-ruin-probabilities}} In this section, we will use simulation to numerically estimate the probability of ruin. 
First, we introduce the \textbf{inverse transform method}, which is a method for generating random numbers from any probability distribution by using its inverse cumulative distribution. \begin{example} \protect\hypertarget{exm:unlabeled-div-59}{}\label{exm:unlabeled-div-59} \emph{Let \(F(x)\) be a continuous cumulative density function. Let \(Y\) be a random variable with a \(U(0,1)\) distribution. Define the random variable \(X\) by \[X = F^{-1}(Y).\] Show that the cumulative density function of \(X\), \(F_X(x)\) is \(F(x)\).} \end{example} \textbf{Solution:} We need to show that \(Pr(X \le x) = F(x)\) for all \(x\), i.e.~\(F_X(x)= F(x)\) as defined above. It follows from the monotonicity of \(F\) and the definition \begin{align} F_X(x) &= \Pr(X \le x)\\ &= \Pr(F^{-1}(Y) \le x)\\ &= \Pr(F(F^{-1}(Y)) \le F(x)\\ &= \Pr(Y \le F(x))\\ \end{align} Since \(Y \sim U(0,1)\), we have \(\Pr(Y \le t) = t\) for any \(t \in [0,1]\). Therefore, \[ F_X(x) = \Pr(Y \le F(x)) = F(x).\] \textbf{Note} We can use this result to generate values from the required probability distribution (which will be useful in Excel). In order to generate \(X_1, X_2, X_3, \ldots, X_n\) from \(\mathcal{G}(\alpha,\lambda)\) (or any other distributions) in Excel, we use \texttt{GAMMAINV(RAND(),\ alpha,\ 1/lambda)}. However, in R, we can simply use \texttt{rgamma(n,\ alpha,\ lambda)} to generate \(n\) random numbers from the \(\mathcal{G}(\alpha,\lambda)\) distribution. \begin{example} \protect\hypertarget{exm:unlabeled-div-60}{}\label{exm:unlabeled-div-60} \emph{The aggregate claims process for a risk is compound Poisson with Poisson parameter \(\lambda = 100\) per year. Individual claim amounts have \(\text{Pa}(4,3)\). The premium income per year is \(c = 110\) (in suitable units), received continuously.} \emph{Using either Excel or R to simulate 1000 values of aggregate claims \(S\), assuming that \(S\) is approximated by a translated gamma approximation,} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \emph{Estimate \(\hat{\psi}_1(50, 5)\), an estimate of \(\psi_1(50,5)\).} \item \emph{Estimate the standard error of \(\hat{\psi}_1(50, 5)\).} \item \emph{Calculate a 95\% confidence interval for your estimate in 1.} \item \emph{Estimate \(\psi_{0.5}(50, 5)\).} \end{enumerate} \end{example} \textbf{Solution:} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item An estimate of \(\psi_1(u,5)\) with \(u = 50\) and \(c =110\) can be obtained as follows: \end{enumerate} From the properties of \(S(t)\), \begin{itemize} \item The aggregate claims in the first year have \(S(1) \sim \mathcal{CP}(\lambda, F_X(x))\) distribution with \(\lambda = 100\) and \(X \sim \text{Pa}(4,3)\) \item The aggregate claims in the \(j\)th year, for \(j = 1,2, \ldots, 5\) have \(S(j) - S(j-1) \sim \mathcal{CP}(\lambda, F_X(x))\) distribution. \end{itemize} It follows that \begin{align} \psi_1(u,5) &= \Pr(U(j) < 0 \quad \text{ for at least one of } j \in \{1,2, \ldots, 5\}) \\ &= \Pr(u + cj - S(j) < 0 ), \text{ for at least one of } j = 1,2, \ldots, 5. \end{align} When \begin{itemize} \item \(j = 1, U(1) = 50 + 110 - S(1)\) \item \(j = 2, U(2) = 50 + (2)110 - S(2) = U(1) + c - (S(2) - S(1))\) \item \(j = 3, U(3) = 50 + (3)110 - S(3) = U(2) + c - (S(3) - S(2))\) \item \(j = 4, U(4) = 50 + (4)110 - S(4) = U(3) + c - (S(4) - S(3))\) \item \(j = 5, U(5) = 50 + (5)110 - S(5) = U(4) + c - (S(5) - S(4))\) \end{itemize} The algorithm to estimate the finite time ruin in discrete time can be described as follows: Step 1. 
Simulate values of \(S(1), S(2) - S(1), \ldots, S(5) - S(4)\) from the \(\mathcal{CP}(\lambda, F_X(x))\) distribution. Then compute \(U(1), U(2), \ldots, U(5)\). Step 2. Check whether any of \(U(1), U(2), \ldots, U(5)\) is negative. Step 3. Repeat Steps 1 and 2 1000 times. Step 4. Let \(M\) be the number of simulations out of 1000 where ruin occurs. Then \(\hat{\psi}_1(50, 5) = \frac{M}{1000}.\) From the results, there are \(M = 21\) simulations in which ruin occurs, and hence \[\hat{\psi}_1(50, 5) = \frac{M}{1000} = \frac{21}{1000}.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item The estimation of the standard error of \(\hat{\psi}_1(50, 5)\) can be obtained as follows. We know that \(M \sim \mathcal{B}(1000,p)\), where \(p = \psi_1(50, 5)\) is estimated by \(\hat{\psi}_1(50, 5) = 0.021\). Then, \[\textrm{Var}\left[\frac{M}{1000}\right] = \frac{1}{1000^2}\textrm{Var}[M] = \frac{1000 (p) (1-p)}{1000^2}\] and \[ \textrm{SD}[\hat{\psi}_1(50, 5)] = \sqrt{\frac{1000 (0.021) (1-0.021)}{1000^2}} = 0.004534. \] \item The 95\% confidence interval of the estimate is \[ (\hat{\psi}_1(50, 5) - z_{\alpha/2}\textrm{SD}[\hat{\psi}_1(50, 5)], \hat{\psi}_1(50, 5) + z_{\alpha/2} \textrm{SD}[\hat{\psi}_1(50, 5)]) = (0.012113,0.029887).\] \item For the estimation of the discrete time probability of ruin where the surplus process is checked at time intervals of length 0.5, we proceed as follows. First we note that \end{enumerate} \(S(1/2), S(1) - S(1/2), S(3/2) - S(1), \ldots, S(5) - S(9/2)\) each have a \(\mathcal{CP}((1/2)\lambda, F_X(x))\) distribution. In addition, \begin{itemize} \item \(U(1/2) = U(0) + c\,(1/2) - S(1/2)\), \item \(U(1) = U(1/2) + c\,(1/2) - ( S(1) - S(1/2) )\), \item \(U(3/2) = U(1) + c\,(1/2) - (S(3/2) - S(1))\), \(\ldots\), \item \(U(5) = U(9/2) + c\,(1/2) - ( S(5) - S(9/2) )\). \end{itemize} It follows that \begin{align} \mathrm{E}[S(1/2)] &= (1/2)(100)\mathrm{E}[X] = (1/2)\mathrm{E}[S(1)] = 50 \\ \mathrm{Var}[S(1/2)] &= (1/2)(100)\mathrm{E}[X^2] = (1/2)\mathrm{Var}[S(1)] = 150 \\ \mathrm{Sk}[S(1/2)] &= \sqrt{2} \mathrm{Sk}[S(1)] = 0.734847 . \end{align} Now we assume that \(S(j) - S(j - 1/2)\) for \(j = 1/2, 1, 3/2, \ldots, 5\) can be approximated by \(Y + k\) where \(Y \sim \mathcal{G}(\alpha, \beta)\) and \(k\) is a constant. It follows that \[ \hat{\alpha} = 7.407407, \quad \hat{\beta} = 0.222222, \quad \hat{k} = 16.666667. \] The simulations can be obtained in the same way. (A short R sketch of the yearly-step simulation from part 1 is given below.) 
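The following is a minimal sketch of the yearly-step simulation in R, not the code used to produce the figure below; it recomputes the translated gamma parameters from the Pareto moment formulas and estimates \(\psi_1(50,5)\) by checking the surplus at integer times. The estimate will vary with the random seed, so it need not equal \(21/1000\) exactly.

\begin{verbatim}
# Sketch: estimate psi_1(50, 5) for S(1) ~ CP(100, Pa(4, 3)), c = 110, u = 50,
# with annual increments of S approximated by a translated gamma distribution
set.seed(1)
nsim <- 1000; u <- 50; prem <- 110
lambdaN <- 100; a <- 4; l <- 3                          # X ~ Pa(4, 3)
m1 <- l/(a-1); m2 <- 2*l^2/((a-1)*(a-2)); m3 <- 6*l^3/((a-1)*(a-2)*(a-3))
ES <- lambdaN*m1; VS <- lambdaN*m2
SkS <- lambdaN*m3 / (lambdaN*m2)^(3/2)
alpha <- 4/SkS^2; rate <- 2/(SkS*sqrt(VS)); k <- ES - alpha/rate
ruin <- logical(nsim)
for (i in 1:nsim) {
  U <- u
  for (j in 1:5) {                                      # check U(1), ..., U(5)
    S_inc <- k + rgamma(1, shape = alpha, rate = rate)  # aggregate claims in year j
    U <- U + prem - S_inc
    if (U < 0) { ruin[i] <- TRUE; break }
  }
}
mean(ruin)                                              # estimate of psi_1(50, 5)
\end{verbatim}

The half-year version of part 4 only requires replacing the five annual increments by ten increments drawn from the translated gamma approximation with \(\hat{\alpha} = 7.407407\), \(\hat{\beta} = 0.222222\) and \(\hat{k} = 16.666667\), and adding \(c/2\) premium per step.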
\begin{figure} {\centering \includegraphics{SCMA470Bookdownproj_files/figure-latex/figSurplus-1} } \caption{The sample paths of the surplus process.}\label{fig:figSurplus} \end{figure} \hypertarget{tutorials}{% \chapter{Tutorials}\label{tutorials}} \hypertarget{tutorial-1}{% \section{Tutorial 1}\label{tutorial-1}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Using the method of moments, calculate the parameter values for the gamma, lognormal and Pareto distributions for which \[\mathrm{E}[X] = 500 \quad \text{and} \quad \mathrm{Var}[X] = 100^2.\] \textbf{Answer:} \end{enumerate} \begin{enumerate} \def\labelenumi{\alph{enumi}.} \item gamma: \(\tilde{\alpha} = 25\), \(\tilde{\lambda} = 0.05.\) \item lognormal: \(\tilde{\mu} = 6.194998\), \(\tilde{\sigma} = 0.1980422.\) \item the method of moments cannot be applied for the Pareto distribution: for the given values of \(\mathrm{E}[X] = 500\) and \(\mathrm{Var}[X] = 100^2\), the resulting values of \(\tilde{\alpha}\) and \(\tilde{\lambda}\) are negative. Note that if we fix the sample standard deviation \(s = 100\) and vary \(\bar{x}\) within the interval \([90,110]\), the plot of \(\bar{x}\) against \(\tilde{\alpha}\) is shown below. Notice that when \(\bar{x} = 100\), \(\tilde{\alpha}\) tends to infinity. 
\end{enumerate} \begin{Shaded} \begin{Highlighting}[] \NormalTok{xbar }\OtherTok{\textless{}{-}}\DecValTok{90}\SpecialCharTok{:}\DecValTok{110} \NormalTok{s }\OtherTok{\textless{}{-}} \DecValTok{100} \CommentTok{\# MME} \NormalTok{alpha\_tilde }\OtherTok{\textless{}{-}} \DecValTok{2}\SpecialCharTok{*}\NormalTok{s}\SpecialCharTok{\^{}}\DecValTok{2}\SpecialCharTok{/}\NormalTok{xbar}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{*} \DecValTok{1}\SpecialCharTok{/}\NormalTok{(s}\SpecialCharTok{\^{}}\DecValTok{2}\SpecialCharTok{/}\NormalTok{xbar}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{{-}} \DecValTok{1}\NormalTok{)} \NormalTok{lambda\_tilde }\OtherTok{\textless{}{-}}\NormalTok{ xbar}\SpecialCharTok{*}\NormalTok{(alpha\_tilde }\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{)} \FunctionTok{plot}\NormalTok{(xbar,alpha\_tilde, }\AttributeTok{xlab =} \FunctionTok{expression}\NormalTok{(}\FunctionTok{bar}\NormalTok{(x)), }\AttributeTok{ylab =} \FunctionTok{expression}\NormalTok{(alpha))} \end{Highlighting} \end{Shaded} \includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-26-1.pdf} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Show that if \(X \sim \text{Exp}(\lambda)\), then the random variable \(X-w\) conditional on \(X > w\) has the same distribution as \(X\), i.e. \[X \sim \text{Exp}(\lambda)\Rightarrow X - w | X > w \sim \text{Exp}(\lambda).\] \end{enumerate} \textbf{Solution:} Let \(W = Z|Z>0\) be a random variable representing the amount of a non-zero payment by the reinsurer on a reinsurance claim. The distribution and density of \(W\) can be calculated as follows: for \(x > 0\), \[\begin{aligned} \Pr[W \le x ] &= \Pr[Z \le x | Z >0] \\ &= \Pr[X - M \le x | X > M] \\ &= \frac{\Pr[M < X \le x + M]}{\Pr[X > M]}\\ &= \frac{F_X(x+M) - F_X(M)}{1-F_X(M)}.\end{aligned}\] Given \(X \sim \textrm{Exp}(\lambda)\), \(F_X(x) = 1 - e^{-\lambda x}.\) Moreover, \[\begin{aligned} \Pr[W \le x ] &= \frac{F_X(x+M) - F_X(M)}{1-F_X(M)} \\ &= \frac{ e^{-\lambda (M)} - e^{-\lambda (x+M)}}{e^{-\lambda (M)}} \\ &= 1 - e^{-\lambda x}. \end{aligned} \] Hence, \(W \sim \textrm{Exp}(\lambda)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Derive an expression for the variance of the \(\text{Pa}(\alpha, \lambda)\) distribution. (Hint: using the pdf) \end{enumerate} \textbf{Solution:} Recall that that for \(X \sim \mathcal{Pa}(\alpha,\lambda)\), its density function is \[ f_X(x) = \frac{\alpha \lambda^\alpha}{ (x + \lambda)^{\alpha + 1}}.\] \[ \begin{aligned} \mathrm{E}[X] &= \int_0^\infty x \frac{\alpha \lambda^\alpha}{ (x + \lambda)^{\alpha + 1}}\, dx \quad \text{(using integration by part:} \quad u = x \text{ and } (\lambda + x)^{-(\alpha + 1)}dx = dv) \\ &= -\alpha \lambda^\alpha \left( \frac{x}{\alpha} (\lambda + x)^{-\alpha} \right)\bigg|_0^\infty + (\alpha \lambda^\alpha)(\frac{1}{\alpha})\int_0^\infty \frac{1}{(\lambda + x)^\alpha} \, dx \end{aligned} \] Using the fact that for \(X \sim \mathcal{Pa}(\alpha,\lambda)\), \(\textrm{E}[X]\) exists when \(\alpha > 1\), which will be assumed on the first term above. This assumption simplifies the above results as follows: \[ \begin{aligned} \mathrm{E}[X] &= 0 + \int_0^\infty \frac{\lambda^\alpha}{(\lambda + x)^\alpha} \, dx \\ &= \frac{\lambda}{\alpha - 1} \int_0^\infty \frac{(\alpha - 1)\lambda^{\alpha - 1}}{(\lambda + x)^\alpha} \, dx \\ &= \frac{\lambda}{\alpha - 1} \cdot 1. 
\end{aligned} \] Note that the last integral integrate to 1 because the integrand is the density function of a Pareto distribution. One can also show that \[ \mathrm{E}[X^2] = \frac{2 \lambda^2}{(\alpha - 1)(\alpha - 2)}.\] Therefore, \[ \mathrm{Var}[X] = \mathrm{E}[X^2] - (\mathrm{E}[X])^2 = \frac{\alpha \lambda^2}{(\alpha - 1)^2(\alpha - 2)}.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{3} \item Show that the MLE (the maximum likelihood estimation) of \(\lambda\) for an \(\text{Exp}(\lambda)\) distribution is the reciprocal of the sample mean, i.e.~\(\hat\lambda = 1/ \bar x\). \textbf{Solution:} Suppose we have a random sample \(x = (x_1, x_2, \ldots, x_n)\) of \(X \sim \text{Exp}(\lambda).\) We have \[ \begin{aligned} L(\lambda) &= \Pi_{i=1}^n f(x_i, \lambda) = \Pi_{i=1}^n \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda \sum x_i}. \\ l(\lambda) &= \log( L(\lambda)) = n \log(\lambda) - \lambda \sum x_i \end{aligned} \] The MLE can be obtained by maximise \(l(\lambda)\) with respect to \(\lambda\). \[ \frac{d\, l(\lambda)}{d \lambda } = \frac{n}{\lambda} - \sum x_i = 0. \] Therefore, the MLE of \(\lambda\) is \(\hat{\lambda} = 1/\bar{x}.\) \item Claims last year on a portfolio of policies of a risk had a lognormal distribution with parameter \(\mu = 5\) and \(\sigma^2 = 0.4\). It is estimated that all claims will increase by 15\% next year. Find the probability that a claim next year will exceed 1000. \textbf{Solution:} From \(X \sim \mathcal{LN}(5,0.4)\), \(\log X \sim \mathcal{N}(5,0.4)\). Claims in next year will increase by 15\%. We define \(Y = (1+15\%) X = 1.15 X\). We also have \[ \begin{aligned} \Pr(Y > 1000) &=\Pr(1.15 X > 1000) \\ &=\Pr(\log X > \log (1000/1.15)) \\ &=\Pr\left(Z > \frac{\log (1000/1.15)) - 5 }{\sqrt{0.4}}\right) \\ &=\Pr\left(Z > 2.7954\right) \\ &=1 - \Pr\left(Z \le 2.7954\right) \\ &= 1 - 0.9974082 = 0.002591777. \\ \end{aligned} \] I used \texttt{R} to obtain the required probability. \end{enumerate} \hypertarget{tutorial-2}{% \section{Tutorial 2}\label{tutorial-2}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Claims occur on a general insurance portfolio independently and at random. Each claim is classified as being of ``Type A'' or ``Type B''. Type A claim amounts are distributed \(\text{Pa}(3,400)\) and Type B claim amounts are distributed \(\text{Pa}(4,1000)\). It is known that 90\% of all claims are of Type A. Let \(X\) denote a claim chosen at random from the portfolio. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate \(\Pr(X > 1000).\) \item Calculate \(\mathrm{E}[X]\) and \(\mathrm{Var}[X]\). \item Let \(Y\) have a Pareto distribution with the same mean and variance as \(X\). Calculate \(\Pr(Y > 1000).\) \item Comment on the difference in the answers found in 1.1 and 1.2. \end{enumerate} \item An insurer covers an individual loss \(X\) with excess of loss reinsurance with retention level \(M\). Let \(Y\) and \(Z\) be random variables representing the amounts paid by the insurer and reinsurer, respectively, i.e.~\(X = Y + Z\). Show that \(\mathrm{Cov}[Y,Z] \ge 0\) and deduce that \[\mathrm{Var}[X] \ge \mathrm{Var}[Y] + \mathrm{Var}[Z].\] Comment on the results obtained. \item Claim amounts from a general insurance portfolio are lognormally distributed with mean 200 and variance 2916. Excess of loss reinsurance with retenton level 250 is arranged. Calculate the probability that the reinsurer is involved in a claim. 
\item Show that if \(X \sim \text{Pa}(\alpha, \lambda)\), then the random variable \(X-d\) conditional on \(X > d\) has a pareto distribution with parameters \(\alpha\) and \(\lambda + d\), i.e. \[X \sim \text{Pa}(\alpha, \lambda)\Rightarrow X - d | X > d \sim \text{Pa}(\alpha,\lambda + d).\] \item Consider a portfolio of motor insurance policies. In the event of an accident, the cost of the repairs to a car has a Pareto distribution with parameters \(\alpha\) and \(\lambda\). A deductible of 100 is applied to all claims and a claim is always made if the cost of the repairs exceeds this amount. A sample of 100 claims has mean 200 and standard deviation 250. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Using the method of moments, estimate \(\alpha\) and \(\lambda\). \item Estimate the proportion of accidents that do not result in a claim being made. \item The insurance company arranges excess of loss reinsurance with another insurance company to reduce the mean amount it pays on a claim to 160. Calculate the retention limit needed to achieve this. \end{enumerate} \end{enumerate} \hypertarget{solutions-to-tutorial-2}{% \section{Solutions to Tutorial 2}\label{solutions-to-tutorial-2}} \textbf{Solution:} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \[\begin{aligned} \Pr(X > 1000) &= \Pr(X > 1000 | A) \Pr(A) + \Pr(X > 1000 | B) \Pr(B) \\ &= \left( \frac{ 400 }{ 1000 + 400} \right)^{3} (0.9) + \left( \frac{ 1000 }{ 1000 + 1000} \right)^{4} (0.1) \\ &= 0.0272413. \end{aligned}\] \item Using the conditional expectation and conditional variance formulas, we have \end{enumerate} \end{enumerate} \[\begin{aligned} \mathrm{E}[X|A] &= \frac{400 }{3 - 1} = 200, \\ \mathrm{E}[X|B] &= \frac{1000 }{4 - 1} = 333.3333333. \end{aligned} \] Hence, \[\begin{aligned} \mathrm{E}[X] &= \mathrm{E}[\mathrm{E}[X|\text{type}]] \\ &= \Pr(A) \, \mathrm{E}[X|A] + \Pr(B) \, \mathrm{E}[X|B] \\ &= (0.9)(200) + (0.1)(333.3333333) \\ &= 213.3333333. \end{aligned} \] \begin{verbatim} We also have \end{verbatim} \[\begin{aligned} \mathrm{E}[X^2|A] &= \mathrm{Var}[X|A] + (\mathrm{E}[X|A])^2 = \frac{(3) (400)^2}{(3 - 1)^2 (3 - 2)} + (200)^2 = \ensuremath{1.6\times 10^{5}},\\ \mathrm{E}[X^2|B] &= \mathrm{Var}[X|B] + (\mathrm{E}[X|B])^2 = \frac{(4) (1000)^2}{(4 - 1)^2 (4 - 2)} + (333.3333333)^2 = \ensuremath{3.3333333\times 10^{5}}. \end{aligned} \] Therefore, \(\mathrm{E}[X^2] = (0.9)(\ensuremath{1.6\times 10^{5}}) + (0.1)(\ensuremath{3.3333333\times 10^{5}}) = \ensuremath{1.7733333\times 10^{5}}\), and \(\mathrm{Var}[X] = \ensuremath{1.3182222\times 10^{5}}\), \quad \(\mathrm{SD}[X] = 363.0733014\). \begin{verbatim} 3. Using moment matching estimation, we have \end{verbatim} \(\tilde{\alpha} = 3.0545829\) and \(\tilde{\beta} = 438.3110196\). Therefore, \(\Pr[Y > 1000] = 0.0265228 < 0.0272413\). \begin{verbatim} 4. Failure to separate two types of claims leads to an underestimation of tail probability. This affects the determination of premiums, reinsurance, and security. \end{verbatim} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item From \(X = Y + Z\), where \(Y = \min(X,M)\) and \(Z = \max(0,X -M)\). 
\[\begin{aligned} \mathrm{Cov}(Y,Z) &= \mathrm{E}[YZ] - \mathrm{E}[Y]\mathrm{E}[Z] \\ &= \int_0^\infty \min(x,M) \max(0,x -M) f_X(x) \, dx - \mathrm{E}[Y]\mathrm{E}[Z] \\ &= M \int_M^\infty (x -M) f_X(x) \, dx - \mathrm{E}[Y]\mathrm{E}[Z] \\ &= M \mathrm{E}[Z] - \mathrm{E}[Y]\mathrm{E}[Z] \\ &= \mathrm{E}[Z](M - \mathrm{E}[Y]) \ge 0. \\ \end{aligned}\] \end{enumerate} \end{enumerate} Therefore, \[\mathrm{Var}[X] = \mathrm{Var}[Y + Z] = \mathrm{Var}[Y] + \mathrm{Var}[Z] + 2\,\mathrm{Cov}(Y,Z) \ge \mathrm{Var}[Y] + \mathrm{Var}[Z].\] Consequently, there is a reduction in the variability of the amount paid out by the direct insurer on claims. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item By the method of moments, the MMEs of the parameters \(\mu\) and \(\sigma\) can be found by matching \(\mathrm{E}[X]\) to the sample mean and \(\mathrm{Var}[X]\) to the sample variance: \end{enumerate} \[\mathrm{E}[X] = \exp\left(\mu + \frac{1}{2} \sigma^2 \right) = 200 \text{ and } \mathrm{Var}[X] =\exp\left(2\mu + \sigma^2 \right) (\exp(\sigma^2) - 1) = 2916.\] We find that \(\tilde{\mu} = 5.2631347\) and \(\tilde{\sigma} = 0.2652645\). Moreover, \[ \Pr(X > 250) = \Pr\left( Z > \frac{\ln(250) - \tilde{\mu}}{\tilde{\sigma}}\right) = 0.1650671,\] where \(Z \sim \mathcal{N}(0,1)\). We find that the reinsurer is involved in about 16.51\% of claims. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{3} \tightlist \item Given \(X \sim \mathcal{Pa}(\alpha,\lambda)\), define \(W = X- d | X > d\).\\ \[\begin{aligned} \Pr[W \le x ] &= \Pr[X - d \le x | X > d] \\ &= \frac{\Pr[d < X \le x + d]}{\Pr[X > d]}\\ &= \frac{F_X(x+d) - F_X(d)}{1-F_X(d)} \\ &= 1 - \left( \frac{\lambda + d}{\lambda + x + d} \right)^\alpha. \end{aligned}\] \end{enumerate} Hence, \(W \sim \mathcal{Pa}(\alpha, \lambda + d).\) \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{4} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item Let \(X\) be the cost of repair and \(Y\) be the cost of claims. Suppose that \(X \sim \mathcal{Pa}(\alpha, \lambda)\). It follows that \[ Y = X - d | X > d, \] where \(d = 100\). Moreover, \(Y \sim \mathcal{Pa}(\alpha, \lambda + d).\) It should be emphasised that the given information implies that \[ \mathrm{E}[Y] = 200, \quad \mathrm{Var}[Y] = 250^2.\] Letting \(\lambda + d = \phi\) results in \(Y \sim \mathcal{Pa}(\alpha, \phi).\) \end{enumerate} \end{enumerate} By the method of moments, the MMEs of \(\alpha\) and \(\phi\) (and hence \(\lambda\)) are \(\tilde{\alpha} = 5.5555556\), \(\tilde{\phi} = 911.1111111\) and \(\tilde{\lambda} = \tilde{\phi} - d = 811.1111111\). A quick numerical check of these estimates in R is sketched below. 
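The sketch below is not part of the original solution; it simply restates the moment-matching equations for \(Y \sim \mathcal{Pa}(\alpha, \phi)\) with sample mean 200 and standard deviation 250, using the Pareto moment formulas quoted earlier in these notes:

\begin{verbatim}
# Moment estimates for Y ~ Pa(alpha, phi) with mean 200 and sd 250
xbar <- 200; s <- 250; d <- 100
r      <- s^2 / xbar^2                 # Var[Y]/E[Y]^2 = alpha/(alpha - 2)
alpha  <- 2 * r / (r - 1)              # about 5.5556
phi    <- xbar * (alpha - 1)           # about 911.11
lambda <- phi - d                      # about 811.11
# round-trip check: these should return 200 and 250^2
c(phi / (alpha - 1), alpha * phi^2 / ((alpha - 1)^2 * (alpha - 2)))
\end{verbatim}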
\begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item The proportion of accidents that do not result in a claim being made is \end{enumerate} \[\Pr(X < d) = 1 - \left(\frac{\tilde{\lambda}}{\tilde{\lambda} + d}\right)^{\tilde{\alpha}} \approx 0.4758.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item We know that the claim amount before excess of loss reinsurance is \(Y \sim \mathcal{Pa}(\alpha, \phi).\) Under the excess of loss reinsurance contract, the expected value of the amount paid out by the insurer is \(\mathrm{E}[\min(Y,M)],\) which can be calculated as follows: \end{enumerate} \[\begin{aligned} \mathrm{E}[Y_I] &= \mathrm{E}[\min(Y,M)] \\ &= \mathrm{E}[Y] - \int_0^\infty y \cdot f_Y(y+M) \, dy \\ &= \mathrm{E}[Y] - \int_0^\infty y \cdot \frac{\alpha \phi^\alpha}{(\phi + y + M)^{\alpha + 1}} \, dy \\ &= \mathrm{E}[Y] - \left(\frac{\phi}{\phi+M} \right)^\alpha\int_0^\infty y \cdot \frac{\alpha (\phi + M)^\alpha}{(\phi + y + M)^{\alpha + 1}} \, dy. \end{aligned}\] The last integral defines the mean of the Pareto random variable with parameters \(\alpha\) and \(\phi + M\) and so equals \(\frac{\phi + M}{\alpha - 1}\). After simplifying, we have \[\begin{aligned} \mathrm{E}[Y_I] &= \mathrm{E}[Y] - \left(\frac{\phi}{\phi+M} \right)^\alpha \left(\frac{\phi + M}{\alpha - 1}\right) = \left(\frac{\lambda + d}{\alpha - 1} \right)\left(1 - \left( \frac{\lambda + d}{\lambda + d + M} \right)^{\alpha - 1} \right). \end{aligned}\] Substituting all the parameter values and \(\mathrm{E}[Y_I] = 160\), we solve for \(M\), which results in \[M = \frac{\tilde{\lambda} + d}{\left(1- \frac{(\tilde{\alpha} -1)\mathrm{E}[Y_I]}{(\tilde{\lambda} + d)} \right)^{ \frac{1}{\tilde{\alpha} - 1} }} - (\tilde{\lambda} + d ) = 386.0795.\] \hypertarget{tutorial-3}{% \section{Tutorial 3}\label{tutorial-3}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The aggregate claims \(S\) have a compound Poisson distribution with Poisson parameter \(\lambda = 20\) and claim amounts have a \(\mathcal{G}(2,1)\) distribution. Find the coefficient of skewness of the aggregate claim amount \(Sk[S]\). \item Suppose that \(S_1\) and \(S_2\) are independent compound Poisson random variables with Poisson parameters \(\lambda_1 = 10\) and \(\lambda_2 = 30\) and the claim sizes for \(S_i\) are exponentially distributed with mean \(\mu_i\) where \(\mu_1 = 1\) and \(\mu_2 = 2\), respectively. Find the distribution of the random sum \(S = S_1 + S_2\). \item The number of claims in one time period has a negative binomial distribution \(\mathcal{NB}(k, p)\) with \(k = 1\) and claim sizes have an exponential distribution with mean \(\mu\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Use the moment generating function to obtain the distribution of the aggregate claim amount \(S\). \item Find the mean and variance of the aggregate claims for this time period. \end{enumerate} \item A portfolio consists of 100 car insurance policies. 60\% of the policies have a deductible of 10 and the remaining policies have a deductible of 0. The insurance policy pays the amount of damage in excess of the deductible subject to a maximum of 125 per accident. Assume that \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item The number of accidents per year \textbf{per policy} has a Poisson distribution with mean 0.02; and \item The amount of damage has the distribution: \[Pr(X = 50) = 1/3, Pr(X = 150) = 1/3, Pr(X = 200) = 1/3.\] \end{enumerate} Find the expected insurer's payout.
\item The number of claims \(N\) in a fixed time period has the following distribution: \[Pr(N = 0) = 0.5, Pr(N = 1) = 0.3, Pr(N = 2) = 0.1, \text{ and } Pr(N = 3) = 0.1.\] The losses are uniformly distributed on the interval \((0,100)\). Assume that the number of claims and the amount of losses are mutually independent. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Find the mean and variance of the aggregate claims for this fixed time period. \item Suppose that a policy deductible of 20 is in place. Find the expected insurer's payout. \end{enumerate} \end{enumerate} \hypertarget{solutions-to-tutorial-3}{% \section{Solutions to Tutorial 3}\label{solutions-to-tutorial-3}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item The aggregate claims \(S\) have a compound Poisson distribution, \(S \sim \mathcal{CP}(\lambda, F_X)\), with \(\lambda = 20\) and \(X \sim \mathcal{G}(\alpha,\beta) = \mathcal{G}(2,1)\). From \(\mathrm{Sk}[S] = \frac{\lambda m_3}{(\lambda m_2)^{3/2}}\) with \(m_r = \mathrm{E}[X^r]\), we also have \end{enumerate} \end{enumerate} \[\mathrm{E}[X^r] = \frac{1}{\beta^r} \frac{\Gamma(\alpha + r)}{\Gamma(\alpha )}, \quad r > 0.\] Note that if \(\alpha\) is an integer, then \(\Gamma(\alpha) = (\alpha-1)!\). Substituting all the parameter values, we have \(\mathrm{E}[X^2] = 6\), \(\mathrm{E}[X^3] = 24\) and \(\mathrm{Sk}[S] = 0.3651484\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item \(S_1 \sim \mathcal{CP}(\lambda_1, F_1)\) and \(S_2 \sim \mathcal{CP}(\lambda_2, F_2)\) are independent compound Poisson random variables with exponentially distributed claim sizes for \(S_i\), \(\text{Exp}(1/\mu_i)\), where \(\mu_1 = 1\) and \(\mu_2 = 2\). By the additivity of independent compound Poisson distributions, the distribution of \(S = S_1 + S_2\) is \(\mathcal{CP}(\lambda_1 + \lambda_2, F_X) = \mathcal{CP}(40, F_X)\), where \[ F_X(x) = \frac{10}{40} F_1(x) + \frac{30}{40} F_2(x) = \frac{1}{4}(1 - e^{-x}) + \frac{3}{4} (1 - e^{-x/2}).\] This is a mixture of exponential distributions with means 1 and 2. \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item From \(N \sim \mathcal{NB}(1, p)\) and \(X \sim \text{Exp}(\mu)\) with mean \(\mu\), the moment generating functions of \(N\) and \(X\) are \[ M_N(t) = \frac{p}{1-q e^t}, \quad M_X(t) = \frac{1}{1 - \mu t},\] where \(q = 1- p\). \end{enumerate} Therefore, \[\begin{aligned} M_S(t) &= M_N(\log(M_X(t))) \\ &= \frac{p}{1-q M_X(t)} \\ &= p + q\frac{1}{1- \frac{\mu}{p}t} \end{aligned}.\] This distribution can be regarded as a mixture of a distribution with moment generating function 1 (a point mass at zero, with weight \(p\)) and a distribution with moment generating function \((1- \frac{\mu}{p}t)^{-1}\), i.e.~the moment generating function of an exponential random variable with mean \(\mu/p\) (with weight \(q\)). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \setcounter{enumii}{1} \tightlist \item Applying the properties of the moment generating function, \(M_S^{(k)}(0) = \mathrm{E}[S^k]\).
\end{enumerate} It follows that \[ \frac{d}{dt} M_S(t) = \frac{p q \mu}{(p - \mu t)^2}, \quad \mathrm{E}[S] = \frac{q \mu}{p}\] and \[ \frac{d^2}{dt^2} M_S(t) = \frac{2 p q \mu^2}{(p - \mu t)^3}, \quad \mathrm{E}[S^2] = \frac{2 q \mu^2}{p^2}.\] Therefore, \[\mathrm{Var}[S] = \mathrm{E}[S^2] - (\mathrm{E}[S])^2 = \frac{2 q \mu^2}{p^2} - \left(\frac{q \mu}{p}\right)^2 = \frac{q(2-q)\mu^2}{p^2}.\] Alternatively, the mean and variance of the aggregate claims for this time period can be calculated from the properties of a compound negative binomial distribution (with \(k = 1\)), \[\mathrm{E}[S] = \frac{q}{p} \mathrm{E}[X] = \frac{q \mu}{p},\] and \[\mathrm{Var}[S] = \frac{q}{p^2}(p \mathrm{E}[X^2] + q (\mathrm{E}[X])^2) = \frac{q}{p^2} (p (2\mu^2) + q \mu^2) = \frac{q(2-q)\mu^2}{p^2}.\] \item Let \(Y\) be the aggregate claims paid by the insurer for the 60 policies with a deductible of 10 and a policy limit of 125. Then, \(Y \sim \mathcal{CP}(60 \times 0.02, F_X)\), where \(X\) is the individual claim amount paid by the insurer. The distribution of \(X\) is \end{enumerate} \begin{longtable}[]{@{}ccc@{}} \toprule x & 40 & 125 \\ \midrule \endhead \(\Pr(X = x)\) & 1/3 & 2/3 \\ \bottomrule \end{longtable} Hence, \(\mathrm{E}[X] = 290/3\) and \(\mathrm{E}[Y] = 116\). Let \(U\) be the aggregate claims paid by the insurer for the 40 policies with a deductible of 0 and a policy limit of 125. Then, \(U \sim \mathcal{CP}(40 \times 0.02, F_{X'})\), where \(X'\) is the individual claim amount paid by the insurer. The distribution of \(X'\) is \begin{longtable}[]{@{}ccc@{}} \toprule x & 50 & 125 \\ \midrule \endhead \(\Pr(X' = x)\) & 1/3 & 2/3 \\ \bottomrule \end{longtable} Hence, \(\mathrm{E}[X'] = 100\) and \(\mathrm{E}[U] = 80\). The total expected claim payout of the insurer is \(116 + 80 = 196\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{4} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item The number of claims \(N\) in a fixed time period has the following distribution: \[\Pr(N = 0) = 0.5, \Pr(N = 1) = 0.3, \Pr(N = 2) = 0.1, \text{ and } \Pr(N = 3) = 0.1.\] Then, \(\mathrm{E}[N] = 0.8\) and \(\mathrm{Var}[N] = 0.96\). \end{enumerate} \end{enumerate} The losses are uniformly distributed, \(X \sim \mathcal{U}(0,100)\). Then \(\mathrm{E}[X] = 50\) and \(\mathrm{E}[X^2] = 3333.3333333\). The mean and variance of the aggregate claims for this fixed time period are \[ \mathrm{E}[S] = \mathrm{E}[N] \mathrm{E}[X] = 40 \] and \[ \mathrm{Var}[S] = \mathrm{E}[N] (\mathrm{E}[X^2] - (\mathrm{E}[X])^2 ) + \mathrm{Var}[N] (\mathrm{E}[X])^2 = 3066.6666667 .\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Suppose that a policy deductible of 20 is in place. Then \[\begin{aligned} \mathrm{E}[\max(0, X - 20)] &= \int_{20}^{100} (x - 20) \frac{1}{100} \, dx\\ &= 32. \end{aligned}\] Therefore, the expected insurer's payout is \[ \mathrm{E}[N]\mathrm{E}[\max(0, X - 20)] = (0.8)(32) = 25.6. \] \end{enumerate} \hypertarget{tutorial-4}{% \section{Tutorial 4}\label{tutorial-4}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Given \(X \sim \mathcal{G}(\alpha, \lambda)\), find the distribution of \(Y = kX\) for some positive \(k\). Repeat the same question if \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \(X \sim \mathcal{G}(\alpha, \lambda)\), \item \(X \sim \mathcal{LN}(\mu, \sigma^2)\), and \item \(X \sim \mathcal{Pa}(\alpha, \lambda)\).
\end{enumerate} \item Aggregate claims from a risk in a given time have a compound Poisson distribution with Poisson parameter \(\lambda = 200\) and an individual claim amount distribution that is an exponential distribution with mean 500. The insurer has effected proportional reinsurance with proportion retained \(\alpha = 0.8\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Find the distribution of \(S_I\) and \(S_R\) and their means and variances. \item Compare the variances \(Var[S_I] + Var[S_R]\) and \(Var[S]\). Comment on the results obtained. \end{enumerate} \item Show that if \(N \sim \mathcal{NB}(k,p)\) represents the distribution of claim numbers, then the number of non-zero claims for the reinsurer is \[N_R \sim \mathcal{NB}(k,p^*),\] where \(p^* = p/(p + (1-p)\pi_M)\) and \(\pi_M = Pr(X > M)\) for the claim size random variable \(X\). \item The number of claims \(N\) in a fixed time period has the following distribution: \[Pr(N = 0) = 0.5, Pr(N = 1) = 0.3, Pr(N = 2) = 0.1, \text{ and } Pr(N = 3) = 0.1.\] The losses have a Pareto distribution \(Pa(4,1)\). Assume that the number of claims and the amount of losses are mutually independent. Find the mean and variance of the aggregate claims for this fixed time period. \end{enumerate} \hypertarget{solutions-to-tutorial-4}{% \section{Solutions to Tutorial 4}\label{solutions-to-tutorial-4}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item Given \(X \sim \mathcal{G}(\alpha, \lambda)\), the distribution of \(Y = kX\) can be found by using the moment generating function. \end{enumerate} \end{enumerate} \[\begin{aligned} M_Y(t) &= M_{kX}(t) = \mathrm{E}[e^{(tk)X}] = M_X(kt) \\ &= \left(\frac{\lambda}{\lambda - kt}\right)^\alpha \\ &= \left(\frac{\lambda/k}{\lambda/k - t}\right)^\alpha \end{aligned}. \] Therefore, \(Y \sim \mathcal{G}(\alpha, \lambda/k)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Note that for \(X \sim \mathcal{LN}(\mu, \sigma^2)\), \[F_X(x) = \Pr(X \le x) = \Pr(\ln(X) \le \ln(x)) = \Pr(Z \le \frac{\ln(x) - \mu}{\sigma}),\] \end{enumerate} where \(Z \sim N(0,1)\). Consider \[\begin{aligned} F_Y(x) &= \Pr(Y \le x) = \Pr(kX \le x) \\ &= \Pr(\ln(X) \le \ln(x/k)) \\ &= \Pr(\frac{\ln(X) - \mu}{\sigma} \le \frac{\ln(x) - \ln(k) - \mu}{\sigma} ) \\ &= \Pr(Z \le \frac{\ln(x) - \ln(k) - \mu}{\sigma} ). \end{aligned} \] It follows that \(Y = kX \sim \mathcal{LN}(\mu + \ln(k), \sigma^2)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Given \(X \sim \mathcal{Pa}(\alpha, \lambda)\), \(F_X(x) = 1 - \left(\frac{\lambda}{\lambda + x}\right)^\alpha\). \end{enumerate} Therefore, \[\begin{aligned} F_Y(x) &= \Pr(Y \le x) = \Pr(kX \le x) = \Pr(X \le x/k) \\ &= 1 - \left(\frac{\lambda}{\lambda + (x/k)}\right)^\alpha \\ &= 1 - \left(\frac{\lambda k}{\lambda k + x}\right)^\alpha. \end{aligned}\] Hence, \(Y = kX \sim \mathcal{Pa}(\alpha, k\lambda)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item Given \(X \sim \text{Exp}(\beta)\), \(kX \sim \text{Exp}(\beta/k)\), for \(k > 0\).
\begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item From \(X \sim \text{Exp}(1/500)\), the aggregate claim amount paid by the direct insurer \(S_I\) is \[S_I \sim \mathcal{CP}(200,F_Y), \quad Y= \alpha X \sim \text{Exp}(\frac{1}{ (0.8) (500) })\] and the aggregate claim amount paid by the reinsurer \(S_R\) is \[S_R \sim \mathcal{CP}(200,F_Z), \quad Z= (1-\alpha) X \sim \text{Exp}(\frac{1}{ (0.2) (500) })\] \end{enumerate} \end{enumerate} Moreover, \[ \mathrm{E}[Y] = 400, \quad \mathrm{E}[Y^2] = \ensuremath{3.2\times 10^{5}}, \] \[ \mathrm{E}[Z] = 100, \quad \mathrm{E}[Z^2] = \ensuremath{2\times 10^{4}}. \] The expected total claim payout of the insurer and its variance are \[ \mathrm{E}[S_I] = \ensuremath{8\times 10^{4}}, \quad \mathrm{Var}[S_I] = \ensuremath{6.4\times 10^{7}}.\] The expected total claim payout of the reinsurer and its variance are \[ \mathrm{E}[S_R] = \ensuremath{2\times 10^{4}}, \quad \mathrm{Var}[S_R] = \ensuremath{4\times 10^{6}}.\] Alternatively, we know that \[ \mathrm{E}[S] = (200)\mathrm{E}[X] = \ensuremath{10^{5}}, \quad \mathrm{Var}[S] = (200)\mathrm{E}[X^2] = \ensuremath{10^{8}}.\] Hence, \[ \mathrm{E}[S_I] = \mathrm{E}[\alpha S] = (0.8)\mathrm{E}[S] = \ensuremath{8\times 10^{4}}, \quad \mathrm{Var}[S_I] = \mathrm{Var}[\alpha S] = (0.8)^2\mathrm{Var}[S] = \ensuremath{6.4\times 10^{7}}.\] \[ \mathrm{E}[S_R] = \mathrm{E}[(1-\alpha) S] = (0.2)\mathrm{E}[S] = \ensuremath{2\times 10^{4}}, \quad \mathrm{Var}[S_R] = \mathrm{Var}[(1-\alpha) S] = (0.2)^2\mathrm{Var}[S] = \ensuremath{4\times 10^{6}}.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item It follows that \(\mathrm{Var}[S_I] + \mathrm{Var}[S_R] = \ensuremath{6.8\times 10^{7}} < \ensuremath{10^{8}} = \mathrm{Var}[S]\). After effecting proportional reinsurance, there is a reduction in the variability of the amount paid out by the insurer on claims. \end{enumerate} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Given \(N \sim \mathcal{NB}(k,p)\), the probability generating function of \(N\) is \[P_N(r) = \left(\frac{p}{1- qr}\right)^k,\] where \(q = 1 - p\). \end{enumerate} Define the indicator random variables \(\{I_j\}_{j=1}^\infty\), where \[\begin{aligned} I_j = \begin{cases} 1 &\text{if } X_j > M\\ 0 &\text{if } X_j \le M. \end{cases}\end{aligned}\] Therefore, the number of non-zero claims for the reinsurer is \[N_R = \sum_{j= 1}^{N} I_j.\] The variable \(N_R\) has a compound distribution with its probability generating function \[P_{N_R}(r) = P_N[P_I(r)],\] where \(P_I\) is the probability generating function of the indicator random variable. It can be shown that \[P_I(r) = 1 - \pi_M + \pi_M r,\] where \(\pi_M = \Pr(I_j = 1) = \Pr(X_j > M) = 1 - F(M)\). Therefore, \[P_{N_R}(r) = P_N[P_I(r)] = \left(\frac{p}{1- q(1 - \pi_M + \pi_M r)}\right)^k.\] Let \(p^* = \frac{p}{p + q \pi_M}\) and \(q^* = 1 - p^* = \frac{q \pi_M}{p + q \pi_M}\). By dividing both the numerator and the denominator above by \(p + q \pi_M\), we have \[P_{N_R}(r) = \left(\frac{p^*}{1 - q^* r}\right)^k,\] and \(N_R \sim \mathcal{NB}(k,p^*).\) \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{3} \tightlist \item The number of claims \(N\) in a fixed time period has the following distribution: \[\Pr(N = 0) = 0.5, \Pr(N = 1) = 0.3, \Pr(N = 2) = 0.1, \text{ and } \Pr(N = 3) = 0.1.\] Then, \(\mathrm{E}[N] = 0.8\) and \(\mathrm{Var}[N] = 0.96\). \end{enumerate} The losses have a Pareto distribution \(X \sim Pa(4,1)\).
Therefore, \(\mathrm{E}[X] = 0.3333333\) and \(\mathrm{E}[X^2] = 0.3333333\). Hence, the mean and variance of the aggregate claim amount for this fixed time period are \[ \mathrm{E}[S] = \mathrm{E}[N] \mathrm{E}[X] = 0.2666667 \] and \[ \mathrm{Var}[S] = \mathrm{E}[N] (\mathrm{E}[X^2] - (\mathrm{E}[X])^2 ) + \mathrm{Var}[N] (\mathrm{E}[X])^2 = 0.2844444 .\] \hypertarget{tutorial-5}{% \section{Tutorial 5}\label{tutorial-5}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Aggregate claims from a risk in a given time have a compound Poisson distribution with Poisson parameter \(10\) and an individual claim amount distribution that is a Pareto distribution \(Pa(3,2000)\). The insurer sets a premium using the expected value principle with relative security loading of 0.15. The insurer is considering effecting excess of loss reinsurance with retention limit \(1200\). The reinsurance premium would be calculated using the same principle with relative security loading of 0.2. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate the insurer's expected profit before reinsurance. \item Under excess of loss reinsurance, the insurer's profit is defined to be the premium charged by the insurer, less the reinsurance premium and less the claims paid by the insurer (also called net of reinsurance). Calculate the insurer's expected profit after effecting excess of loss reinsurance. \item Comment on these results. \end{enumerate} \item Aggregate claims from a risk in a given time have a compound Poisson distribution with Poisson parameter \(80\) and an individual claim amount distribution that is an exponential distribution with mean 10. The insurer has effected excess of loss reinsurance with retention level \(M = 20\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Find the distribution of \(S_I\) and \(S_R\) and their means and variances. \item Compare the variances \(Var[S_I] + Var[S_R]\) and \(Var[S]\). Comment on the results obtained. \end{enumerate} \item Aggregate claims \(S\) have a compound Poisson distribution \(\mathcal{CP}(\lambda,F_X)\) where \(\lambda = 0.5\) and individual claim amounts \(X\) are either 1, 2 or 3 with probabilities 1/2, 1/4 and 1/4, respectively. Calculate \(g_r\) for \(r = 0,1,\ldots, 10\). \item (Requires the use of Excel or R) Suppose \(\{ S(t) \}_{t \ge 0}\) is a compound Poisson process with Poisson parameter 1 and individual claim distribution that is an exponential distribution \(Exp(1)\) so that for each fixed \(t\), \(S(t) \sim \mathcal{CP}(t, F_X)\) where \(F_X(x) = 1 - e^{-x}\), for \(x > 0\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate the mean, variance and coefficient of skewness of \(S(1)\). \item Use (a) the normal approximation and (b) the translated Gamma approximation to approximate the values of \(Pr(S(10) > 20)\). \item Use (a) the normal approximation and (b) the translated Gamma approximation to approximate the values of \(Pr(S(100) > 120)\). \end{enumerate} \end{enumerate} \hypertarget{solutions-to-tutorial-5}{% \section{Solutions to Tutorial 5}\label{solutions-to-tutorial-5}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item Aggregate claims \(S\) from a risk in a given time have a compound Poisson distribution \(S \sim \mathcal{CP}(10,F_X)\) where \(X \sim \mathcal{Pa}(\alpha,\beta) = \mathcal{Pa}(3,2000)\).
Using the expected value principle, the insurer sets a premium before reinsurance \(P = (1 + \theta)\mathrm{E}[S]\) with relative security loading of \(\theta = 0.15\). The insurer's expected profit before reinsurance can be obtained from \[\begin{aligned} \mathrm{E}[\text{Profit}] &= \mathrm{E}[P-S] \\ &= P- \mathrm{E}[S] \\ &= (1 + \theta)\mathrm{E}[S] - \mathrm{E}[S] \\ &= \theta\mathrm{E}[S]. \end{aligned}\] \end{enumerate} \end{enumerate} Note that \(\mathrm{E}[S] = \lambda \mathrm{E}[X] = (10)(1000) = \ensuremath{10^{4}}\). Hence \(\mathrm{E}[\text{Profit}] = 1500\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item The insurer is considering effecting excess of loss reinsurance with retention limit \(M = 1200\). \end{enumerate} The reinsurance premium would be calculated using the same principle with relative security loading of \(\theta_R = 0.2\). The insurer's expected profit after reinsurance can be obtained from (in terms of \(\mathrm{E}[S]\) and \(\mathrm{E}[S_R]\)) \[\begin{aligned} \mathrm{E}[\text{Profit (after reinsurance)}] &= \mathrm{E}[P- P_R - S_I] \\ &= P- P_R - \mathrm{E}[S_I] \\ &= P- P_R - \mathrm{E}[S - S_R] \\ &= (1 + \theta)\mathrm{E}[S] - (1 + \theta_R)\mathrm{E}[S_R] - \mathrm{E}[S] + \mathrm{E}[S_R]\\ &= \theta\mathrm{E}[S] - \theta_R\mathrm{E}[S_R] \end{aligned},\]\\ where \(P_R\) is the reinsurance premium. The total claim amount paid by the reinsurer is \(S_R \sim \mathcal{CP}(10 \pi_M,F_W)\), where \(\pi_M = \Pr(X > M)\) is the proportion of claims involving the reinsurer and \(W = X - M|X > M \sim \mathcal{Pa}(\alpha,\beta + M) = \mathcal{Pa}(3,3200)\). It follows that \[\mathrm{E}[S_R] = (10)(0.2441406)\mathrm{E}[W] = (10)(0.2441406)(1600) = 3906.25.\] Substituting this into the above equation yields \[ \mathrm{E}[\text{Profit (after reinsurance)}] = 718.75.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Consider the variance of the profit, firstly without reinsurance. The insurer's profit is equal to premiums charged less claims paid. Since only the claims are random, the variance of the profit (before reinsurance) is the same as the variance of the total claims. \[ \mathrm{Var}[\text{Profit (without reinsurance)}] = \ensuremath{4\times 10^{7}}.\] With reinsurance, the insurer's profit is equal to premiums charged less the reinsurance premium less the net claims paid. So if the insurer's aggregate net claims paid are \(S_I\), then the variance of the profit is equal to the variance of \(S_I\). \end{enumerate} \[ \mathrm{Var}[\text{Profit (with reinsurance)}] = \ensuremath{5.625\times 10^{6}}.\] \begin{itemize} \item The percentage reduction in the expected profit is 52.08\(\%\). \item The percentage reduction in the standard deviation of the profit is 62.5\(\%\). \end{itemize} Note that here the reinsurance has a greater effect on the variability of the claims than on their average, i.e.~the standard deviation is reduced by a greater percentage than the mean. This is very often the case for excess-of-loss reinsurance. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item Given \(X \sim \text{Exp}(\beta)\) with \(\beta = 1/10\), \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \tightlist \item The aggregate claim amount paid by the direct insurer \(S_I\) is \[S_I \sim \mathcal{CP}(80,F_Y), \quad Y= \min(X,M)\] The distribution of \(Y\) and its first two moments can be found as shown in the lecture notes (Section 3.4, Excess of Loss Reinsurance) or by using the moment generating function.
Here we will use the moment generating function. \end{enumerate} \end{enumerate} From \(X \sim \mathrm{Exp}(\beta) = \mathrm{Exp}(1/\mu)\) (note that \(\mu\) is the expected value of \(X\)), \[ M_Y(t) = \frac{1}{1- \mu t} ( 1 - \mu p t e^{Mt}),\] where \(p = \Pr(X > M) = e^{-\beta M} = e^{-M/\mu}\) is the proportion of claims which involve the reinsurer (see the \href{https://math.stackexchange.com/questions/3033272/finding-the-moment-generating-function-of-miny-1}{link} for more details). It follows that \[\begin{aligned} \mathrm{E}[Y] &= M_Y'(0) = (1 - p)\mu = 8.6466472 \\ \mathrm{E}[Y^2] &= M_Y''(0) = -2 \mu (M p + (-1 + p) \mu) = 118.7988301. \end{aligned} \] The expected total claim payout of the insurer and its variance are \[ \mathrm{E}[S_I] = \lambda \mathrm{E}[Y] = 691.7317734, \quad \mathrm{Var}[S_I] = \lambda \mathrm{E}[Y^2] = 9503.9064046.\] The aggregate claim amount paid by the reinsurer \(S_R\) is \[S_R \sim \mathcal{CP}(80,F_Z), \quad Z= \max(0,X-M)\] It follows that \[\begin{aligned} \mathrm{E}[Z] &= p\mu = 1.3533528 \\ \mathrm{E}[Z^2] &= 2 \mu^2 p = 27.0670566. \end{aligned} \] The expected total claim payout of the reinsurer and its variance are \[ \mathrm{E}[S_R] = \lambda \mathrm{E}[Z] = 108.2682266, \quad \mathrm{Var}[S_R] = \lambda \mathrm{E}[Z^2] = 2165.3645318.\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item It follows that \(\mathrm{Var}[S_I] + \mathrm{Var}[S_R] = \ensuremath{1.1669271\times 10^{4}} < \ensuremath{1.6\times 10^{4}} = \mathrm{Var}[S]\). After the excess of loss reinsurance arrangement, there is a reduction in the variability of the amount paid out by the insurer on claims. \end{enumerate} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item From Panjer's recursion formula, \[ g_r = \frac{\lambda}{r}\sum_{j=1}^r j f_j g_{r-j}.\] Therefore, \[\begin{aligned} g_0 &= e^{-\lambda} = 0.606531 \\ g_1 &= \lambda f_1 g_0 = 0.151633 \\ g_2 &= (\lambda/2)(f_1 g_1 + 2 f_2 g_0) = 0.09477 \\ g_3 &= (\lambda/3)(f_1 g_2 + 2 f_2 g_1 + 3 f_3 g_0) = 0.09635 \\ g_4 &= (\lambda/4)(f_1 g_3 + 2 f_2 g_2 + 3 f_3 g_1) = 0.026161 \\ &\vdots \\ g_r &= (\lambda/r)(f_1 g_{r-1} + 2 f_2 g_{r-2} + 3 f_3 g_{r-3}). \end{aligned} \] Also \(g_5\) to \(g_{10}\) are \(0.013233, 0.007663, 0.002148, \ensuremath{9.27\times 10^{-4}}, \ensuremath{4.05\times 10^{-4}}, \ensuremath{1.14\times 10^{-4}}\), respectively. \end{enumerate} \hypertarget{tutorial-6}{% \section{Tutorial 6}\label{tutorial-6}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Suppose \(S \sim \mathcal{CP}(\lambda, F_X)\) where individual claim amounts are distributed on the positive integers and \(\lambda = 0.5\). Individual claim amounts \(X\) are either 1 or 2 with probabilities 2/3 and 1/3, respectively. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \protect\hypertarget{QuestionPanjer}{}{}Write down an expression for \(E[S]\) in terms of \(\lambda\) and the mean of \(X\). \item Use Panjer's recursion to show that \[g_r = \frac{1}{3r} (g_{r-1} + g_{r-2}), \quad r = 2,3,\ldots.\] \item Calculate \(g_r\) for \(r = 0,1,2, 3, 4\). \item Verify that \(\sum_{r=0}^4 g_r > 0.995\). \item Compare \(\sum_{r=0}^4 r g_r\) with the exact mean of \(S\) computed by using \protect\hyperlink{QuestionPanjer}{part 1}. \item Comment on the results. \end{enumerate} \item Consider a portfolio of 1000 life insurance policies over a one-year time period. For each policy at most one claim can occur in the year.
The probability that a claim occurs is 0.04. Claim amounts are distributed \(X \sim Exp(1/2)\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate the mean and variance of the aggregate claims. \item Calculate the relative security loading \(\theta_1\) such that the probability of a profit on this portfolio is 0.95. \item Suppose that the insurer imposes a deductible of 1. Calculate the mean and the variance of the aggregate claim paid by the insurer. Also calculate the relative security loading \(\theta_2\) such that the probability of a profit on this portfolio is 0.95. \item Comment on the difference between \(\theta_1\) and \(\theta_2\). \end{enumerate} \item A portfolio consists of 5000 one-year term life insurance policies with benefit amounts as shown in the table below. \begin{longtable}[]{@{}lcc@{}} \toprule Benefit amount & 1 & 2 \\ \midrule \endhead Number of policies & 4000 & 1000 \\ \bottomrule \end{longtable} The policyholders can be assumed to be independent and the probability that a claim occurs is 0.03. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate the mean and variance of the aggregate claims. \item Use the normal approximation to compute \(Pr(S > 200)\). \item The insurer aims to reduce the size of \(Pr(S > 200)\). The insurer arranges excess of loss reinsurance with retention 1.5. The reinsurer calculates the reinsurance premium \(P_R\) by using a relative security loading of 20\%. Calculate the reinsurance premium. \item After reinsurance, calculate the mean and variance of the aggregate claims paid out by the insurer, i.e.~\(E[S_I]\) and \(Var[S_I]\). \item Calculate \(Pr(S_I + P_R > 200)\). \item Comment on the results. \end{enumerate} \item An insurance company issues travel insurance policies. There are two types of claims with a maximum of one claim per policy. Type I claims (delay): claim amounts follow an exponential distribution with parameter \(\lambda = 0.002\). Type II claims (flight cancellation): claim amounts follow a uniform distribution \(U(20{,}000, 50{,}000)\). Suppose that 10\% of policies result in a claim, 80\% of which are Type I and the remainder are Type II. Calculate the premium charged for each policy. \end{enumerate} \hypertarget{tutorial-7}{% \section{Tutorial 7}\label{tutorial-7}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item An insurer has initial surplus \(u\) of 5.5 (in suitable units) and receives premium payments at a rate of 3 per year. Suppose claims from a portfolio of insurance over the first two years are as follows: \begin{longtable}[]{@{}lccc@{}} \toprule Time (years) & 0.3 & 0.8 & 1.5 \\ \midrule \endhead Amount & 4 & 6 & 2 \\ \bottomrule \end{longtable} Plot the surplus process and determine whether ruin occurs within the first two years in each of the following cases: \begin{enumerate} \def\labelenumiii{\arabic{enumiii}.} \item Ruin is checked continuously. \item Ruin is checked only at the end of each year. \end{enumerate} \item Suppose that the insurer has arranged excess of loss reinsurance with retention limit 3.5. The reinsurance premium is 1 per year to be paid continuously. Plot the surplus process and determine whether ruin occurs within the first two years in each of the following cases: \begin{enumerate} \def\labelenumiii{\arabic{enumiii}.} \item Ruin is checked continuously. \item Ruin is checked only at the end of each year. \end{enumerate} \item Comment on the results.
\end{enumerate} \item The aggregate claims process for a risk is compound Poisson with Poisson parameter 0.1 per year. Individual claim amounts \(X\) have the following distribution: \begin{longtable}[]{@{}cccc@{}} \toprule \(x\) & 50 & 75 & 120 \\ \midrule \endhead \(Pr(X = x)\) & 0.7 & 0.25 & 0.05 \\ \bottomrule \end{longtable} The insurer's initial surplus is 100 (in suitable units) and the insurer calculate the premium using a relative security loading of 10\% on the expected amount of annual aggregate claim at the beginning of each year. Calculate the probability that the insurer's surplus at time 2 will be negative. \item The aggregate claims process for a risk is compound Poisson with Poisson parameter 0.1 per year. Individual claim amounts \(X\) have the following distribution: \begin{longtable}[]{@{}ccc@{}} \toprule \(x\) & 1 & 2 \\ \midrule \endhead \(Pr(X = x)\) & 0.7 & 0.3 \\ \bottomrule \end{longtable} The insurer's initial surplus is 0.3 (in suitable units) and the premium rate is 0.4 per year, received continuously. Calculate the following probabilities of ruin. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \(\psi(0.3,1).\) \item \(\psi(0.3,2).\) \end{enumerate} \end{enumerate} \hypertarget{tutorial-8}{% \section{Tutorial 8}\label{tutorial-8}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item The table below gives the payments (in 000s THB) in cumulative form in successive development years in respect of a motor insurance portfolio. All claims are assumed to be fully settled by the end of development year 4. Use the chain ladder method to estimate the amount the insurer will pay in the calendar years \(2018, 2019, 2020 ,2021\). \begin{longtable}[]{@{}lllllll@{}} \toprule \endhead & & Development year & & & & \\ & & 0 & 1 & 2 & 3 & 4 \\ & 2013 & 750 & 768 & 844 & 929 & 1072 \\ Accident & 2014 & 820 & 876 & 946 & 1041 & \\ Year & 2015 & 960 & 997 & 1096 & & \\ & 2016 & 1040 & 1087 & & & \\ & 2017 & 1180 & & & & \\ \bottomrule \end{longtable} \item The table below shows the claims payments (in 000s THB) in cumulative form for a portfolio of insurance policies. All claims are assumed to be fully settled by the end of development year 4 and the payments are made at the middle of each calendar year. The past rates of inflation over the 12 months up to the middle of the given year are as follows: \begin{longtable}[]{@{}cc@{}} \toprule \endhead 2014 & 5\% \\ 2015 & 6\% \\ 2016 & 7\% \\ 2017 & 5\% \\ \bottomrule \end{longtable} The future rate of inflation from mid-2017 is assumed to be 10\% per year. \begin{longtable}[]{@{}lllllll@{}} \toprule \endhead & & Development year & & & & \\ & & 0 & 1 & 2 & 3 & 4 \\ & 2013 & 880 & 988 & 1046 & 1065 & 1262 \\ Accident & 2014 & 940 & 1034 & 1091 & 1095 & \\ Year & 2015 & 1060 & 1161 & 1229 & & \\ & 2016 & 1120 & 1221 & & & \\ & 2017 & 1240 & & & & \\ \bottomrule \end{longtable} \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Use the inflation-adjusted chain ladder method to calculate the outstanding claims payments in future years. \item Using an interest rate of 7\% per year, calculate the outstanding claims reserve the insurer should have hold on 1 January 2018. \end{enumerate} \item The table below shows the cumulative claims payments and the cumulative number of claims (amounts appear above claim numbers) for a portfolio of insurance policies. All claims are assumed to be fully settled by the end of development year 5 and that the effects of claims-cost inflation have been removed from these data. 
Use the average cost per claim method to estimate the outstanding claims reserve which should be held at the end of 2017. \begin{longtable}[]{@{}llllllll@{}} \toprule \endhead & & Development year & & & & & \\ & & 0 & 1 & 2 & 3 & 4 & 5 \\ & 2012 & 2800 & 2954 & 3005 & 3275 & 3624 & 3895 \\ & & 420 & 440 & 453 & 493 & 551 & 591 \\ & 2013 & 3200 & 3379 & 3449 & 3760 & 4184 & \\ & & 460 & 478 & 490 & 533 & 591 & \\ Accident & 2014 & 3800 & 4004 & 4078 & 4454 & & \\ Year & & 500 & 525 & 531 & 580 & & \\ & 2015 & 4520 & 4749 & 4842 & & & \\ & & 520 & 549 & 558 & & & \\ & 2016 & 5340 & 5587 & & & & \\ & & 560 & 589 & & & & \\ & 2017 & 5840 & & & & & \\ & & 570 & & & & & \\ \bottomrule \end{longtable} \item The table below shows the cumulative claims payments and the premium income \(P\) for a portfolio of insurance policies. All claims are assumed to be fully settled by the end of development year 4 and that the effects of claims-cost inflation have been removed from these data. Use the Bornhuetter-Ferguson method to estimate the total reserve required to meet the outstanding claims. You may assume that the ultimate loss ratio for accident years 2014-2017 will be 95\%. \begin{longtable}[]{@{}llllllll@{}} \toprule \endhead & & Development year & & & & & \\ & & 0 & 1 & 2 & 3 & 4 & \(P\) \\ & 2013 & 3597 & 4226 & 4547 & 4807 & 4989 & 5937 \\ Accident & 2014 & 4174 & 4697 & 5317 & 5497 & & 6122 \\ Year & 2015 & 4578 & 5082 & 5753 & & & 6221 \\ & 2016 & 4634 & 5343 & & & & 6365 \\ & 2017 & 5203 & & & & & 6510 \\ \bottomrule \end{longtable} \end{enumerate} \hypertarget{tutorial-9}{% \section{Tutorial 9}\label{tutorial-9}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item (Taken from Gray and Pitts) Suppose the number of claims which arise in a year on a group of policies is modelled as \(X|\lambda \sim Poisson(\lambda)\) and that we observe a total of 14 claims over a six year period. Suppose also we adopt a \(\mathcal{G}(6, 3)\) distribution as a prior distribution for \(\lambda\). \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item State the maximum likelihood estimate of \(\lambda\) and the prior mean. \item State the posterior distribution of \(\lambda\), find the mode of this distribution, and hence state the Bayesian estimate of \(\lambda\) under all or nothing loss. \item Note that if \(Y \sim \mathcal{G}(\alpha,\beta)\) and \(2\alpha\) is an integer, then \(2\beta Y \sim \mathcal{G}(\alpha,1/2)\); that is \(2\beta Y \sim \chi^2\) with \(2 \alpha\) degrees of freedom. \begin{enumerate} \def\labelenumiii{\arabic{enumiii}.} \item Using this fact, find the Bayesian estimate of \(\lambda\) under absolute error loss. \item Find an equal-tailed 95\% Bayesian interval estimate of \(\lambda\), that is an interval \((\lambda_L, \lambda_U)\), such that \(Pr( \lambda > \lambda_U | \underline{x}) = Pr( \lambda < \lambda_L | \underline{x}) = 0.025\). \end{enumerate} \item Find the credibility estimate (the Bayesian estimate under squared-error loss) of \(\lambda\). \end{enumerate} \item Recall that the data \(x_1, x_2, \ldots, x_n\) are available on \(X | \lambda\). Suppose we observe \(\sum x_i = 13\) when \(n = 50\). Based on the Poisson\(-\)Gamma model, the number of claims which arise in a year on a group of policies is modelled as \(X|\lambda \sim Poisson(\lambda)\) and the prior distribution on the claim rate \(\lambda\) is a \(\mathcal{G}(\alpha,\beta)\) distribution. 
\begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Calculate the value of the maximum likelihood estimate of \(\lambda\) \item Calculate the values of prior means and the prior variance in two cases (i) the prior is \(\mathcal{G}(6,30)\) and (ii) the prior is \(\mathcal{G}(2,10)\). Comment on the results. \item For those two prior distributions, calculate the posterior mean of \(\lambda\) given such data. \end{enumerate} \item Suppose the annual claims which arise under a risk, \(X\), in units of 1000THB, as \(X | \theta \sim \mathcal{N}(\theta,0.36)\). From experience with other business, an insurer adopt a \(\mathcal{N}(2, 0.04)\) prior for \(\theta\). The insurer observe claim amounts for the past seven years : \(2369, 2341, 2284, 2347, 2332, 2300, 2267\) THB. Using the normal\(-\)normal model: \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Find the credibility factor and the credibility premium for the risk. \item Find an equal-tailed 95\% Bayesian interval estimate of \(\theta\). \end{enumerate} \item Consider a collective of five separate risks from portfolios of general insurance policies, each of which has been in existence for at least ten years. The mean and variance of the aggregate claims adjusted for inflation over the past ten years are given in the table. Use EBCT Model 1 to calculate the credibility premiums for all five risks. \begin{longtable}[]{@{}ccc@{}} \toprule Risk & Within risk mean & Within risk variance \\ \midrule \endhead 1 & 138 & 259 \\ 2 & 98 & 179 \\ 3 & 120 & 239 \\ 4 & 104 & 168 \\ 5 & 119 & 185 \\ \bottomrule \end{longtable} \item Consider the aggregate claims in five successive years from comparable insurance policies (in units of 1000 THB). \protect\hypertarget{TableRisks}{}{\[TableRisks\]} \begin{longtable}[]{@{}llcllll@{}} \toprule \endhead & & Year \(j\) & & & & \\ & & 1 & 2 & 3 & 4 & 5 \\ Risk \(i\) & 1 & 68 & 65 & 77 & 76 & 74 \\ & 2 & 54 & 59 & 56 & 50 & 62 \\ & 3 & 81 & 95 & 83 & 82 & 89 \\ & 4 & 64 & 70 & 77 & 66 & 73 \\ \bottomrule \end{longtable} \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item Use the EBCT Model 1 to calculate the credibility premium for each risk \(i\). \item Explain why the credibility premiums depend almost entirely on the means for the individual risks. \end{enumerate} \end{enumerate} \hypertarget{project-2021-risk-theory}{% \chapter{Project 2021 Risk Theory}\label{project-2021-risk-theory}} \textbf{Instructions} \begin{itemize} \item Form a group of 2 members. \item You will receive R and Excel files for the project. Use R to generate an array of independent values \(\{\{Z_{i \,j} \}_{j=1}^5 \}_{i=1}^{1000}\), each from a \(U(0,1)\) distribution (details are given below in the question section). Remember to set the argument to set.seed(xxxx) where the argument, xxxx, of the set.seed function is the final four digits of the lower student identification number, e.g.~\textbf{if students' I.D. are 6105389 and 6105395, then use set.seed(5389)}. Then \textbf{use your preferred software}, EXCEL, R or other reasonable software to do the subsequent computation. In case that you choose to use EXCEL, you will need to copy and paste the array \(\{\{Z_{i \,j} \}_{j=1}^5 \}_{i=1}^{1000}\) generated from R into EXCEL by hands. \item Each group submission consists of (1) a report in PDF format, together with print-outs of your calculations (also in PDF) and (2) an R or EXCEL file (one file only) with answers to questions with required numerical answers (i.e.~excluding questions asking for comments). 
\item The report should not exceed four A4 pages in length (not including the print-outs of the calculations). The report, R (or EXCEL) file names should be in the format project6105389-6105395. \textbf{Submit the group report and R (or Excel) files in Canvas to the student account with the lower student ID}. \item \textbf{Failure to personalize your project with set.seed will result in a project mark of 0.} \item Columns or variable names in your print-out should be clearly labeled. \item When using EXCEL, R or any other reasonable software, to simulate values of the surplus using a translated gamma approximation, use the exact value of the parameters \(\alpha, \beta\) and \(k\) held by the software, not the rounded values presented in your report. For example, suppose you are using EXCEL to simulate \(S(1)\), that a \(U(0,1)\) value is held in cell A1 and your calculated values of \(\alpha, \beta\) and \(k\) are in cells A2, A3 and A4, respectively. Suppose further that the values of these parameters, to 5 decimal places, are 1.12345, 0.12345 and \$-\$123.12345. Note that EXCEL will be holding these values to a far greater degree of accuracy. To simulate a value of \(S(1)\), you should use the command = GAMMAINV(A1,A2,1/A3) + A4 not the command: = GAMMAINV(A1,1.12345,1/0.12345) \(-\) 123.12345. \end{itemize} \textbf{Questions} Suppose the aggregate claims process for a portfolio \(\{ S(t) \}_{t \ge 0}\) is a compound Poisson process with Poisson parameter 10 and individual claim amounts \(X\) have the following distribution \begin{longtable}[]{@{}ccccc@{}} \toprule \(x\) & 10 & 100 & 500 & 1000 \\ \midrule \endhead \(\Pr(X = x)\) & 0.5 & 0.3 & 0.15 & 0.05 \\ \bottomrule \end{longtable} In the simulations below, the seed number has been set to be \texttt{set.seed(5377)}. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Calculate the first three non-central moments of \(X\), i.e. \(\text{E}[X]\), \(\text{E}[X^2]\) and \(\text{E}[X^3]\). \end{enumerate} \[ \begin{aligned} \text{E}[X] &= 10 \cdot 0.5 + 100 \cdot 0.3 + 500 \cdot 0.15 + 1000 \cdot 0.05 = 160 \\ \text{E}[X^2] &= 100 \cdot 0.5 + \ensuremath{10^{4}} \cdot 0.3 + \ensuremath{2.5\times 10^{5}} \cdot 0.15 + \ensuremath{10^{6}} \cdot 0.05 = \ensuremath{9.055\times 10^{4}} \\ \text{E}[X^3] &= 1000 \cdot 0.5 + \ensuremath{10^{6}} \cdot 0.3 + \ensuremath{1.25\times 10^{8}} \cdot 0.15 + \ensuremath{10^{9}} \cdot 0.05 = \ensuremath{6.90505\times 10^{7}}. \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Calculate the mean, variance and coefficient of skewness of the aggregate claims at time \(t = 1\), i.e.~\(\text{E}[S(1)]\), \(\text{Var}[S(1)]\) and \(\text{Sk}[S(1)]\). \end{enumerate} \[ \begin{aligned} \text{E}[S(1)] &= \lambda \text{E}[X] = 1600 \\ \text{Var}[S(1)] &= \lambda \text{E}[X^2] = \ensuremath{9.055\times 10^{5}} \\ \text{Sk}[S(1)] &= \frac{\lambda \text{E}[X^3]}{(\text{Var}[S(1)])^{3/2}} = 0.801372. \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Calculate \(\text{E}[S(5)]\) and \(\text{Var}[S(5)]\). \end{enumerate} \[ \begin{aligned} \text{E}[S(5)] &= 5 \lambda \text{E}[X] = 8000 \\ \text{Var}[S(5)] &= 5 \lambda \text{E}[X^2] = \ensuremath{4.5275\times 10^{6}}. \\ \end{aligned} \] Assume that \(S(1)\) can be approximated by \(Y + k\) where \(Y \sim \mathcal{G}(\alpha,\beta)\) and \(k\) is a constant. 
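Before turning to the translated gamma approximation, the moment calculations in questions 1--3 can be cross-checked with a few lines of R such as the following (a sketch only; the variable names are ours):

\begin{verbatim}
# Sketch: moments of the claim size and aggregate claims (questions 1-3).
x <- c(10, 100, 500, 1000)        # claim amounts
p <- c(0.5, 0.3, 0.15, 0.05)      # probabilities
lambda <- 10                      # Poisson parameter per year

m1 <- sum(x * p)                  # E[X]    = 160
m2 <- sum(x^2 * p)                # E[X^2]  = 90550
m3 <- sum(x^3 * p)                # E[X^3]  = 69050500

ES1   <- lambda * m1              # E[S(1)]   = 1600
VarS1 <- lambda * m2              # Var[S(1)] = 905500
SkS1  <- lambda * m3 / VarS1^1.5  # Sk[S(1)]  = 0.801372

ES5   <- 5 * lambda * m1          # E[S(5)]   = 8000
VarS5 <- 5 * lambda * m2          # Var[S(5)] = 4527500
\end{verbatim}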
\begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{3} \tightlist \item Calculate \(\alpha\), \(\beta\) and \(k\). \end{enumerate} \[ \begin{aligned} \alpha &= \left(\frac{2}{\text{Sk}[S(1)]}\right)^2 = 6.228618 \\ \beta &= \sqrt{\frac{\alpha}{\text{Var}[S(1)]}} = 0.0026227 \\ k &= \text{E}[S(1)] - \frac{\alpha}{\beta} = -774.8712899. \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{4} \tightlist \item Comment on the values obtained in 4. \end{enumerate} The negativity of \(k\) implies that negative aggregate claims can occur in the approximation. This is an unrealistic effect of which we must be aware in what follows. The insurer sets the annual premium to be charged for this portfolio using the expected value principle (EVP) with the relative security loading \(\theta = 0.2\) so that the annual premium before reinsurance is \(c\), where \[c = ( 1 + \theta)\text{E}[S(1)].\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{5} \tightlist \item Calculate the premium rate \(c\). \[c = ( 1 + \theta)\text{E}[S(1)] =( 1 + \theta)\lambda \text{E}[X] = 1920 .\] \end{enumerate} Let \(U(n)\) denote the insurer's surplus at time \(n\), \(n = 1,2, \ldots, 5\) so that \[U(n) = u + n c - S(n),\] where \(u\) is the insurer's initial surplus. Use R to generate an array of independent values \(\{\{Z_{i \,j} \}_{j=1}^5 \}_{i=1}^{1000}\), each from a \(U(0,1)\) distribution. Remember to set the argument to set.seed(xxxx) where the argument, xxxx, of the set.seed function is the final four digits of your student identification number, e.g.~if your student I.D. is 6105389, then use set.seed(5389). Then \textbf{use your preferred software}, EXCEL, R or other reasonable software to do the subsequent computation. In case that you choose to use EXCEL, you will need to copy and paste the array \(\{\{Z_{i \,j} \}_{j=1}^5 \}_{i=1}^{1000}\) generated from R into EXCEL by hands. Let \(\hat{U_i}(5), \, i = 1,2,\ldots, 1000\), denote the simulated surplus after five years calculated using the five values \(Z_{i \,1}, Z_{i \,2}, \ldots, Z_{i \,5}\) and the translated gamma approximations to \(S(1), S(2) - S(1), \ldots, S(5) - S(4)\) as discussed in the lecture. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{6} \tightlist \item Calculate \(E[U(5)]\) and \(\text{Var}[U(5)]\), given that \(u = 3500\). \end{enumerate} \[ \begin{aligned} \text{E}[U(5)] &= \text{E}[3500 + 5c - S(5)] = 5100 \\ \text{Var}[U(5)] &= \text{Var}[3500 + 5c - S(5)] = \text{Var}[S(5)] = \ensuremath{4.5275\times 10^{6}} \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{7} \tightlist \item Calculate \(\sum_{i=1}^{1000} \hat{U_i}(5)/1000\), given that \(u = 3500\). \end{enumerate} We have \[\sum_{i=1}^{1000} \hat{U_i}(5)/1000 = 5171.9715293\] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{8} \tightlist \item Comment on your answers to questions 7 and 8. \end{enumerate} \begin{itemize} \item The value in 8 is an estimator for E{[}U(5){]} based on an approximation by simulation and an approximation by the translated gamma distribution. \item The estimation error is relatively small, but not completely negligible, so our simulation results in this coursework must be interpreted with care. 
\end{itemize} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{9} \tightlist \item Estimate the ruin probabilities \(\psi_1(3500,5)\) and \(\psi_1(4000,5)\) from the simulated surplus paths \((\hat{U_i}(n))_{i = 1,2,\ldots 1000, \, n = 1,2, \ldots, 5}\). \end{enumerate} In the 1000 simulations, we observe 17 and 11 times a ruin. \[ \begin{aligned} \psi_1(3500,5) &= \frac{17}{1000} = 0.017 \\ \psi_1(4000,5) &= \frac{11}{1000} = 0.011. \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{10} \tightlist \item Comment on your answers to questions 10, taking into consideration your answer to question 7. \end{enumerate} As expected from the theory, the probability of ruin is lower in the second case, where the initial capital is higher. The direct insurer is considering entering into an excess of loss reinsurance contract with retention \(900\). The reinsurer uses a relative security loading \(\theta_R\) to calculate its reinsurance premium. \textbf{After taking account of reinsurance}, the subscript \(I\), for example \(X_I\), \(S_I(1)\), denotes the relevant quantities for the insurer. The probability of ruin for the insurer, checking for ruin at the end of each year for 5 years and given initial surplus 3500 is \(\psi_{I,1}(3500,5)\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{11} \tightlist \item Calculate the first three non-central moments of \(X\), i.e. \(\text{E}[X_I]\), \(\text{E}[X_I^2]\) and \(\text{E}[X_I^3]\). \end{enumerate} The distribution of claim amounts paid by the insurer, \(F_{X_I}(x)\) is given by \begin{longtable}[]{@{}ccccc@{}} \toprule \(x\) & 10 & 100 & 500 & 900 \\ \midrule \endhead \(\Pr(X_I = x)\) & 0.5 & 0.3 & 0.15 & 0.05 \\ \bottomrule \end{longtable} where \(X_I = \min(X,M)\). Hence, \[ \begin{aligned} \text{E}[X_I] &= 10 \cdot 0.5 + 100 \cdot 0.3 + 500 \cdot 0.15 + 900 \cdot 0.05 = 155 \\ \text{E}[X_I^2] &= 100 \cdot 0.5 + \ensuremath{10^{4}} \cdot 0.3 + \ensuremath{2.5\times 10^{5}} \cdot 0.15 + \ensuremath{8.1\times 10^{5}} \cdot 0.05 = \ensuremath{8.105\times 10^{4}} \\ \text{E}[X_I^3] &= 1000 \cdot 0.5 + \ensuremath{10^{6}} \cdot 0.3 + \ensuremath{1.25\times 10^{8}} \cdot 0.15 + \ensuremath{7.29\times 10^{8}} \cdot 0.05 = \ensuremath{5.55005\times 10^{7}}. \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{12} \tightlist \item Calculate the mean, variance and coefficient of skewness of the aggregate claims \(S_I(1)\) at time \(t = 1\), i.e.~\(\text{E}[S_I(1)]\), \(\text{Var}[S_I(1)]\) and \(\text{Sk}[S_I(1)]\). \end{enumerate} \[ \begin{aligned} \text{E}[S_I(1)] &= \lambda \text{E}[X_I] = 1550 \\ \text{Var}[S_I(1)] &= \lambda \text{E}[X_I^2] = \ensuremath{8.105\times 10^{5}} \\ \text{Sk}[S_I(1)] &= \frac{\lambda \text{E}[X_I^3]}{(\text{Var}[S_I(1)])^{3/2}} = 0.7606193. \\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{13} \tightlist \item Assume that \(S_I(1)\) can be approximated by \(Y_I + k_I\) where \(Y_I \sim \mathcal{G}(\alpha_I,\beta_I)\) and \(k_I\) is a constant. Calculate \(\alpha_I\), \(\beta_I\) and \(k_I\). \end{enumerate} \[ \begin{aligned} \alpha_I &= \left(\frac{2}{\text{Sk}[S_I(1)]}\right)^2 = 6.9139344 \\ \beta_I &= \sqrt{\frac{\alpha_I}{\text{Var}[S_I(1)]}} = 0.0029207 \\ k_I &= \text{E}[S_I(1)] - \frac{\alpha_I}{\beta_I} = -817.2228178. 
\\ \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{14} \item Calculate \(E[U_I(5)]\) and \(\text{Var}[U_I(5)]\), given that \(u = 3500\) and assuming: \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \(\theta_R = 3\theta\). \item \(\theta_R = 2\theta\). \item \(\theta_R = \theta\). \end{enumerate} \end{enumerate} The insurer's surplus process can be calculated from \[ U_I(t) = u + (c - c_R)t - S_I(t),\] where \(c_R = (1 + \theta_R)\text{E}[S - S_I]\) is the reinsurance premium rate. \begin{itemize} \item For \(\theta_R = 3\theta\), \[ \begin{aligned} \text{E}[U_I(5)] &= \text{E}[U(0) + 5(c - c_R) - S_I(5)] = 4950 \\ \text{Var}[U_I(5)] &= \text{Var}[U(0) + 5(c - c_R) - S_I(5)] = \text{Var}[S_I(5)] = \ensuremath{4.0525\times 10^{6}}. \\ \end{aligned} \] \item For \(\theta_R = 2\theta\), \[ \begin{aligned} \text{E}[U_I(5)] &=5000 \\ \text{Var}[U_I(5)] &= \ensuremath{4.0525\times 10^{6}}. \\ \end{aligned} \] \item For \(\theta_R = \theta\), \[ \begin{aligned} \text{E}[U_I(5)] &=5050 \\ \text{Var}[U_I(5)] &= \ensuremath{4.0525\times 10^{6}}. \\ \end{aligned} \] \end{itemize} It should be emphasised that the value of \(\text{Var}[U_I(5)]\) is not affected by the (net) premium income and so is not affected by the values of \(\theta\) or \(\theta_R\). \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{15} \item Using \(\{\{Z_{i \,j} \}_{j=1}^5 \}_{i=1}^{1000}\) and the translated gamma approximation, estimate \(\psi_{I,1}(3500,5)\) assuming: \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item \(\theta_R = 3\theta\). \item \(\theta_R = 2\theta\). \item \(\theta_R = \theta\). \end{enumerate} \end{enumerate} \begin{itemize} \tightlist \item For \(\theta_R = 3\theta\), \(\theta_R = 2\theta\) and \(\theta_R = \theta\), in the 1000 simulations, we observe ruin 14, 13 and 13 times, respectively. \end{itemize} \[ \begin{aligned} \psi_{I,1}(3500,5) &= \begin{cases} 0.014, & \text{ if } \theta_R = 3\theta \\ 0.013, & \text{ if } \theta_R = 2\theta \\ 0.013, & \text{ if } \theta_R = \theta. \end{cases} \end{aligned} \] \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{16} \item Comment on your answers to questions 10 and 16, taking into consideration the answers to question 15. \begin{enumerate} \def\labelenumii{\arabic{enumii}.} \item The excess of loss reinsurance has reduced the amount of the largest possible claim. This makes the portfolio much safer for the insurer. However, the standard deviation of the surplus at the end of 5 years has been reduced by only a factor of about 1.0569823. \item This reduced risk is reflected in the ruin probabilities, which are smaller than the value in question 10. \item The more expensive the reinsurance (i.e.~the higher the value of \(\theta_R\)), the lower the insurer's net premium income and the higher the (estimated) ruin probability. \item This reduced risk is ``paid for'' by the insurer's expected surplus. This has fallen substantially (from 5100 to 4950 (\(\theta_R = 3 \theta\)) and to 5050 (\(\theta_R = \theta\))). Clearly, the insurer would prefer \(\theta_R = \theta\), as this implies the lowest cost of reinsurance and hence the highest expected net surplus and the lowest probability of ruin. \end{enumerate} \item Suppose that \(\theta_R = 2\theta\). Estimate the probability of ruin \(\psi_{I,1}(3500,5)\) for the insurer, given the retention limit is \(M = 550, 600, 650, \ldots, 900\).
Identify the largest \(M \in \{550, 600, 650, \ldots, 900 \}\) such that the corresponding estimated probability of ruin is not greater than 1\%. \end{enumerate} The estimated probabilities of ruin \(\psi_{I,1}(3500,5)\) for the insurer, for retention limits \(M = 550, 600, 650, \ldots, 900\), are 0.01, 0.01, 0.01, 0.011, 0.011, 0.011, 0.011, 0.013, respectively. Hence, the largest \(M \in \{550, 600, 650, \ldots, 900 \}\) such that the corresponding estimated probability of ruin is not greater than 1\% is \(M = 650\). The results are also illustrated in the figure below. \begin{center}\includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-28-1} \end{center} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{18} \tightlist \item Comment on your answers to question 18. \end{enumerate} It is possible to get the estimated probability of ruin down to 1\% even in the case that the reinsurance is ``expensive'', i.e.~the reinsurance risk loading is higher than the risk loading the insurer charges on the original insurance contracts. Before reinsurance, the average profit per year was \(c - \text{E}[S(1)] = \theta \, \text{E}[S(1)] = 320\). To run a sustainable business in the long run, the annual cost of capital should not be higher than the average annual profit, i.e.~not higher than \[ \frac{320}{3500} = 9.1428571\%.\] This return looks large enough to attract investors. With the results in 18, adding excess of loss reinsurance with \(\theta_R = 2\theta\) and the retention limit \(M = 650\) reduces the average profit to \[ c - c_R - \text{E}[S_I(1)] = (1+\theta) \text{E}[S(1)] - (1+\theta_R) ( \text{E}[S(1)] - \text{E}[S_I(1)]) - \text{E}[S_I(1)] = 250.\] The insurer can still expect to make a profit, since the rate at which premium income comes in is greater than the rate at which claims and the reinsurance premium are paid out, so the surplus drifts to \(\infty\); nevertheless, ruin could still occur. It should be noted that the average profit per year after the excess of loss reinsurance with \(\theta_R = 2\theta\) is an increasing function of \(M\), as shown in the figure below: \begin{center}\includegraphics{SCMA470Bookdownproj_files/figure-latex/unnamed-chunk-31-1} \end{center} \hypertarget{interactive-lecture}{% \chapter{Interactive Lecture}\label{interactive-lecture}} Some \emph{significant} applications are demonstrated in this chapter. \hypertarget{datacamp-light}{% \section{DataCamp Light}\label{datacamp-light}} By default, \texttt{tutorial} will convert all R chunks.
\begin{table}[h]
\begin{center}
\caption{Comparison of Bayesian and empirical Bayesian models}
\begin{tabular}{| l | l | l | c | }
\hline
 & Normal $-$ normal & Poisson $-$ gamma & EBCT\\
\hline
Prior & $\theta \sim \vnormal{\mu_0,\sigma^2_0}$ & $\lambda \sim \vgamma{\alpha,\beta}$ & none \\
Conditional mean of $X_i$ & $\theta$ & $\lambda$ & $m(\theta)$ \\
Conditional variance of $X_i$ & $\sigma^2$ & $\lambda$ & $s^2(\theta)$ \\
\hline
\end{tabular}
\end{center}
\end{table}

\bibliography{book.bib,packages.bib}

\end{document}
{ "alphanum_fraction": 0.6982178141, "avg_line_length": 44.6016934508, "ext": "tex", "hexsha": "ba377b2801f3652d05319aa6150d2aff975e0a00", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-27T10:26:42.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-27T10:26:42.000Z", "max_forks_repo_head_hexsha": "dba994d9437b17db6112584802f2b16eda3f7c38", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "pairote-sat/SCMA470", "max_forks_repo_path": "docs/SCMA470Bookdownproj.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dba994d9437b17db6112584802f2b16eda3f7c38", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "pairote-sat/SCMA470", "max_issues_repo_path": "docs/SCMA470Bookdownproj.tex", "max_line_length": 4952, "max_stars_count": null, "max_stars_repo_head_hexsha": "dba994d9437b17db6112584802f2b16eda3f7c38", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "pairote-sat/SCMA470", "max_stars_repo_path": "docs/SCMA470Bookdownproj.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 94098, "size": 258110 }
% % SEE 101W: Process, Form & Convention - A Course Overview % Section: Introduction % % Author: Jeffrey Leung % \section{Introduction} \label{sec:introduction} \begin{easylist} & To ensure clear communication: && Consider audience and purpose && Organize content && Follow mechanics of good writing & \textbf{3 Phases of writing:} Inventing, drafting, and revising && May be linear or iterative \end{easylist} \clearpage
{ "alphanum_fraction": 0.7401392111, "avg_line_length": 19.5909090909, "ext": "tex", "hexsha": "b48a8843585ffa9dbe3f03b84bb0813a46ed11da", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_path": "see-101w-process-form-convention-in-professional-genres/tex/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_path": "see-101w-process-form-convention-in-professional-genres/tex/introduction.tex", "max_line_length": 65, "max_stars_count": 25, "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_path": "see-101w-process-form-convention-in-professional-genres/tex/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "num_tokens": 116, "size": 431 }
\section{Research Model}
{ "alphanum_fraction": 0.7407407407, "avg_line_length": 6.75, "ext": "tex", "hexsha": "209d0a2e9466502e78ad0baf7ab537aa1207c9cc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "trahloff/bachelorThesis", "max_forks_repo_path": "content/archive/03researchMethod/02researchModel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "trahloff/bachelorThesis", "max_issues_repo_path": "content/archive/03researchMethod/02researchModel.tex", "max_line_length": 24, "max_stars_count": null, "max_stars_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "trahloff/bachelorThesis", "max_stars_repo_path": "content/archive/03researchMethod/02researchModel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6, "size": 27 }
% covers.tex
%
% CS5950 - Machine Learning - Album Covers
%
% This paper covers the results of our group's experimentation with the
% album covers dataset from the Internet Archive.
\documentclass[11pt,a4paper,titlepage]{article}

\usepackage{geometry}
\geometry{letterpaper}

\usepackage{parskip}
\parindent=0in
\parskip=8pt % make block paragraphs

\usepackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{diagbox}
\usepackage{fancyhdr}
\pagestyle{fancy}
\usepackage{tabularx,ragged2e,booktabs,caption}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{listings}
\usepackage{pdflscape}

\begin{document}

\title{CS5950 - Machine Learning \\
    Identification of Duplicate Album Covers}

\author{Arrendondo, Brandon\\
        Jenkins, James\\
        Jones, Austin\\
        \includegraphics[width=3in]{covers.png}\\[1ex]
}

\date{\today}
\maketitle
\newpage

\section{Introduction}

This paper covers the results of our group's initial experimentation with the
album covers dataset from the Internet Archive, located at:

\url{https://blog.archive.org/2015/05/27/experiment-with-one-million-album-covers/}

Every album cover used in this paper is from The Internet Archive and is
copyright of its respective owners.

The dataset itself contains 997,131 images, which are an assortment of .gif,
.jpg, or .png files. The goal of our project was to identify images within the
dataset that were ``similar'' to each other, that is, images that look
essentially the same even though they are not exact byte-for-byte duplicates.

\section{First Steps}

Initially, our team searched for existing algorithms that we could implement
to detect whether two images were similar. This led us almost immediately to
this site:

\url{http://www.hackerfactor.com/blog/?/archives/432-Looks-Like-It.html}

Most of the ideas involved transforming each image and ultimately converting
it to a number that can then be compared, i.e.\ ``hashing'' the images. We
refer to the two algorithms detailed on the site as:

\begin{itemize}
\item The Simple Algorithm
\item The DCT (discrete-cosine-transform) Algorithm
\end{itemize}

We will detail these algorithms in the next section.

Once we had some idea of which algorithms might work, we began implementing
them and, in parallel, downloaded the entire dataset onto a computer. The
sizes for each letter directory are detailed in the table below. The download
itself took roughly 1.5 days via the torrent download on the site.

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|}
    \hline
    Directory & Size (GB) \\ \hline
    a & 8.4 \\ \hline
    b & 7.5 \\ \hline
    c & 19 \\ \hline
    d & 7.2 \\ \hline
    e & 4.5 \\ \hline
    f & 5.3 \\ \hline
    g & 4.0 \\ \hline
    h & 5.3 \\ \hline
    i & 5.2 \\ \hline
    j & 1.6 \\ \hline
    k & 2.1 \\ \hline
    l & 7.0 \\ \hline
    m & 7.4 \\ \hline
    n & 4.1 \\ \hline
    o & 3.3 \\ \hline
    p & 5.9 \\ \hline
    q & .380 \\ \hline
    r & 5.0 \\ \hline
    s & 13 \\ \hline
    t & 6.5 \\ \hline
    u & 1.7 \\ \hline
    v & 2.5 \\ \hline
    w & 4.3 \\ \hline
    x & .245 \\ \hline
    y & .769 \\ \hline
    the & 9.1 \\ \hline
    total & 140 \\ \hline
\end{tabular}
\captionof{table}{Uncompressed Directory Sizes for Album Covers}
\end{minipage}

\section{Initial Complications}

Simply traversing the directory tree was slow and difficult. A simple command:

\begin{lstlisting}
ls a/*.png
\end{lstlisting}

will fail because the expanded argument list exceeds the shell's limit on
command-line arguments.
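One workaround is to skip shell globbing entirely and iterate over the
directory lazily in Python; the sketch below is purely illustrative and is not
taken from our repository code.

\begin{lstlisting}
# Illustrative sketch only: lazily walk a huge directory without
# expanding a shell glob, counting the .png files as we go.
import os

def count_pngs(directory):
    count = 0
    with os.scandir(directory) as entries:
        for entry in entries:
            if entry.is_file() and entry.name.endswith(".png"):
                count += 1
    return count

if __name__ == "__main__":
    print(count_pngs("a"))
\end{lstlisting}

Because nothing is expanded on the command line, this approach scales to
directories containing hundreds of thousands of files.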
The same limit made traditional commands fail, for example:

\begin{lstlisting}
md5sum a/*
\end{lstlisting}

We tested several different options, which included:

\begin{itemize}
\item find with exec
\item find with xargs
\item ls -1 with xargs
\item Python script using process pools
\end{itemize}

Using process pools was the fastest method for running our Python-based
implementations of the algorithms. We used ls with xargs for the md5sums we
ran.

The second major complication was the penalty for bugs. Because running the
hash across the entire dataset took on the order of half a day, the first few
mistakes took several days to identify and fix. In hindsight, extensive
testing on a smaller dataset is essential before running on the entire
dataset.

\section{The Algorithms}

After some initial testing, we were fairly impressed by the algorithms
detailed on:

\url{http://www.hackerfactor.com/blog/?/archives/432-Looks-Like-It.html}

Once again, we refer to them as:

\begin{itemize}
\item The Simple Algorithm
\item The DCT (discrete-cosine-transform) Algorithm
\end{itemize}

We now detail these algorithms in pseudo-code. Our Python implementation of
these algorithms can be found at:

\url{http://github.com/jpypi/dup-image-search}

\newpage
\subsection{The Simple Algorithm}

\begin{lstlisting}
simple_hash(image_filepath):
    img = image.load(image_filepath)
    img.resize(8, 8)             # resize the image to 8x8
    img.convert_to_grayscale()
    mean = arithmetic_mean(img.pixels())

    hash = 0                     # hash is 64-bit integer
    index = 0
    for each pixel in img.pixels()
        if pixel > mean
            hash.set_bit(index)
        index += 1
\end{lstlisting}

\subsection{The DCT Algorithm}

\begin{lstlisting}
dct_hash(image_filepath):
    img = image.load(image_filepath)
    img.resize(32, 32)           # resize the image to 32x32
    img.convert_to_grayscale()

    px = img.pixels()
    transformed_matrix = dctII2d(dctII2d(px.transpose()).transpose())

    # take the top-left only
    top_left = transformed_matrix.subset(8, 8)

    # leave out [0, 0] when averaging
    mean = arithmetic_mean(top_left - top_left[0, 0])

    hash = 0                     # hash is 64-bit integer
    index = 0
    for each coefficient in top_left
        if coefficient > mean
            hash.set_bit(index)
        index += 1
\end{lstlisting}

This algorithm uses the 2D Discrete Cosine Transform, detailed here:

\url{http://en.wikipedia.org/wiki/Discrete_cosine_transform#DCT-II}

Also, per the details found on
\url{http://www.hackerfactor.com/blog/?/archives/432-Looks-Like-It.html},
the recommendation was to use only the top-left of the transform and to leave
out the [0, 0] term when computing the average.

\section{Initial Run - Exact Duplicates}

Prior to running our algorithms, we first wanted to reduce the size of the
dataset by removing any corrupt (unloadable) images and any exact duplicates.
For this, we calculated the MD5 hash of each image and used the Python Image
Library (PIL) to verify whether the image was a valid image.

After running this on the dataset (for roughly 10 hours), we were able to
identify matching hashes and calculate that, of the 997,131 images:

\begin{itemize}
\item 189,567 images (19\%, 16.65 GB) were exact duplicates of an image in the
      remaining set
\item 7962 images (about 0.8\%, 221 MB) were corrupt
\end{itemize}

\section{Second Run - Simple and DCT Hashes}

After removing the exact duplicates and corrupt images, we ran each of our
hashes on the remaining images. After the runs were complete, we compiled the
combined MD5 hash, simple hash, and DCT hash into a SQLite database. We shared
this database in a Google Drive folder with the team for use in analysis.
We can make this database publicly available, but it is fairly large (the
bzip2-compressed database is 63.2 MB).

\section{Post-run Analysis}

Per the algorithms, any image within a Hamming distance of five should come
close to matching. The Hamming distance is the number of bit flips two
integers are apart.

To more easily identify all the images within Hamming distance 5 of any given
image, we calculated the Hamming weight (the number of ones in the integer) of
each image and stored that in the database as well. From this, we can say that
any image within a Hamming weight of +/- 5 of the selected image is a
candidate for checking the Hamming distance.

The problem was that, with so many images, checking this takes some time - not
a lot per image - but with about 750,000 images the calculation adds up. We
tried another approach of building up a database slowly, checking with each
insertion whether the added image is within 5 of any image already in the
database. That, too, was fairly slow (manageable, but slow).

We decided to refocus, instead, on determining the accuracy for Hamming
distance zero. It is very easy to calculate the Hamming distance of zero,
since we only need to check the database for sets of matching hashes.

We then built up a list of DCT hashes and simple hashes that, per the
algorithms, should match. All that was left was to manually verify the
matches. For this we developed a simple tool that would go through the list
and allow the user to select Yes or No depending on whether the images
matched.

\begin{minipage}{\linewidth}
\centering
\includegraphics[width=360px]{image_verify.png}
\captionof{figure}{The Image Verification Tool}
\end{minipage}

The tool writes the results out to a file, which we could then use to
determine the error rate for false positives. We also fed this into a database
for any further analysis that would require user input (at least that small
subset of manual user validation would already be done).

We took the matches for simple and the matches for DCT and split them into
three sets, one for each of us to validate. We immediately noticed some large
sets that were clearly not matching. We decided to remove all sets with more
than 4 images, as their accuracy was close to 0\%. We make note of this in the
final numbers.

The final number of sets (between 2 and 4 images each) for the simple
algorithm totalled 9031. For DCT, the number of sets was 5302. That was a lot
of pictures to sift through using the tool. Further tool enhancements would
definitely include:

\begin{itemize}
\item Indication of current progress
\item Keyboard-based entry (in addition to mouse)
\item Ability to go back and modify a result
\end{itemize}

We all noticed we had a few misclicks here and there, which will contribute to
some error. Also, each individual had a differing definition of what a
``similar'' album cover would be, which led to some images being accepted by
one person that might not otherwise have been accepted by another.
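To make the Hamming-weight prefilter described above concrete, here is a short
illustrative Python sketch; the function and variable names are ours and are
not taken from the repository code.

\begin{lstlisting}
# Illustrative sketch: prefilter 64-bit hashes by Hamming weight before
# computing exact Hamming distances.

def hamming_distance(h1, h2):
    # number of bit positions in which the two hashes differ
    return bin(h1 ^ h2).count("1")

def candidates_within(hash_value, hashes_by_weight, max_distance=5):
    # hashes_by_weight maps a Hamming weight to the list of hashes with
    # that weight; two hashes within distance d can differ in weight by
    # at most d, so only the nearby weight buckets need to be checked
    weight = bin(hash_value).count("1")
    low = max(0, weight - max_distance)
    high = min(64, weight + max_distance)
    for w in range(low, high + 1):
        for other in hashes_by_weight.get(w, []):
            if hamming_distance(hash_value, other) <= max_distance:
                yield other
\end{lstlisting}

Grouping the hashes by weight once, and then scanning only the nearby weight
buckets, avoids comparing every image against every other image.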
\section{Results}

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|l|l|}
    \hline
    Set & Correct & Incorrect & \% Correct \\ \hline
    First Set & 1663 & 2008 & 45.3\% \\ \hline
    Second Set & 1315 & 2546 & 34\% \\ \hline
    Third Set & 1204 & 2378 & 33.6\% \\ \hline
\end{tabular}
\captionof{table}{Results for Simple Algorithm}
\end{minipage}

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|l|l|}
    \hline
    Set & Correct & Incorrect & \% Correct \\ \hline
    First Set & 1049 & 678 & 60.7\% \\ \hline
    Second Set & 841 & 964 & 46.6\% \\ \hline
    Third Set & 1280 & 420 & 75.3\% \\ \hline
    Already Matched & 807 & 49 & 94.2\% \\ \hline
\end{tabular}
\captionof{table}{Results for DCT}
\end{minipage}

The ``Already Matched'' row covers images from the database that we had
already manually matched during the Simple-algorithm analysis and therefore
did not have to re-evaluate. Take note of the fact that we pruned the sets
with more than 4 images, which we assume had zero accuracy.

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|}
    \hline
    Algorithm & Count \\ \hline
    Simple & 14817 \\ \hline
    DCT & 2833 \\ \hline
\end{tabular}
\captionof{table}{Count of Images in Sets Above Length 4}
\end{minipage}

If we were to introduce other algorithms, we should be able to reduce the sets
above 4 to only those that legitimately match.

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|}
    \hline
    Algorithm & Percentage Correct \\ \hline
    Simple (sets less than 5) & 37\% \\ \hline
    DCT (sets less than 5) & 81\% \\ \hline
    Simple (assuming sets greater than 4 are 0\%) & 16\% \\ \hline
    DCT (assuming sets greater than 4 are 0\%) & 45\% \\ \hline
\end{tabular}
\captionof{table}{Combined Results for Each Algorithm}
\end{minipage}

Finally, we evaluated a combined approach: first matching on the simple hash,
then predicting that two files were similar only if their DCT hashes also
matched.

\begin{minipage}{\linewidth}
\centering
\begin{tabular}{|l|l|l|}
    \hline
    Correct & Incorrect & \% Correct \\ \hline
    1817 & 100 & 94.7\% \\ \hline
\end{tabular}
\captionof{table}{Results for Simple Then DCT}
\end{minipage}

It is worth noting that the accuracy was higher than for either algorithm
alone; however, the number of duplicates found overall was smaller, which
would lead to more false negatives (which we did not measure).

\section{Analysis of Results}

We gained considerable insight into what the algorithms do and do not see once
we examined the images that had matching hashes. The failing points we saw in
the algorithms were:

\begin{itemize}
\item The algorithms are color-insensitive. As a result, covers that differ
      only by color were identified (incorrectly) as matches.

\item Small details, most notably text, are not picked up by the algorithms.
      This had three consequences:

      Many album covers were not covers but rather images of the CD/record. As
      a result, the round disc image dominated the hash and the text was the
      only thing that could differentiate the covers. A large number of false
      positives were due to this.

      There were many ``compilation'' albums, in particular Glee albums and
      ``The Voice'' albums, whose only differentiating feature was the song
      name on the album cover. Those always matched, which led to a large
      number of false positives.

      Some albums by the same producer had matching artwork but different
      artists/songs. The text was, again, a differentiator here and did not
      get picked up by either algorithm.
\end{itemize}

Despite the failings, the algorithms were good (at Hamming distance zero) at
detecting a number of duplicates.

\section{Recommendations}

Based on what we saw, we would recommend using the DCT algorithm as a first
pass (after exact duplicates have been removed with MD5), and then applying
another algorithm to determine:

\begin{itemize}
\item Shape - if it is a circular picture it is likely an image of the album
      itself (and not a cover). If character recognition is expensive, apply
      it to these hits first.
\item Color - again, if the DCT algorithm indicates the images are close, then
      some color-matching algorithm would help to weed out false positives.
\end{itemize}

\section{Next Steps}

If we were to continue working on this, we would recommend first the use of a
character-recognition algorithm to help sort out false positives.

We would also tune the image verification program, likely refactoring it as a
web page for deciding whether two images are equal, to distribute the workload
better. We would feed the results directly into a database.

This is also a project that lends itself readily to massive parallelization:
divide and conquer works well here, as many of the steps (such as the
individual image hashing) are independent. Per some of our discussions,
offloading much of the image-processing work to a graphics processor would
likely speed up the algorithms significantly, as GPUs are tailored to these
kinds of transformations.

We would also map out the accuracy for Hamming distances 1, 2, 3, and 4, to
see how rapid the decline in accuracy is. Honestly, this is what we ran out of
time to do; in hindsight, we would have worked with much smaller datasets
first as a training step before moving to the full dataset for testing.

\end{document}
{ "alphanum_fraction": 0.7173886062, "avg_line_length": 32.9529652352, "ext": "tex", "hexsha": "bb96b4bbac905d886ff170f4a2e0c6196b65efad", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-06-05T10:55:27.000Z", "max_forks_repo_forks_event_min_datetime": "2015-06-11T21:37:16.000Z", "max_forks_repo_head_hexsha": "c75100a820c290f33608993e3f9fa13b074801ab", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jpypi/dup-image-search", "max_forks_repo_path": "report/covers.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c75100a820c290f33608993e3f9fa13b074801ab", "max_issues_repo_issues_event_max_datetime": "2015-06-25T00:06:10.000Z", "max_issues_repo_issues_event_min_datetime": "2015-06-24T23:40:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jpypi/dup-image-search", "max_issues_repo_path": "report/covers.tex", "max_line_length": 95, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c75100a820c290f33608993e3f9fa13b074801ab", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jpypi/dup-image-search", "max_stars_repo_path": "report/covers.tex", "max_stars_repo_stars_event_max_datetime": "2015-11-25T10:34:24.000Z", "max_stars_repo_stars_event_min_datetime": "2015-11-25T10:34:24.000Z", "num_tokens": 4254, "size": 16114 }
\chapter*{Appendix} \addcontentsline{toc}{chapter}{Appendix} \label{chap:appendix1} \textbf{Error-seeded code to evaluate ADFD and ADFD+} \label{sec:appendix1} \scriptsize %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program 1} Point domain with One argument \begin{lstlisting} /** * Point Fault Domain example for one argument * @author (Mian and Manuel) */ public class PointDomainOneArgument{ public static void pointErrors (int x){ if (x == -66 ) x = 5/0; if (x == -2 ) x = 5/0; if (x == 51 ) x = 5/0; if (x == 23 ) x = 5/0; } } \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program 2} Point domain with two argument \begin{lstlisting} /** * Point Fault Domain example for two arguments * @author (Mian and Manuel) */ public class PointDomainTwoArgument{ public static void pointErrors (int x, int y){ int z = x/y; } } \end{lstlisting} \clearpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program 3} Block domain with one argument \begin{lstlisting} /** * Block Fault Domain example for one arguments * @author (Mian and Manuel) */ public class BlockDomainOneArgument{ public static void blockErrors (int x){ if((x > -2) && (x < 2)) x = 5/0; if((x > -30) && (x < -25)) x = 5/0; if((x > 50) && (x < 55)) x = 5/0; } } \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program 4} Block domain with two argument \begin{lstlisting} /** * Block Fault Domain example for two arguments * @author (Mian and Manuel) */ public class BlockDomainTwoArgument{ public static void blockErrors (int x, int y){ if(((x > 0)&&(x < 20)) || ((y > 0) && (y < 20))){ x = 5/0; } } } \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \textbf{Program 5} Strip domain with One argument \begin{lstlisting} /** * Strip Fault Domain example for one argument * @author (Mian and Manuel) */ public class StripDomainOneArgument{ public static void stripErrors (int x){ if((x > -5) && (x < 35)) x = 5/0; } } \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program 6} Strip domain with two argument \begin{lstlisting} /** * Strip Fault Domain example for two arguments * @author (Mian and Manuel) */ public class StripDomainTwoArgument{ public static void stripErrors (int x, int y){ if(((x > 0)&&(x < 40)) || ((y > 0) && (y < 40))){ x = 5/0; } } } \end{lstlisting} \clearpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \textbf{Program generated by ADFD on finding fault in SUT} \begin{lstlisting} /** * Dynamically generated code by ADFD strategy * after a fault is found in the SUT. 
* @author (Mian and Manuel) */ import java.io.*; import java.util.*; public class C0 { public static ArrayList<Integer> pass = new ArrayList<Integer>(); public static ArrayList<Integer> fail = new ArrayList<Integer>(); public static boolean startedByFailing = false; public static boolean isCurrentlyFailing = false; public static int start = -80; public static int stop = 80; public static void main(String []argv){ checkStartAndStopValue(start); for (int i=start+1;i<stop;i++){ try{ PointDomainOneArgument.pointErrors(i); if (isCurrentlyFailing) { fail.add(i-1); fail.add(0); pass.add(i); pass.add(0); isCurrentlyFailing=false; } } catch(Throwable t) { if (!isCurrentlyFailing) { pass.add(i-1); pass.add(0); fail.add(i); fail.add(0); isCurrentlyFailing = true; } } } checkStartAndStopValue(stop); printRangeFail(); printRangePass(); } public static void printRangeFail() { try { File fw = new File("Fail.txt"); if (fw.exists() == false) { fw.createNewFile(); } PrintWriter pw = new PrintWriter(new FileWriter (fw, true)); for (Integer i1 : fail) { pw.append(i1+"\n"); } pw.close(); } catch(Exception e) { System.err.println(" Error : e.getMessage() "); } } public static void printRangePass() { try { File fw1 = new File("Pass.txt"); if (fw1.exists() == false) { fw1.createNewFile(); } PrintWriter pw1 = new PrintWriter(new FileWriter (fw1, true)); for (Integer i2 : pass) { pw1.append(i2+"\n"); } pw1.close(); } catch(Exception e) { System.err.println(" Error : e.getMessage() "); } } public static void checkStartAndStopValue(int i) { try { PointDomainOneArgument.pointErrors(i); pass.add(i); pass.add(0); } catch (Throwable t) { startedByFailing = true; isCurrentlyFailing = true; fail.add(i); fail.add(0); } } } \end{lstlisting}
{ "alphanum_fraction": 0.546100691, "avg_line_length": 21.6452991453, "ext": "tex", "hexsha": "71768078317ec05dec324e57aa106e7a0fc8b273", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_forks_repo_licenses": [ "BSD-4-Clause" ], "max_forks_repo_name": "maochy/yeti-test", "max_forks_repo_path": "Mian_PhD_Thesis/appendix1/appendix1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-4-Clause" ], "max_issues_repo_name": "maochy/yeti-test", "max_issues_repo_path": "Mian_PhD_Thesis/appendix1/appendix1.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_stars_repo_licenses": [ "BSD-4-Clause" ], "max_stars_repo_name": "maochy/yeti-test", "max_stars_repo_path": "Mian_PhD_Thesis/appendix1/appendix1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1396, "size": 5065 }
\section{Common}
\subsection{Shared}

\issue{
The distinction between \class{ICollectingModule}, \class{ITransformingModule} and \class{IReceivingModule} has proven unnecessary.
}{
Do not differentiate between module types; suggest use of the general \class{IModule} instead.
}{
Remove the interfaces \class{ICollectingModule}, \class{ITransformingModule} and \class{IReceivingModule}.
}

\issue{
The design of the event queue strategies \class{RingBufferStorageStrategy}, \class{RefCountedListStorageStrategy} and \class{KeepAllStorageStrategy} was not suitable for our implementation approach and our use of the C\# .NET Core library.
}{
Design new event queue storage strategies for the different types of usage: \keyword{Bounded} or \keyword{Unbounded}, and \keyword{SingleConsumer} or \keyword{MultiConsumer}.
}{
Add the abstract storage strategies \class{SingleConsumerChannelStrategy} and \class{MultiConsumerChannelStrategy}.\\
A \keyword{SingleConsumer} strategy allows only one consumer at a time to read events out of the queue, which provides a performance boost compared to \keyword{MultiConsumer}. A \keyword{MultiConsumer} strategy allows a finite or infinite number of consumers to read simultaneously; it duplicates references to a specific \class{Event} and supports reference counting for each consumer.
Both strategy types are additionally divided into \keyword{Bounded}, which acts as a ring buffer and removes the oldest events when the queue is full, and \keyword{Unbounded}, which does not restrict the number of events in the queue (a short illustrative sketch is given at the end of this section).
}

\issue{
Unable to use the designed event queue structure in \keyword{MEF} correctly.
}{
Add a new abstraction of event queues to support distinct imports in MEF and to simplify usage for Modules.
}{
Add interfaces covering the distinct sections of the event pipeline and their read-write access:\\
\class{IReadOnlyEventQueue}: Restricts usage to read-only methods.\\
\class{ISupportDeserializationEventQueue}: Allows strongly typed usage of events in the deserialization process, which is necessary for any recording or processing session that stores events in serialized form.\\
\class{IDecodableEventQueue}: Defines a strongly typed read-only event queue used in the decoding stage of the pipeline.\\
\class{IEncodableEventQueue}: Defines a strongly typed read-only event queue used in the encoding stage of the pipeline.\\
Additionally, multiple default implementations using the corresponding storage strategies were introduced to simplify the usage of queues in a Module.
}

\issue{
Unable to gather events from unowned processes and windows.
}{
Support our additional \package{HookLibrary} for use by our modules.
}{
To accomplish this and to reduce the number of hooks per Windows process, we had to add \class{GlobalHook} and \class{HookNativeMethods} to our shared project.
}

\issue{
Handling directory and file paths as \class{string} has proven error-prone and not typesafe.
}{
Add shared support for directory and file paths using new classes.
}{
Add the \class{DirectoryPath} and \class{FilePath} classes.
}

\issue{
Handling not-yet-deserialized configurations as \class{string} has proven error-prone and not typesafe.
}{
Add shared support for strongly typed raw configurations.
}{
Add \class{RawConfiguration} as a wrapper around the raw configuration string, exposed via \member{RawConfiguration.RawValue}.
}
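The bounded and unbounded behaviour described above can be summarised in a
short, language-agnostic sketch. The snippet below is written in Python purely
for illustration; the actual implementation is built on C\#/.NET channels, and
these names are illustrative only, not taken from the code base.

\begin{verbatim}
# Illustration only: a Bounded strategy behaves like a ring buffer that
# drops the oldest event when full; an Unbounded strategy never drops.
from collections import deque

class BoundedSingleConsumerQueue:
    def __init__(self, capacity):
        # a deque with maxlen discards its oldest item when a new one
        # is appended to a full queue, i.e. ring-buffer behaviour
        self._events = deque(maxlen=capacity)

    def enqueue(self, event):
        self._events.append(event)

    def dequeue(self):
        # single consumer: each event is read at most once
        return self._events.popleft() if self._events else None

class UnboundedSingleConsumerQueue(BoundedSingleConsumerQueue):
    def __init__(self):
        # no capacity limit: the queue never drops events
        super().__init__(capacity=None)
\end{verbatim}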
{ "alphanum_fraction": 0.8064136511, "avg_line_length": 56.65, "ext": "tex", "hexsha": "e7baf4e4b7327d003a2400bc5556d2032633a9e0", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-07-24T06:05:52.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-24T06:05:52.000Z", "max_forks_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "prtest01/MORR", "max_forks_repo_path": "documents/implementation/sections/Issues/Common.tex", "max_issues_count": 110, "max_issues_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_issues_repo_issues_event_max_datetime": "2020-04-05T20:55:05.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-28T16:49:24.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "prtest01/MORR", "max_issues_repo_path": "documents/implementation/sections/Issues/Common.tex", "max_line_length": 245, "max_stars_count": 5, "max_stars_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "insightmind/MORR", "max_stars_repo_path": "documents/implementation/sections/Issues/Common.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-26T20:21:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-03T14:52:47.000Z", "num_tokens": 725, "size": 3399 }
\documentclass{article} \usepackage{hyperref} \usepackage{graphics} \title{Laboratory automation in a functional programming language} \author{C. Runciman \and A. Clare \and R. Harkness} \date{Published in Journal of Laboratory Automation\\2014 Dec; 19(6):569-76. doi: 10.1177/2211068214543373.\\ \url{http://jla.sagepub.com/content/19/6/569.abstract}} %% ODER: format == = "\mathrel{==}" %% ODER: format /= = "\neq " % % \makeatletter \@ifundefined{lhs2tex.lhs2tex.sty.read}% {\@namedef{lhs2tex.lhs2tex.sty.read}{}% \newcommand\SkipToFmtEnd{}% \newcommand\EndFmtInput{}% \long\def\SkipToFmtEnd#1\EndFmtInput{}% }\SkipToFmtEnd \newcommand\ReadOnlyOnce[1]{\@ifundefined{#1}{\@namedef{#1}{}}\SkipToFmtEnd} \usepackage{amstext} \usepackage{amssymb} \usepackage{stmaryrd} \DeclareFontFamily{OT1}{cmtex}{} \DeclareFontShape{OT1}{cmtex}{m}{n} {<5><6><7><8>cmtex8 <9>cmtex9 <10><10.95><12><14.4><17.28><20.74><24.88>cmtex10}{} \DeclareFontShape{OT1}{cmtex}{m}{it} {<-> ssub * cmtt/m/it}{} \newcommand{\texfamily}{\fontfamily{cmtex}\selectfont} \DeclareFontShape{OT1}{cmtt}{bx}{n} {<5><6><7><8>cmtt8 <9>cmbtt9 <10><10.95><12><14.4><17.28><20.74><24.88>cmbtt10}{} \DeclareFontShape{OT1}{cmtex}{bx}{n} {<-> ssub * cmtt/bx/n}{} \newcommand{\tex}[1]{\text{\texfamily#1}} % NEU \newcommand{\Sp}{\hskip.33334em\relax} \newcommand{\Conid}[1]{\mathit{#1}} \newcommand{\Varid}[1]{\mathit{#1}} \newcommand{\anonymous}{\kern0.06em \vbox{\hrule\@width.5em}} \newcommand{\plus}{\mathbin{+\!\!\!+}} \newcommand{\bind}{\mathbin{>\!\!\!>\mkern-6.7mu=}} \newcommand{\rbind}{\mathbin{=\mkern-6.7mu<\!\!\!<}}% suggested by Neil Mitchell \newcommand{\sequ}{\mathbin{>\!\!\!>}} \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \usepackage{polytable} %mathindent has to be defined \@ifundefined{mathindent}% {\newdimen\mathindent\mathindent\leftmargini}% {}% \def\resethooks{% \global\let\SaveRestoreHook\empty \global\let\ColumnHook\empty} \newcommand*{\savecolumns}[1][default]% {\g@addto@macro\SaveRestoreHook{\savecolumns[#1]}} \newcommand*{\restorecolumns}[1][default]% {\g@addto@macro\SaveRestoreHook{\restorecolumns[#1]}} \newcommand*{\aligncolumn}[2]% {\g@addto@macro\ColumnHook{\column{#1}{#2}}} \resethooks \newcommand{\onelinecommentchars}{\quad-{}- } \newcommand{\commentbeginchars}{\enskip\{-} \newcommand{\commentendchars}{-\}\enskip} \newcommand{\visiblecomments}{% \let\onelinecomment=\onelinecommentchars \let\commentbegin=\commentbeginchars \let\commentend=\commentendchars} \newcommand{\invisiblecomments}{% \let\onelinecomment=\empty \let\commentbegin=\empty \let\commentend=\empty} \visiblecomments \newlength{\blanklineskip} \setlength{\blanklineskip}{0.66084ex} \newcommand{\hsindent}[1]{\quad}% default is fixed indentation \let\hspre\empty \let\hspost\empty \newcommand{\NB}{\textbf{NB}} \newcommand{\Todo}[1]{$\langle$\textbf{To do:}~#1$\rangle$} \EndFmtInput \makeatother % % % % % % % This package provides two environments suitable to take the place % of hscode, called "plainhscode" and "arrayhscode". % % The plain environment surrounds each code block by vertical space, % and it uses \abovedisplayskip and \belowdisplayskip to get spacing % similar to formulas. Note that if these dimensions are changed, % the spacing around displayed math formulas changes as well. % All code is indented using \leftskip. % % Changed 19.08.2004 to reflect changes in colorcode. Should work with % CodeGroup.sty. 
% \ReadOnlyOnce{polycode.fmt}% \makeatletter \newcommand{\hsnewpar}[1]% {{\parskip=0pt\parindent=0pt\par\vskip #1\noindent}} % can be used, for instance, to redefine the code size, by setting the % command to \small or something alike \newcommand{\hscodestyle}{} % The command \sethscode can be used to switch the code formatting % behaviour by mapping the hscode environment in the subst directive % to a new LaTeX environment. \newcommand{\sethscode}[1]% {\expandafter\let\expandafter\hscode\csname #1\endcsname \expandafter\let\expandafter\endhscode\csname end#1\endcsname} % "compatibility" mode restores the non-polycode.fmt layout. \newenvironment{compathscode}% {\par\noindent \advance\leftskip\mathindent \hscodestyle \let\\=\@normalcr \let\hspre\(\let\hspost\)% \pboxed}% {\endpboxed\)% \par\noindent \ignorespacesafterend} \newcommand{\compaths}{\sethscode{compathscode}} % "plain" mode is the proposed default. % It should now work with \centering. % This required some changes. The old version % is still available for reference as oldplainhscode. \newenvironment{plainhscode}% {\hsnewpar\abovedisplayskip \advance\leftskip\mathindent \hscodestyle \let\hspre\(\let\hspost\)% \pboxed}% {\endpboxed% \hsnewpar\belowdisplayskip \ignorespacesafterend} \newenvironment{oldplainhscode}% {\hsnewpar\abovedisplayskip \advance\leftskip\mathindent \hscodestyle \let\\=\@normalcr \(\pboxed}% {\endpboxed\)% \hsnewpar\belowdisplayskip \ignorespacesafterend} % Here, we make plainhscode the default environment. \newcommand{\plainhs}{\sethscode{plainhscode}} \newcommand{\oldplainhs}{\sethscode{oldplainhscode}} \plainhs % The arrayhscode is like plain, but makes use of polytable's % parray environment which disallows page breaks in code blocks. \newenvironment{arrayhscode}% {\hsnewpar\abovedisplayskip \advance\leftskip\mathindent \hscodestyle \let\\=\@normalcr \(\parray}% {\endparray\)% \hsnewpar\belowdisplayskip \ignorespacesafterend} \newcommand{\arrayhs}{\sethscode{arrayhscode}} % The mathhscode environment also makes use of polytable's parray % environment. It is supposed to be used only inside math mode % (I used it to typeset the type rules in my thesis). \newenvironment{mathhscode}% {\parray}{\endparray} \newcommand{\mathhs}{\sethscode{mathhscode}} % texths is similar to mathhs, but works in text mode. \newenvironment{texthscode}% {\(\parray}{\endparray\)} \newcommand{\texths}{\sethscode{texthscode}} % The framed environment places code in a framed box. \def\codeframewidth{\arrayrulewidth} \RequirePackage{calc} \newenvironment{framedhscode}% {\parskip=\abovedisplayskip\par\noindent \hscodestyle \arrayrulewidth=\codeframewidth \tabular{@{}|p{\linewidth-2\arraycolsep-2\arrayrulewidth-2pt}|@{}}% \hline\framedhslinecorrect\\{-1.5ex}% \let\endoflinesave=\\ \let\\=\@normalcr \(\pboxed}% {\endpboxed\)% \framedhslinecorrect\endoflinesave{.5ex}\hline \endtabular \parskip=\belowdisplayskip\par\noindent \ignorespacesafterend} \newcommand{\framedhslinecorrect}[2]% {#1[#2]} \newcommand{\framedhs}{\sethscode{framedhscode}} % The inlinehscode environment is an experimental environment % that can be used to typeset displayed code inline. \newenvironment{inlinehscode}% {\(\def\column##1##2{}% \let\>\undefined\let\<\undefined\let\\\undefined \newcommand\>[1][]{}\newcommand\<[1][]{}\newcommand\\[1][]{}% \def\fromto##1##2##3{##3}% \def\nextline{}}{\) }% \newcommand{\inlinehs}{\sethscode{inlinehscode}} % The joincode environment is a separate environment that % can be used to surround and thereby connect multiple code % blocks. 
\newenvironment{joincode}% {\let\orighscode=\hscode \let\origendhscode=\endhscode \def\endhscode{\def\hscode{\endgroup\def\@currenvir{hscode}\\}\begingroup} %\let\SaveRestoreHook=\empty %\let\ColumnHook=\empty %\let\resethooks=\empty \orighscode\def\hscode{\endgroup\def\@currenvir{hscode}}}% {\origendhscode \global\let\hscode=\orighscode \global\let\endhscode=\origendhscode}% \makeatother \EndFmtInput % \begin{document} \maketitle \begin{abstract} After some years of use in academic and research settings, functional languages are starting to enter the mainstream as an alternative to more conventional programming languages. This article explores one way to use Haskell, a functional programming language, in the development of control programs for laboratory automation systems. We give code for an example system, discuss some programming concepts that we need for this example, and demonstrate how the use of functional programming allows us to express and verify properties of the resulting code. \end{abstract} \section{Introduction} There are many different types of software applications in the field of laboratory automation. There are stand-alone applications for controlling a simple instrument such as a bulk-reagent dispenser. There are also larger software packages for controlling automated robotic systems with many instruments that are linked to data management systems [1]. These software packages are typically referred to as schedulers. Currently, popular languages for laboratory-automation applications include Java, C, C++ and C\#. In our experience, the majority of such applications have been developed using these languages, along with other .NET variants such as Visual Basic.NET [2]. These languages are commonly known as procedural or imperative languages. They describe the commands to use in a sequential manner, to achieve the intended functionality. They change state, such as the value of variables, along the way and the values of these variables can dictate the flow of execution. Although languages such as Java and C++ use different syntax, the general principles of constructing code remain the same. An imperative approach is the most common way to develop an application. There are though, different programming styles that can be adopted, one of these being the functional approach. Functional languages, such as O'Caml and Haskell are not as well-known as C or C\#, especially among programmers from a non-computer science background. However, since Microsoft released Visual Studio 2010 with the inclusion of F\# and the increased adoption of Scala, there has been an increasing awareness of functional languages among commercial application developers. Originally a research project at Microsoft that based itself upon O’Caml, F\# has developed into a functional language that interacts with the Microsoft .NET library [3]. Indeed, anyone who has used the .NET Framework 3.5 is quite likely to be familiar with some functional language concepts without realizing it. For example, the design of LINQ queries within the .NET Framework was based upon the use of anonymous functions within Haskell. The remainder of this article provides a complete and executable program that implements a scheduler for laboratory automation. Along the way, we gently introduce the Haskell programming language and point out the properties that are declared in the code. 
We start by defining types, move on to define auxiliary functions, build up the scheduler, and finish with a section on automated testing of properties of interest. \subsection*{The Scheduling Problem} Scheduling is an important component of a laboratory automation software package [4,5]. The benefit of using laboratory automation is that multiple plates within an experiment or assay can be processed automatically. To improve efficiency, the schedule must allow multiple plates to run at the same time so the execution is interleaved, which then maximizes throughput. One simple approach is to use an event-driven scheduling system. This works by assessing the state of the system on a continual cycle. After each event, the scheduler, or processing engine, determines a course of action for the system to run as quickly as possible without breaking constraints such as incubation times and the maximum number of plates allowed on each device. For example, if a plate is due out of an incubator, this task is given priority over adding a new plate into the system. The advantage of an event-driven system is that the assay can follow different processing paths based upon events, such as acquired data or the failure of an instrument in the system. However, with an event driven scheduler, there is the possibility of encountering scheduling deadlocks. A simple example of a deadlock is where: \begin{itemize} \item Plate 1, sitting on Instrument A needs to move to Instrument B and \item Plate 2, sitting on Instrument B wants to move to Instrument A. \end{itemize} It is not possible for either plate to move onto the next step within its respective workflow, so there is deadlock. Deadlock situations can involve more plates and instruments, but the basic problem is the same: it is not possible to unblock key resources to allow the workflow for each plate to be processed. In the remainder of this article we illustrate the use of functional programming as a style of programming that can help in defining control software for laboratory automation. This will bring out many of the distinctive aspects of functional programming as we develop the code. In order to make this illustration we use a particular hardware setup that can be found in many laboratories. The system has an input stack, an output stack, and some number of washers and dispensers, as shown in Figure \ref{fig:system}. \begin{figure} \scalebox{0.8}{\includegraphics{system.png}} \caption{An example system with input stack, output stack, robot arm, washer, and dispenser. This is the simplest type of automated platform whereby more than one plate will be active on the system at the same time. With this platform, an optimal schedule will have the washer and the dispenser occupied simultaneously.} \label{fig:system} \end{figure} Here are some of the properties we want our laboratory scheduler to have: \begin{enumerate} \item each plate has a workflow; \item each device, including the robot, requires a specified period of time to do its job; \item each plate progresses through its workflow in a timely manner; \item the whole system is deadlock-free. \end{enumerate} Using a functional programming language allows us to write in a style that can express and verify such properties rather than just write code. Properties 1 and 2 can be expressed in types for plates and devices, statically checked by a compiler. Properties 3 and 4 can be expressed in property functions and checked by property-based testing tools such as QuickCheck [6] or SmallCheck [7]. 
By formulating properties in this way, developers can capture general rules about the required behaviour of a system, not just specific cases and fragments represented by unit tests. Computing power is harnessed to search the space of possible test inputs automatically, looking for cases in which one of the specified properties fails. The technique is also known as ``lightweight verification'' as it is the next best thing to a rigorous mathematical verification that all the formulated properties hold in all cases. \section{Method and Results} \subsection*{Devices and Workflows} We begin with some import statements allowing us to use standard library functions. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{import}\;\Conid{\Conid{Data}.List}\;((\mathbin{\char92 \char92 }),\Varid{union}){}\<[E]% \\ \>[3]{}\mathbf{import}\;\Conid{\Conid{Test}.QuickCheck}{}\<[E]% \ColumnHook \end{hscode}\resethooks First we must choose how to represent the kinds of devices found in a laboratory, such as washer and dispenser. This choice is reflected in the definition of our first datatype. One can think of a type as a description of a set of possible values, or equivalently a type is a property that any value may or may not have. Our first type is \text{\tt DeviceKind}. It has four possible values for the four kinds of devices in our laboratory. Only these values have the property that they are of type \text{\tt DeviceKind}. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{54}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{DeviceKind}\mathrel{=}\Conid{Washer}\mid \Conid{Dispenser}\mid \Conid{InStack}\mid \Conid{OutStack}{}\<[E]% \\ \>[3]{}\hsindent{51}{}\<[54]% \>[54]{}\mathbf{deriving}\;(\Conid{Eq},\Conid{Show}){}\<[E]% \ColumnHook \end{hscode}\resethooks Informally the vertical bar can be read ``or'': so a value of type \text{\tt DeviceKind} is a \text{\tt Washer} or a \text{\tt Dispenser} or an \text{\tt InStack} or an \text{\tt OutStack}. The deriving clause gives us two properties of the \text{\tt DeviceKind} type: it belongs to the type-class \text{\tt Eq} (so its values can be compared for equality) and it belongs to the type-class \text{\tt Show} (so its values can be printed as strings). A type-class is similar to an interface in Java or C\#, for which we must provide implementations of functions. In using the keyword ``\text{\tt deriving}'' we accept the default implementations for this type. With the \text{\tt DeviceKind} type defined, one simple representation of a laboratory workflow, sufficient for the purposes of this article, is a list of devices that a microtitre plate must go to in turn. Lists are a built-in datatype in Haskell. The way that lists are defined guarantees that items in the same list are of the same type. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{type}\;\Conid{Workflow}\mathrel{=}[\mskip1.5mu \Conid{DeviceKind}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks Here \text{\tt Workflow} is defined as a synonym for a list of \text{\tt DeviceKind}. We define the following example workflow for use in later tests. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{exampleWorkflow}\mathbin{::}\Conid{Workflow}{}\<[E]% \\ \>[3]{}\Varid{exampleWorkflow}\mathrel{=}[\mskip1.5mu \Conid{InStack},\Conid{Washer},\Conid{Dispenser},\Conid{Washer},\Conid{OutStack}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks One simple but useful function on lists is \text{\tt null}, which tests its list argument for emptiness. Here’s how we can use it to define \text{\tt nonEmpty}, a function that tests for a non-empty list. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{nonEmpty}\;\Varid{xs}\mathrel{=}\neg \;(\Varid{null}\;\Varid{xs}){}\<[E]% \ColumnHook \end{hscode}\resethooks Function applications are frequent in functional programs so the notation needs to be light. In Haskell, we just write a function name then each of the input arguments in turn. No extra symbols such as brackets or commas are needed. The brackets in \text{\tt not~\char40{}null~xs\char41{}} merely indicate priority: without them, \text{\tt not~null~xs} would apply the function \text{\tt not} to the two arguments \text{\tt null} and \text{\tt xs}. We shall often make use of the infix colon operator for constructing non-empty lists. The list \text{\tt e\char58{}rest} contains a first element \text{\tt e}, followed by a (possibly empty) list `\text{\tt rest}' of other elements. We need some representation of time. For example, we must represent the time needed for processing by each device and for transfer of plates between devices. For the purposes of this article a time value is simply an integer representing a number of ``ticks''. Whether ticks are milliseconds, seconds or something else need not concern us. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{type}\;\Conid{Time}\mathrel{=}\Conid{Int}{}\<[E]% \ColumnHook \end{hscode}\resethooks There may be more than one device in the lab of the same kind (for example we may have two washers). So we also define a further type whose values represent specific devices. A specific \text{\tt Device} is represented by a combination of a \text{\tt DeviceKind} value, an integer to distinguish this device from others of the same kind, the length of time for this device to process a plate and the length of time for a robot arm to move a plate between this device and a central safe location. For example if we have two washers in our system, they might be represented by the values \text{\tt Device~Washer~1~3~2} and \text{\tt Device~Washer~2~3~3}. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{27}{@{}>{\hspre}l<{\hspost}@{}}% \column{33}{@{}>{\hspre}l<{\hspost}@{}}% \column{37}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{Device}\mathrel{=}\Conid{Device}\;\{\mskip1.5mu {}\<[27]% \>[27]{}\Varid{devKind}{}\<[37]% \>[37]{}\mathbin{::}\Conid{DeviceKind},{}\<[E]% \\ \>[27]{}\Varid{devNo}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Int},{}\<[E]% \\ \>[27]{}\Varid{devProcT}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Time},{}\<[E]% \\ \>[27]{}\Varid{devMoveT}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Time}\mskip1.5mu\}{}\<[E]% \\ \>[27]{}\hsindent{6}{}\<[33]% \>[33]{}\mathbf{deriving}\;\Conid{Eq}{}\<[E]% \ColumnHook \end{hscode}\resethooks The above definition describes the fields of a \text{\tt Device}, giving them names and types. It also provides automatic field accessor functions which can be used to inspect the values or provide new values for the fields. As an example the \text{\tt devProcT} for a device \text{\tt d} could be accessed with the expression \text{\tt devProcT~d} and a copy of a device \text{\tt d} with a new \text{\tt devNo} could be created by \text{\tt d\char39{}~\char61{}~d~\char123{}devNo~\char61{}~4\char125{}}. Rather than deriving an automated default for the printing of \text{\tt Device} values (which would render them as eg \text{\tt \char34{}Device~Washer~6~3~2\char34{}}) we define our own custom instance, omitting the constructor name \text{\tt Device} and also the timing details. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{28}{@{}>{\hspre}c<{\hspost}@{}}% \column{28E}{@{}l@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{instance}\;\Conid{Show}\;\Conid{Device}\;\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{show}\;(\Conid{Device}\;\Varid{d}\;\Varid{n}\;\Varid{p}\;\Varid{m}){}\<[28]% \>[28]{}\mathrel{=}{}\<[28E]% \>[31]{}\Varid{show}\;\Varid{d}\plus \text{\tt \char34 ~\char34}\plus \Varid{show}\;\Varid{n}{}\<[E]% \ColumnHook \end{hscode}\resethooks When we come to define scheduling, the workflow just specifies a \text{\tt DeviceKind} but the scheduler must allocate a specific \text{\tt Device}. We capture the \text{\tt isA} relationship between devices and device-kinds in the following definition. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{29}{@{}>{\hspre}c<{\hspost}@{}}% \column{29E}{@{}l@{}}% \column{33}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{isA}\mathbin{::}\Conid{Device}\to \Conid{DeviceKind}\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{isA}\;(\Conid{Device}\;\Varid{sd}\;\Varid{n}\;\Varid{p}\;\Varid{m})\;\Varid{d}{}\<[29]% \>[29]{}\mathrel{=}{}\<[29E]% \>[33]{}\Varid{sd}==\Varid{d}{}\<[E]% \ColumnHook \end{hscode}\resethooks Functions are values too and they have types. The types declare properties about the function which can be statically checked by the compiler before the program is run. The first line describes the type of this function. The arrows can be read as logical implications. 
If the first argument is a value of type \text{\tt Device} then if the second is a value of type \text{\tt DeviceKind} then the result is a value of type Bool, a predefined type with the two values \text{\tt True} and \text{\tt False}. Although we can choose to declare the type of a function, in most cases a Haskell compiler can automatically derive this information, so the programmer need not provide it. However, we might choose to provide a type declaration in order to check that our understanding of the function's properties agrees with that derived by the compiler, or just to assist with code readability. Note that the infix \text{\tt \char61{}\char61{}} is a function that tests values for equality, not to be confused with the single \text{\tt \char61{}} symbol used to define a function. We can also choose to use infix notation when applying named functions: for example, we can write \text{\tt sd~\char96{}isA\char96{}~d} rather than \text{\tt isA~sd~d}, and the infix version makes the roles of \text{\tt sd} and \text{\tt d} clearer. \subsection*{Plates and Locations} We are working towards a representation of the complete state of the laboratory. So far we have a representation for devices, but not for the plates that are processed by these devices or for the robot arm that moves plates between them. Our next step is to introduce a type to represent the possible locations of plates in the lab. A plate's location is either at a device or it is in transit (by means of the robotic arm) between two devices. Rather than a long-winded constructor name such as \text{\tt InTransitByRoboticArm}, an infix constructor \text{\tt \char58{}\char45{}\char62{}} gives us a more convenient notation and makes the source and destination clearer. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{Loc}\mathrel{=}\Conid{At}\;\Conid{Device}\mid \Conid{Device}\mathbin{:->}\Conid{Device}\;\mathbf{deriving}\;(\Conid{Eq},\Conid{Show}){}\<[E]% \ColumnHook \end{hscode}\resethooks Example \text{\tt Loc} values in their printed form include \text{\tt At~Washer~3} and \text{\tt Washer~3~\char58{}\char45{}\char62{}~Dispenser~1}. Now we can define the datatype for Plates. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{25}{@{}>{\hspre}l<{\hspost}@{}}% \column{27}{@{}>{\hspre}l<{\hspost}@{}}% \column{37}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{Plate}\mathrel{=}\Conid{Plate}\;\{\mskip1.5mu {}\<[25]% \>[25]{}\Varid{plateNo}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Int},{}\<[E]% \\ \>[25]{}\Varid{plateLoc}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Loc},{}\<[E]% \\ \>[25]{}\Varid{plateSince}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Time},{}\<[E]% \\ \>[25]{}\Varid{plateFlow}{}\<[37]% \>[37]{}\mathbin{::}\Conid{Workflow}\mskip1.5mu\}{}\<[E]% \\ \>[25]{}\hsindent{2}{}\<[27]% \>[27]{}\mathbf{deriving}\;(\Conid{Eq}){}\<[E]% \ColumnHook \end{hscode}\resethooks The \text{\tt plateNo} is a number uniquely identifying this plate: each plate is allocated a number as it enters the system. The \text{\tt plateLoc} specifies the current location. The \text{\tt plateSince} represents the time at which either the plate arrived (for \text{\tt At} locations) or the transfer began (for \text{\tt \char58{}\char45{}\char62{}} locations). 
The \text{\tt plateFlow} is the remaining list of the kinds of devices this plate must visit. When we show a plate as a string, it is usually more convenient to omit the details of the remaining workflow for the plate. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{7}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{instance}\;\Conid{Show}\;\Conid{Plate}\;\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{show}\;(\Conid{Plate}\;\Varid{no}\;\Varid{loc}\;\Varid{since}\;\Varid{w})\mathrel{=}{}\<[E]% \\ \>[5]{}\hsindent{2}{}\<[7]% \>[7]{}\text{\tt \char34 plate~\char34}\plus \Varid{show}\;\Varid{no}\plus \text{\tt \char34 ,~\char34}\plus \Varid{show}\;\Varid{loc}\plus \text{\tt \char34 ~since~\char34}\plus \Varid{show}\;\Varid{since}{}\<[E]% \ColumnHook \end{hscode}\resethooks Two simple ``helper'' functions extract information from a \text{\tt Plate} value. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{39}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{inTransfer}\mathbin{::}\Conid{Plate}\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{inTransfer}\;(\Conid{Plate}\;\anonymous \;(\anonymous \mathbin{:->}\anonymous )\;\anonymous \;\anonymous ){}\<[39]% \>[39]{}\mathrel{=}\Conid{True}{}\<[E]% \\ \>[3]{}\Varid{inTransfer}\;\anonymous {}\<[39]% \>[39]{}\mathrel{=}\Conid{False}{}\<[E]% \ColumnHook \end{hscode}\resethooks \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{plateDestination}\mathbin{::}\Conid{Plate}\to \Conid{Device}{}\<[E]% \\ \>[3]{}\Varid{plateDestination}\;(\Conid{Plate}\;\anonymous \;(\anonymous \mathbin{:->}\Varid{d})\;\anonymous \;\anonymous )\mathrel{=}\Varid{d}{}\<[E]% \ColumnHook \end{hscode}\resethooks Notice that we don't have to give names to every component in the \text{\tt Plate} value. When a function does not need to refer to a component, we write \text{\tt \char95{}} in the argument pattern. With representation types in hand for devices, time and plates, we can complete the data model for the state of the lab-automation system with the following datatype declaration and associated Show instance. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{42}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{SysState}\mathrel{=}\Conid{SysState}\;\{\mskip1.5mu {}\<[31]% \>[31]{}\Varid{sysPlates}{}\<[42]% \>[42]{}\mathbin{::}[\mskip1.5mu \Conid{Plate}\mskip1.5mu],{}\<[E]% \\ \>[31]{}\Varid{sysDevs}{}\<[42]% \>[42]{}\mathbin{::}[\mskip1.5mu \Conid{Device}\mskip1.5mu],{}\<[E]% \\ \>[31]{}\Varid{sysTime}{}\<[42]% \>[42]{}\mathbin{::}\Conid{Time}\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\mathbf{instance}\;\Conid{Show}\;\Conid{SysState}\;\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{show}\;(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}\text{\tt \char34 t~=~\char34}\plus \Varid{show}\;\Varid{t}\plus \text{\tt \char34 :~\char34}\plus \Varid{show}\;\Varid{ps}{}\<[E]% \ColumnHook \end{hscode}\resethooks The time in each \text{\tt SysState} is the time at which that state exists. 
The plates list represents all the plates in the system at that time, including those in the \text{\tt OutStack} for which the workflow has been completed. The devices list represents every device that is in use or available for use at that time. \subsection*{Events and Scheduling Definition} We shall model the laboratory process as an event-driven system. By now it should be no surprise that we want to introduce a new data type, this time to model the four kinds of event that can occur. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{data}\;\Conid{InEvent}\mathrel{=}\Conid{Tick}\mid \Conid{NewPlate}\mid \Conid{DeviceUp}\;\Conid{Device}\mid \Conid{DeviceDown}\;\Conid{Device}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{deriving}\;\Conid{Show}{}\<[E]% \ColumnHook \end{hscode}\resethooks A \text{\tt Tick} event indicates the passage of time. A \text{\tt NewPlate} event represents the introduction of a new plate into the system. A \text{\tt DeviceUp} event represents the addition of a device, either by initial powering up and initialisation, or by the repair of a previously faulty device. A \text{\tt DeviceDown} event represents the failure or removal of a device, which then becomes unavailable. Now we can define the type of a \text{\tt Scheduler} for the lab as follows. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\mathbf{type}\;\Conid{Scheduler}\mathrel{=}\Conid{InEvent}\to \Conid{SysState}\to \Conid{SysState}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection*{Auxiliary Functions} We shall work towards the definition of an appropriate function of this type. To prepare the way, we shall first define some auxiliary functions to compute information that any scheduler could be expected to need. We shall then define an example scheduler. Importantly, we can define and compare many different schedulers. One property they must all share is the \text{\tt Scheduler} type, which makes it type-safe to plug in such code. We shall see some examples of the properties that can be analysed and compared later. First, a scheduler must be able to determine whether a device is currently free. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{8}{@{}>{\hspre}c<{\hspost}@{}}% \column{8E}{@{}l@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{freeDevice}\mathbin{::}\Conid{SysState}\to \Conid{Device}\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{freeDevice}\;\Varid{s}\;(\Conid{Device}\;\Conid{OutStack}\;\anonymous \;\anonymous \;\anonymous )\mathrel{=}\Conid{True}{}\<[E]% \\ \>[3]{}\Varid{freeDevice}\;(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\anonymous )\;\Varid{d}{}\<[E]% \\ \>[3]{}\hsindent{5}{}\<[8]% \>[8]{}\mathrel{=}{}\<[8E]% \>[11]{}\Varid{d}\in \Varid{ds}{}\<[E]% \\ \>[11]{}\mathrel{\wedge}\Varid{null}\;[\mskip1.5mu \Varid{p}\mid \Varid{p}\leftarrow \Varid{ps},\Varid{plateLoc}\;\Varid{p}==\Conid{At}\;\Varid{d}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks The first equation reflects our assumption that an \text{\tt OutStack} cannot fail. Note that this assumption is not encoded in the type system: a \text{\tt DeviceDown} event for an \text{\tt OutStack} would pass the type-checker.
We also assume that an \text{\tt OutStack} has infinite capacity, so is always free. Devices other than the \text{\tt OutStack} may fail and the \text{\tt d~\char96{}elem\char96{}~ds} condition checks whether the device is up. The other devices are also assumed to have a capacity of a single plate, so they are only free if there is not already a plate at the device. The expression \text{\tt \char91{}p~\char124{}~p~\char60{}\char45{}~ps\char44{}~plateLoc~p~\char61{}\char61{}~At~d\char93{}} is a list comprehension. An informal reading of this particular comprehension would be ‘the list of all elements \text{\tt p} with two qualifications: first \text{\tt p} is an item from the list \text{\tt ps}, and second \text{\tt p} satisfies the condition \text{\tt plateLoc~p~\char61{}\char61{}~At~d}. The first kind of qualification is termed a generator, and the second a filter, and in general a comprehension may have any number of qualifications of each kind. List comprehensions are a compact and powerful way to express many lists. First introduced in functional languages, they have since been adopted in many others, including Javascript, Python and LINQ within the .NET environment. Since the robot arm has a special status and is not modelled in the same way as other devices, a scheduler also needs a function to check for the availability of the robot arm. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{robotArmFree}\mathbin{::}\Conid{SysState}\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{robotArmFree}\;(\Conid{SysState}\;\Varid{ps}\;\anonymous \;\anonymous )\mathrel{=}\Varid{null}\;[\mskip1.5mu \Varid{p}\mid \Varid{p}\leftarrow \Varid{ps},\Varid{inTransfer}\;\Varid{p}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks This definition reflects a few assumptions. There is always exactly one robot arm. Unlike other devices that are affected by \text{\tt DeviceUp} and \text{\tt DeviceDown} events, the robot arm does not have to be initialised, and it cannot fail. It is free if it is not currently transferring a plate between devices. \subsection*{Scheduling Functions} Having defined auxiliary functions to test whether devices are ready to participate in moves, we next consider the readiness of plates. The following function checks whether a plate is ready to be moved from its current location. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{9}{@{}>{\hspre}l<{\hspost}@{}}% \column{28}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{ready}\mathbin{::}\Conid{Plate}\to \Conid{Time}\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{ready}\;\Varid{p}\;\Varid{t}\mathrel{=}\Varid{t}\geq \Varid{plateSince}\;\Varid{p}\mathbin{+}\Varid{timing}\;(\Varid{plateLoc}\;\Varid{p}){}\<[E]% \\ \>[3]{}\hsindent{6}{}\<[9]% \>[9]{}\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{6}{}\<[9]% \>[9]{}\Varid{timing}\;(\Conid{At}\;\Varid{d}){}\<[28]% \>[28]{}\mathrel{=}\Varid{devProcT}\;\Varid{d}{}\<[E]% \\ \>[3]{}\hsindent{6}{}\<[9]% \>[9]{}\Varid{timing}\;(\Varid{d1}\mathbin{:->}\Varid{d2})\mathrel{=}\Varid{devMoveT}\;\Varid{d1}\mathbin{+}\Varid{devMoveT}\;\Varid{d2}{}\<[E]% \ColumnHook \end{hscode}\resethooks A plate being processed by a device is ready if enough time has elapsed for the device to complete its process. 
A plate being moved by a robot arm is ready if enough time has elapsed for the required movements to and from the central safe position. An \text{\tt InEvent} determines a two stage transition between current and next system states. The first stage of this transition reflects the unavoidable consequences of the event: time advances, a new plate is added, a device goes down or a device comes up (the \text{\tt effectOf} function); and in addition, if the time required for a robot arm transfer has elapsed then the plate is delivered to the destination device (the \text{\tt putPlateIfReady} function). The second stage of the transition is then determined by the choices of a specific scheduler. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{consequences}\mathbin{::}\Conid{InEvent}\to \Conid{SysState}\to \Conid{SysState}{}\<[E]% \\ \>[3]{}\Varid{consequences}\;\Varid{ie}\;\Varid{s}\mathrel{=}\Varid{putPlateIfReady}\;(\Varid{effectOf}\;\Varid{ie}\;\Varid{s}){}\<[E]% \ColumnHook \end{hscode}\resethooks \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{13}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{effectOf}\mathbin{::}\Conid{InEvent}\to \Conid{SysState}\to \Conid{SysState}{}\<[E]% \\ \>[3]{}\Varid{effectOf}\;{}\<[13]% \>[13]{}\Conid{Tick}\;\Varid{s}{}\<[31]% \>[31]{}\mathrel{=}\Varid{s}\;\{\mskip1.5mu \Varid{sysTime}\mathrel{=}\Varid{sysTime}\;\Varid{s}\mathbin{+}\mathrm{1}\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\Varid{effectOf}\;{}\<[13]% \>[13]{}\Conid{NewPlate}\;\Varid{s}{}\<[31]% \>[31]{}\mathrel{=}\Varid{s}\;\{\mskip1.5mu \Varid{sysPlates}\mathrel{=}\Varid{newPlate}\;\Varid{s}\mathbin{:}\Varid{sysPlates}\;\Varid{s}\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\Varid{effectOf}\;{}\<[13]% \>[13]{}(\Conid{DeviceDown}\;\Varid{d})\;\Varid{s}{}\<[31]% \>[31]{}\mathrel{=}\Varid{s}\;\{\mskip1.5mu \Varid{sysDevs}\mathrel{=}(\Varid{sysDevs}\;\Varid{s})\mathbin{\char92 \char92 }[\mskip1.5mu \Varid{d}\mskip1.5mu]\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\Varid{effectOf}\;{}\<[13]% \>[13]{}(\Conid{DeviceUp}\;\Varid{d})\;\Varid{s}{}\<[31]% \>[31]{}\mathrel{=}\Varid{s}\;\{\mskip1.5mu \Varid{sysDevs}\mathrel{=}(\Varid{sysDevs}\;\Varid{s})\mathbin{`\Varid{union}`}[\mskip1.5mu \Varid{d}\mskip1.5mu]\mskip1.5mu\}{}\<[E]% \ColumnHook \end{hscode}\resethooks The infix functions \text{\tt \char92{}\char92{}} and \text{\tt \char96{}union\char96{}} for list difference and list union are defined in the standard Haskell library Data.List. 
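As a brief illustrative aside, the following fragment shows the behaviour of these two operators; it assumes only that Data.List is imported, as the definitions above already require, and the example names are ours and appear nowhere else in the program.
\begin{verbatim}
import Data.List ((\\), union)

-- illustrative definitions only, not used by the scheduler

-- list difference removes one occurrence of each listed element
diffExample :: [Int]
diffExample = [1,2,3,2] \\ [2]       -- evaluates to [1,3,2]

-- union appends only those elements not already present
unionExample :: [Int]
unionExample = [1,2] `union` [2,3]   -- evaluates to [1,2,3]
\end{verbatim}
Neither operator modifies its argument lists; \text{\tt effectOf} simply constructs a new device list, leaving the old one untouched. The auxiliary function \text{\tt newPlate} used in the \text{\tt NewPlate} case is defined next.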
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{6}{@{}>{\hspre}l<{\hspost}@{}}% \column{9}{@{}>{\hspre}l<{\hspost}@{}}% \column{17}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{newPlate}\mathbin{::}\Conid{SysState}\to \Conid{Plate}{}\<[E]% \\ \>[3]{}\Varid{newPlate}\;(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}\Conid{Plate}\;(\Varid{length}\;\Varid{ps}\mathbin{+}\mathrm{1})\;\Varid{newloc}\;\Varid{t}\;\Varid{w}{}\<[E]% \\ \>[3]{}\hsindent{3}{}\<[6]% \>[6]{}\mathbf{where}{}\<[E]% \\ \>[6]{}\hsindent{3}{}\<[9]% \>[9]{}\Varid{newloc}{}\<[17]% \>[17]{}\mathrel{=}\Conid{At}\;(\Varid{head}\;[\mskip1.5mu \Varid{d}\mid \Varid{d}\leftarrow \Varid{ds},\Varid{d}\mathbin{`\Varid{isA}`}\Varid{wd}\mskip1.5mu]){}\<[E]% \\ \>[6]{}\hsindent{3}{}\<[9]% \>[9]{}(\Varid{wd}\mathbin{:}\Varid{w}){}\<[17]% \>[17]{}\mathrel{=}\Varid{exampleWorkflow}{}\<[E]% \ColumnHook \end{hscode}\resethooks The pattern \text{\tt \char40{}wd\char58{}w\char41{}} on the left hand side of the last equation indicates that we expect \text{\tt exampleWorkflow} to be a list, with a first item \text{\tt wd}, followed by a possible empty list of other items \text{\tt w}. The \text{\tt putPlateIfReady} function is a little more complex. Its key component is a function \text{\tt relocated} that uses auxiliary functions already defined (\text{\tt inTransfer}, \text{\tt ready}, \text{\tt freeDevice}) to change the location of plates where appropriate. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{23}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{putPlateIfReady}\mathbin{::}\Conid{SysState}\to \Conid{SysState}{}\<[E]% \\ \>[3]{}\Varid{putPlateIfReady}\;\Varid{s}\mathord{@}(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{s}\;\{\mskip1.5mu \Varid{sysPlates}\mathrel{=}[\mskip1.5mu \Varid{relocated}\;\Varid{p}\;(\Varid{plateDestination}\;\Varid{p})\mid \Varid{p}\leftarrow \Varid{ps}\mskip1.5mu]\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{relocated}\mathbin{::}\Conid{Plate}\to \Conid{Device}\to \Conid{Plate}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{relocated}\;\Varid{p}\;\Varid{dp}\mathrel{=}{}\<[23]% \>[23]{}\mathbf{if}\;\Varid{inTransfer}\;\Varid{p}\mathrel{\wedge}\Varid{ready}\;\Varid{p}\;\Varid{t}\mathrel{\wedge}\Varid{freeDevice}\;\Varid{s}\;\Varid{dp}{}\<[E]% \\ \>[23]{}\mathbf{then}\;\Conid{Plate}\;(\Varid{plateNo}\;\Varid{p})\;(\Conid{At}\;\Varid{dp})\;\Varid{t}\;(\Varid{tail}\;(\Varid{plateFlow}\;\Varid{p})){}\<[E]% \\ \>[23]{}\mathbf{else}\;\Varid{p}{}\<[E]% \ColumnHook \end{hscode}\resethooks The @-character used in the definition of this function's \text{\tt SysState} parameter allows us to inspect and use the individual components of the \text{\tt SysState} (the \text{\tt ps}, the \text{\tt ds} and the \text{\tt t}), but also to refer to it in its entirety as the variable \text{\tt s}. The expression \text{\tt tail~\char40{}plateFlow~p\char41{}} represents progress in the workflow for a relocated plate. The workflow of a plate is held in memory as a shared list, referenced by all plates undergoing the same workflow. 
No item in the workflow list is destroyed by the application of \text{\tt tail}; all items remain available in memory for other plates. The \text{\tt tail} function simply returns a pointer to the next portion of the list, which is an efficient operation. The second stage of the transition involves a choice. If there are plates ready to be moved on from one device to another, a scheduler must choose a source device, a ready plate at that device and an appropriate destination for it. The following function lists all the possible options from which to choose. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{8}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{16}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{plateMoveChoices}\mathbin{::}\Conid{SysState}\to [\mskip1.5mu \Conid{SysState}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{plateMoveChoices}\;\Varid{s}\mathord{@}(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}[\mskip1.5mu \Varid{s}\;\{\mskip1.5mu \Varid{sysPlates}\mathrel{=}\Varid{p'}\mathbin{:}(\Varid{ps}\mathbin{\char92 \char92 }[\mskip1.5mu \Varid{p}\mskip1.5mu])\mskip1.5mu\}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mid {}\<[8]% \>[8]{}\Varid{d}{}\<[11]% \>[11]{}\leftarrow \Varid{sysDevs}\;\Varid{s},\neg \;(\Varid{d}\mathbin{`\Varid{isA}`}\Conid{OutStack}),{}\<[E]% \\ \>[8]{}\Varid{p}{}\<[11]% \>[11]{}\leftarrow \Varid{ps},\Varid{plateLoc}\;\Varid{p}==\Conid{At}\;\Varid{d},\Varid{ready}\;\Varid{p}\;\Varid{t},{}\<[E]% \\ \>[8]{}\Varid{d'}\leftarrow \Varid{nextDeviceChoices}\;\Varid{s}\;\Varid{p},{}\<[E]% \\ \>[8]{}\hsindent{8}{}\<[16]% \>[16]{}\mathbf{let}\;\Varid{p'}\mathrel{=}\Varid{p}\;\{\mskip1.5mu \Varid{plateLoc}\mathrel{=}\Varid{d}\mathbin{:->}\Varid{d'},\Varid{plateSince}\mathrel{=}\Varid{t}\mskip1.5mu\}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks The \text{\tt plateMoveChoices} function examines all the devices \text{\tt ds} in the system. For each device \text{\tt d}, apart from the \text{\tt OutStack}s, it determines the plates \text{\tt ps} that are at \text{\tt d} and ready to be moved on. For each such plate \text{\tt p} we call \text{\tt nextDeviceChoices} to work out the possible devices \text{\tt d\char39{}} to which \text{\tt p} could be transferred next. There are many potential plate move choices that could be evaluated here. However, Haskell is a lazy language and will only evaluate as many as necessary to find a solution [8]. The primed variable names \text{\tt d\char39{}} and \text{\tt p\char39{}} are appropriate for derived values. This convention for the naming of variables is also widely used in mathematics for the same purpose. The function \text{\tt nextDeviceChoices} works out the list of possible next devices for a plate. It selects from the device list those of the appropriate kind that are currently free. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{12}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{nextDeviceChoices}\mathbin{::}\Conid{SysState}\to \Conid{Plate}\to [\mskip1.5mu \Conid{Device}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{nextDeviceChoices}\;\Varid{s}\;\Varid{p}\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}[\mskip1.5mu \Varid{d}\mid \Varid{d}\leftarrow \Varid{ds},\Varid{d}\mathbin{`\Varid{isA}`}\Varid{dk},\Varid{freeDevice}\;\Varid{s}\;\Varid{d}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}(\Varid{dk}\mathbin{:\char95 })\mathrel{=}\Varid{plateFlow}\;\Varid{p}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{ds}{}\<[12]% \>[12]{}\mathrel{=}\Varid{sysDevs}\;\Varid{s}{}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection*{The Scheduler} Now we are ready to define a specific simple \text{\tt Scheduler}. It simply chooses the first of all the available options. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{scheduler}\mathbin{::}\Conid{Scheduler}{}\<[E]% \\ \>[3]{}\Varid{scheduler}\;\Varid{ie}\;\Varid{s}\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{if}\;\Varid{robotArmFree}\;\Varid{s'}\;\mathbf{then}\;\Varid{head}\;(\Varid{plateMoveChoices}\;\Varid{s'}\plus [\mskip1.5mu \Varid{s'}\mskip1.5mu]){}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{else}\;\Varid{s'}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{s'}\mathrel{=}\Varid{consequences}\;\Varid{ie}\;\Varid{s}{}\<[E]% \ColumnHook \end{hscode}\resethooks A more complex scheduler might analyse the workflow and the current state more deeply. It may be necessary, for example, to prioritise moves from devices with plates that have been waiting for the longest time or to relocate plates that require urgent incubation to avoid temperature changes. Although the functions \text{\tt consequences} and \text{\tt effectOf} also have the \text{\tt Scheduler} type, they are too limited to be useful schedulers by themselves. They only deal with the unavoidable consequences of an event, and make no further decisions. The entire laboratory process can now be represented by a function from a sequence of events and an initial system state to a sequence of system states. We can think of state sequences as a representation of the behaviour of the lab-automation system. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{19}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{run}\mathbin{::}\Conid{SysState}\to [\mskip1.5mu \Conid{InEvent}\mskip1.5mu]\to [\mskip1.5mu \Conid{SysState}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{run}\;\Varid{s}\;[\mskip1.5mu \mskip1.5mu]{}\<[19]% \>[19]{}\mathrel{=}[\mskip1.5mu \Varid{s}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{run}\;\Varid{s}\;(\Varid{ie}\mathbin{:}\Varid{ies}){}\<[19]% \>[19]{}\mathrel{=}\Varid{s}\mathbin{:}\Varid{run}\;(\Varid{scheduler}\;\Varid{ie}\;\Varid{s})\;\Varid{ies}{}\<[E]% \ColumnHook \end{hscode}\resethooks This function \text{\tt run} is defined recursively. 
When \text{\tt run} is applied to a state and a list of input events containing a first input event, the result is a list of states, beginning with the original state. The other states in the list are produced by the application of run to the remaining input events, but run must now use the new state that resulted from the scheduler's decisions after dealing with that first input event. The resulting list of \text{\tt SysStates} is produced lazily and will only be extended as the input events occur. The states produced may be logged and immediately discarded or may be retained for further processing. At the moment the output is a plate-centred view, but this could be changed to produce different system views as required. For example, we could derive from the \text{\tt SysState} list either a device view or an event-log view. In the initial system state, no plates have yet been supplied as input, and no devices are initialised. The time is ``zero''. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{initialSysState}\mathrel{=}\Conid{SysState}\;\{\mskip1.5mu \Varid{sysPlates}\mathrel{=}[\mskip1.5mu \mskip1.5mu],\Varid{sysDevs}\mathrel{=}[\mskip1.5mu \mskip1.5mu],\Varid{sysTime}\mathrel{=}\mathrm{0}\mskip1.5mu\}{}\<[E]% \ColumnHook \end{hscode}\resethooks The following example shows the use of the \text{\tt run} function, with the above \text{\tt initialSysState} as one argument, and a sequence of input events as the other. This sequence of input events begins with the devices being initialised, and then consists of an infinite cycle of a new plate addition and then two time ticks. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{13}{@{}>{\hspre}c<{\hspost}@{}}% \column{13E}{@{}l@{}}% \column{16}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{eg}\mathbin{::}[\mskip1.5mu \Conid{SysState}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{eg}\mathrel{=}\Varid{run}\;\Varid{initialSysState}\;{}\<[E]% \\ \>[3]{}\hsindent{10}{}\<[13]% \>[13]{}({}\<[13E]% \>[16]{}[\mskip1.5mu \Conid{DeviceUp}\;\Varid{d}\mid \Varid{d}\leftarrow \Varid{initialdevices}\mskip1.5mu]{}\<[E]% \\ \>[16]{}\plus \Varid{cycle}\;[\mskip1.5mu \Conid{NewPlate},\Conid{Tick},\Conid{Tick}\mskip1.5mu]){}\<[E]% \ColumnHook \end{hscode}\resethooks Now we give an example list of initial devices. We have six washers and two dispensers, with varying process times and access times. The order in which the devices are listed here influences the order in which available choices are listed by \text{\tt nextDeviceChoices} and \text{\tt plateMoveChoices}. So for the scheduler which simply selects the first choice, it affects the plate moves that are made. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{21}{@{}>{\hspre}l<{\hspost}@{}}% \column{40}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{initialdevices}\mathbin{::}[\mskip1.5mu \Conid{Device}\mskip1.5mu]{}\<[E]% \\ \>[3]{}\Varid{initialdevices}\mathrel{=}{}\<[21]% \>[21]{}[\mskip1.5mu \Conid{Device}\;\Conid{InStack}\;{}\<[40]% \>[40]{}\mathrm{1}\;\mathrm{0}\;\mathrm{1}\mskip1.5mu]\plus {}\<[E]% \\ \>[21]{}[\mskip1.5mu \Conid{Device}\;\Conid{Washer}\;{}\<[40]% \>[40]{}\Varid{n}\;\mathrm{4}\;\Varid{n}\mid \Varid{n}\leftarrow [\mskip1.5mu \mathrm{1}\mathinner{\ldotp\ldotp}\mathrm{6}\mskip1.5mu]\mskip1.5mu]\plus {}\<[E]% \\ \>[21]{}[\mskip1.5mu \Conid{Device}\;\Conid{Dispenser}\;{}\<[40]% \>[40]{}\Varid{n}\;\mathrm{2}\;\Varid{n}\mid \Varid{n}\leftarrow [\mskip1.5mu \mathrm{1}\mathinner{\ldotp\ldotp}\mathrm{2}\mskip1.5mu]\mskip1.5mu]\plus {}\<[E]% \\ \>[21]{}[\mskip1.5mu \Conid{Device}\;\Conid{OutStack}\;{}\<[40]% \>[40]{}\mathrm{1}\;\mathrm{0}\;\mathrm{1}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks Now that we have defined a scheduler that can make choices, and chosen some initial devices with particular timings, we want some way to inspect the consequences of making those choices. \subsection*{Properties and Automated Testing} Many intended properties of the component functions of a Haskell program can themselves be defined as functions with \text{\tt Bool} results. Such property functions are, by convention, given names starting \text{\tt prop\char95{}}. They are expected to return \text{\tt True} for all possible choices of correctly typed input arguments. Libraries such as QuickCheck [6] and SmallCheck [7] support automatic property-based testing. They exploit Haskell’s type system to generate many possible values for a property’s input arguments, test the property’s result in each case and report any failing case. To illustrate, recall two important properties a well-designed scheduler should have: \begin{itemize} \item each plate progresses through its workflow in a timely manner -- no state occurs in which a plate has been at a device for too long; \item the whole system is deadlock free -- no state occurs in which at least one plate has an unfinished workflow, but there is nothing the system can do to make progress. \end{itemize} Both properties concern undesirable states that might arise after any possible sequence of events. 
So lists of \text{\tt InEvents} are suitable input arguments for these properties, and we define them as follows: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{6}{@{}>{\hspre}l<{\hspost}@{}}% \column{18}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{prop\char95 OverstayFree}\mathbin{::}\Conid{Time}\to [\mskip1.5mu \Conid{InEvent}\mskip1.5mu]\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{prop\char95 OverstayFree}\;\Varid{maxDelay}\;\Varid{ies}\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{3}{}\<[6]% \>[6]{}\Varid{null}\;[\mskip1.5mu \Varid{s}\mid {}\<[18]% \>[18]{}\Varid{s}\leftarrow \Varid{run}\;\Varid{initialSysState}\;\Varid{ies},{}\<[E]% \\ \>[18]{}\Varid{hasOverstayedPlate}\;\Varid{maxDelay}\;\Varid{s}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{6}{@{}>{\hspre}l<{\hspost}@{}}% \column{18}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{prop\char95 DeadlockFree}\mathbin{::}[\mskip1.5mu \Conid{InEvent}\mskip1.5mu]\to \Conid{Bool}{}\<[E]% \\ \>[3]{}\Varid{prop\char95 DeadlockFree}\;\Varid{ies}\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{3}{}\<[6]% \>[6]{}\Varid{null}\;[\mskip1.5mu \Varid{s}\mid {}\<[18]% \>[18]{}\Varid{s}\leftarrow \Varid{run}\;\Varid{initialSysState}\;\Varid{ies},{}\<[E]% \\ \>[18]{}\Varid{isDeadlocked}\;\Varid{s}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks In each case, the comprehension expresses a list of undesirable states. These lists should be empty. A simple definition of an overstayed plate is one that has been at a device or in transit for longer than \text{\tt maxDelay} clockticks. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{6}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{hasOverstayedPlate}\;\Varid{maxDelay}\;(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{3}{}\<[6]% \>[6]{}\Varid{nonEmpty}\;[\mskip1.5mu \Varid{p}\mid \Varid{p}\leftarrow \Varid{ps},\Varid{t}\mathbin{-}\Varid{plateSince}\;\Varid{p}\geq \Varid{maxDelay}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks It is trickier to specify just what we mean by a deadlocked system. A state is deadlocked if there is at least one plate in the system with a workflow not yet completed, but for no such plate is there either (1) more time needed at the current location, or (2) a free destination (for a plate in transfer), or (3) a possible choice of next device (for a plate ready to leave its current device). 
So we define \text{\tt isDeadlocked} as follows: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{5}{@{}>{\hspre}l<{\hspost}@{}}% \column{16}{@{}>{\hspre}l<{\hspost}@{}}% \column{24}{@{}>{\hspre}l<{\hspost}@{}}% \column{29}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{isDeadlocked}\;\Varid{s}\mathord{@}(\Conid{SysState}\;\Varid{ps}\;\Varid{ds}\;\Varid{t})\mathrel{=}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{nonEmpty}\;\Varid{activePlates}\mathrel{\wedge}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{null}\;[\mskip1.5mu \Varid{p}\mid {}\<[16]% \>[16]{}\Varid{p}\leftarrow \Varid{activePlates},{}\<[E]% \\ \>[16]{}\neg \;(\Varid{ready}\;\Varid{p}\;\Varid{t})\mathrel{\vee}{}\<[E]% \\ \>[16]{}\Varid{inTransfer}\;\Varid{p}\mathrel{\wedge}\Varid{freeDevice}\;\Varid{s}\;(\Varid{plateDestination}\;\Varid{p})\mathrel{\vee}{}\<[E]% \\ \>[16]{}\neg \;(\Varid{inTransfer}\;\Varid{p})\mathrel{\wedge}\Varid{nonEmpty}\;(\Varid{nextDeviceChoices}\;\Varid{s}\;\Varid{p})\mskip1.5mu]{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\mathbf{where}{}\<[E]% \\ \>[3]{}\hsindent{2}{}\<[5]% \>[5]{}\Varid{activePlates}\mathrel{=}[\mskip1.5mu \Varid{p}{}\<[24]% \>[24]{}\mid \Varid{p}{}\<[29]% \>[29]{}\leftarrow \Varid{ps},\Varid{nonEmpty}\;(\Varid{plateFlow}\;\Varid{p})\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks We can now ask QuickCheck to check the properties \text{\tt prop\char95{}OverstayFree} and \text{\tt prop\char95{}DeadlockFree} for any particular system configuration. \section{Discussion and Conclusion} The code we have presented provides a complete and executable scheduler for an example laboratory automation system. The scheduler has various properties that can be verified by the type system and by property-based testing. Along the way, we have also defined many boolean-valued functions (such as \text{\tt isA}, \text{\tt ready}, \text{\tt inTransfer}, and \text{\tt freeDevice}). Beyond their use in the program itself, these functions provide a useful vocabulary when formulating testable properties. Writing properties is not always easy, and requires the programmer to think deeply about the system. Writing properties for QuickCheck gives us two advantages: automating the testing and making the programmer's understanding of the system more explicit. The second advantage is just as important as the first for ensuring the correctness of the code. When writing the isDeadlock property, we first began by stating simpler criteria, but soon discovered that our initial criteria did not capture all of the possible deadlock scenarios, and this forced a better understanding of what might cause deadlock. With the initially simpler deadlock criteria, QuickCheck reported the success of the test, but the action log showed plates that were not moving through the system. Inspecting this log led us to understand further deadlock-causing scenarios, and to improve the description of deadlock. We can use this code not only as a scheduler, but also as a simulator, to test out various equipment configurations, and test for desired properties. If we find that a particular equipment configuration creates a potential deadlock, for example, it is easy to try specifying faster or extra equipment and retest \text{\tt prop\char95{}DeadlockFree}. The system we have described here has deliberately been kept simple, in order to explain the concepts of functional programming with a concrete example. 
However, one could easily model variations: e.g. a plate capacity for each device, different maximum-delay periods for different devices, different workflows for different plates, or multiple plates per workflow as in a reformatting liquid handling process. Functional programming is a style that encourages high-level thinking about the specification and desired properties of a system, rather than low-level sequential programming of actions to be performed. In return for specifying and declaring properties, the programmer benefits from the guarantees of type safety and automated property-based testing. One of our main purposes in writing this article is to encourage the wider adoption of such practices in laboratory automation. \section{Further Reading} \begin{itemize} \item The Commercial Users of Functional Programming annual conference and website (\url{http://cufp.org}) hosts tutorials, talks and Birds of a Feather sessions for practitioners (accessed October 2013) \item The Haskell in Industry website (\url{http://www.haskell.org/haskellwiki/Haskell_in_industry}) provides further case studies and support (accessed October 2013) \item Programming in Haskell (2007) Graham Hutton, Cambridge Uni Press \item Get started with Haskell (installation and tutorial help) \url{http://learnyouahaskell.com/introduction} (accessed October 2013) \end{itemize} \section{References} \begin{enumerate} \item Delaney N., Echenique J., Marx C. Clarity - An Open-Source Manager for Laboratory Automation. Journal of Laboratory Automation 2013, 18, 171-177. \item Harkness R., Crook M., Povey D. Programming Review of Visual Basic.NET for the Laboratory Automation Industry. Journal of Laboratory Automation. 2007, 12, 25-32. \item Syme D., Granicz A., Cisternino A. Expert F\# 3.0; Apress, 2012, New York. \item Shäfer R. Concepts for Dynamic Scheduling in the Laboratory. Journal of Laboratory Automation 2004, 9, 382-397. \item Harkness R. Novel Software Solutions for Automating Biochemical Assays. PhD Thesis, University of Surrey, Surrey, UK, 2010. \item Claessen K., Hughes J. QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs. In Proceedings of the International Conference on Functional Programming (ICFP). ACM: New York, 2000. \item Runciman C., Naylor M., Lindblad F. SmallCheck and Lazy SmallCheck: Automatic Testing for Small Values. In Proceedings of the Haskell Symposium. ACM: New York, 2008. \item Hudak P., Hughes J., Peyton Jones S., Wadler P. In A History of Haskell: Being Lazy with Class, Third ACM SIGPLAN History of Programming Languages Conference (HOPL-III), San Diego, CA, June 9–10, 2007. \end{enumerate} \end{document}
\section{Event Unit} \pulpino features a lightweight event and interrupt unit which supports vectorized interrupts of up to 32 lines and event triggering of up to 32 input lines. The interrupt and event lines are separately masked and buffered, see Figure~\ref{fig:event_unit}. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{./figures/event_unit} \caption{Event Unit.} \label{fig:event_unit} \end{figure} The current assignment of event and interrupt lines is given in Figure~\ref{fig:event_lines}. Note that \signal{irq\_i} and \signal{event\_i} are bound together. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{./figures/event_lines} \caption{Event Lines.} \label{fig:event_lines} \end{figure} \regDesc{0x1A10\_4000}{0x0000\_0000}{IER (Interrupt Enable)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{IER} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{IER}{Interrupt Enable.\\ Enable interrupts per line. } } \regDesc{0x1A10\_4004}{0x0000\_0000}{IPR (Interrupt Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{IPR} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{IPR}{Interrupt Pending.\\ Write/read pending interrupts per line. 
} } \regDesc{0x1A10\_4008}{0x0000\_0000}{ISP (Interrupt Set Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{ISP} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{ISP}{Interrupt Set Pending.\\ Set interrupt pending register per line. By setting a bit here, an interrupt will be triggered on the selected line(s). } } \regDesc{0x1A10\_400C}{0x0000\_0000}{ICP (Interrupt Clear Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{ICP} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{ICP}{Interrupt Clear Pending.\\ Clear pending interrupt. By setting a bit here, a pending interrupt will be cleared. } } \regDesc{0x1A10\_4010}{0x0000\_0000}{EER (Event Enable)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{EER} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \bitbox{1}{\tiny E} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{EER}{Event Enable.\\ Enable events per line. 
} } \regDesc{0x1A10\_4014}{0x0000\_0000}{EPR (Event Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{EPR} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \bitbox{1}{\tiny P} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{EPR}{Event Pending.\\ Write/read pending events per line. } } \regDesc{0x1A10\_4018}{0x0000\_0000}{ESP (Event Set Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{ESP} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \bitbox{1}{\tiny S} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{ESP}{Event Set Pending.\\ Set event pending register per line. By setting a bit here, an event will be set on the selected line(s). } } \regDesc{0x1A10\_401C}{0x0000\_0000}{ECP (Event Clear Pending)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{ECP} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \bitbox{1}{\tiny C} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 31:0}{ECP}{Event Clear Pending.\\ Clear pending event. By setting a bit here, a pending event will be cleared. } } \regDesc{0x1A10\_4020}{0x0000\_0000}{SCR (Sleep Control)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{SCR} \bitbox{31}{Unused} \bitbox{1}{\tiny E} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 0}{E}{Sleep Enabled.\\ Put the core to sleep. 
The core will be woken up again when there is an interrupt or event. } } \regDesc{0x1A10\_4024}{0x0000\_0000}{SSR (Sleep Status)}{ \begin{bytefield}[rightcurly=.,endianness=big]{32} \bitheader{31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0} \\ \begin{rightwordgroup}{SSR} \bitbox{31}{Unused} \bitbox{1}{\tiny S} \end{rightwordgroup}\\ \end{bytefield} }{ \regItem{Bit 0}{S}{Sleep Status.\\ Set if the core is currently asleep and has its clock gated. } }
\documentclass[./\jobname.tex]{subfiles} \begin{document} \section {Experiment 2: Adaptive Number of Kernels} \label{chap:experimet_2} Although the parallel algorithm is effectively faster, the quality of the achieved solution is still not good enough. A common inaccuracy, especially with the testbed \gls{pde} 0A, is that not all Gauss ``bumps'' are represented in the approximation. \subsection{Hypotheses} The idea tested here is an adaptive scheme for the number of kernels used. This new concept requires a convergence-based halting criterion in the JADE algorithm. The algorithm is extended by a so-called ``state detector''. Ideally, the state detector should stop the overall optimisation loop as soon as the algorithm has converged and before the function evaluation budget is exceeded. Generally, this is done by checking whether the best function value has remained unchanged for a certain number of generations. The state detector introduces a new parameter. The \gls{dt} represents the number of generations over which the best function value must remain unchanged. It can also be thought of as a buffer time that allows the \gls{de} parameters F and CR to self-adapt. Further, the minError parameter has a new purpose. It is the largest change in the function value over \gls{dt} generations that is still treated as convergence. The new paJADE is wrapped into the memetic framework. A flowchart of the process is shown in Figure \ref{fig:uml_flow_adaptive_scheme}. The algorithm always starts with one kernel. From there on, the number of kernels is increased. After the ``state detector'' has stopped the paJADE, the \gls{ds} is employed on the best individual. If the last JADE/DS cycle was able to improve the function value, it is assumed that the best solution for that dimensionality is found. Thus, to further improve the approximation quality, the number of kernels must be increased. If the function value could not be decreased, a restart around the previous best population is performed. \begin{figure}[H] \centering \noindent\adjustbox{max width=0.9\linewidth}{ \includegraphics[width=\textwidth]{../../code/uml_diag/adaptive_kernels_flowchart.pdf} } \unterschrift{Flowchart of the adaptive kernel scheme.}{}{} \label{fig:uml_flow_adaptive_scheme} \end{figure} This adaptive scheme operates under three strong assumptions. To reduce their possible negative impact, corresponding counter-strategies are implemented. \begin{itemize} \item \underline{\textbf{Assumption 1:}} The optimisation algorithm (JADE + \gls{ds}) finds (a close approximation to) the global optimum. This would be the best approximation of the solution by $N$ kernels. Obviously, this property does not necessarily hold. To mitigate the consequences when it does not, restarts are performed. \item \underline{\textbf{Assumption 2:}} The theoretically best achievable solution quality increases with the number of kernels. After a maximum number of kernels is reached, the quality cannot be surpassed. Based on this assumption, the algorithm starts with one kernel and the dimensionality increases by only one kernel at a time. Generally, the maximum number of kernels is not known except for \gls{pde} 0A and \gls{pde} 6. \item \underline{\textbf{Assumption 3:}} The best approximation to a particular problem by, e.g., 3 kernels is independent of the best approximation by 4 kernels. This means that, going from 3 to 4 kernels, a new kernel is simply introduced without altering the other 3. Again, this is not true for every \gls{pde}.
Preliminary experiments on \gls{pde} 0A have confirmed this assumption, while on \gls{pde} 2 the solution can not simply be decomposed into independent kernels. In this algorithm, the first kernels are allowed to change. When introducing a new random kernel, it is simply appended to the ever evolving $\mathbf{p_{apx}}$ vector. Thus, the search for the 4th kernel starts where the best approximation for 3 kernels was found, but since the earlier kernels are allowed to readapt, other solutions can be retrieved. \end{itemize} \subsection{Experiment Setup} Again, as in the experiments before, machine 1 runs at $10^4$ \gls{nfe} and machine 2 performs $10^6$ \gls{nfe}. The number of kernels is adapted, but the algorithm starts with 1 \gls{gak}. Thus, the dimension is 4 and the population size is 8. The population size gets corrected if the number of kernels changes. The two new parameters \gls{dt} and minError must be set. The minError is again set to 0. The delay time \gls{dt} is set to 100. This choice is rather arbitrary and depending on the \gls{pde}, different values might be more successful. However, this property is not analysed in the current experiment. \subsection{Results} \label{chap:results_ex2} The table \ref{tab:compare_mpj_mpja_10^6} shows the L2 norm data obtained by the adaptive JADE and compares them against the results from the parallel JADE. The Wilcoxon test indicates mixed results. The adaptive kernel scheme works fine on the \gls{pde} 0A, but it also produces significantly worse results on the problems \gls{pde} 2, 3, 4 and 7. \begin{table}[h] \centering \noindent\adjustbox{max width=\linewidth}{ \begin{tabular}{|c|c|c|c|c|l|} \hline \rowcolor[HTML]{\farbeTabA} Algorithm & \multicolumn{2}{|c|}{parallel JADE $10^6$ \gls{nfe}} & \multicolumn{2}{|c|}{adaptive JADE $10^6$ \gls{nfe}} & \\ \hline stat & mean & median & mean & median & Wilcoxon Test \\ \hline \hline \gls{pde} 0A & 0.6939 $\pm$ 0.6635 & 0.9243 & 9.694E-16 $\pm$ 1.486E-16 & 9.255E-16 & sig. better \\ \hline \gls{pde} 0B & 0.2809 $\pm$ 0.3071 & 0.2035 & 0.2380 $\pm$ 0.0572 & 0.2607 & unsig. undecided \\ \hline \gls{pde} 1 & 0.0239 $\pm$ 0.0467 & 0.0146 & 0.0116 $\pm$ 0.0061 & 0.0084 & unsig. better \\ \hline \gls{pde} 2 & 0.0300 $\pm$ 0.0157 & 0.0255 & 0.0735 $\pm$ 0.0358 & 0.1034 & sig. worse \\ \hline \gls{pde} 3 & 0.0371 $\pm$ 0.0206 & 0.0295 & 0.1731 $\pm$ 0.0395 & 0.1822 & sig. worse \\ \hline \gls{pde} 4 & 0.0505 $\pm$ 0.0121 & 0.0481 & 0.0707 $\pm$ 0.0053 & 0.0720 & sig. worse\\ \hline \gls{pde} 5 & 1.2030 $\pm$ 0.0465 & 1.2053 & 122.6312 $\pm$ 372.5676 & 1.1643 & unsig. undecided \\ \hline \gls{pde} 6 & 0.5814 $\pm$ 1.3550 & 1.266E-17 & 0.4428 $\pm$ 1.0980 & 1.266E-17 & unsig. undecided \\ \hline \gls{pde} 7 & 0.0228 $\pm$ 0.0025 & 0.0226 & 0.0513 $\pm$ 0.0442 & 0.0231 & sig. worse \\ \hline \gls{pde} 8 & 0.2167 $\pm$ 0.0017 & 0.2169 & 0.2144 $\pm$ 0.0044 & 0.2128 & unsig. better \\ \hline \gls{pde} 9 & 0.0426 $\pm$ 0.0115 & 0.0463 & 0.0483 $\pm$ 0.0149 & 0.0468 & unsig. worse \\ \hline \end{tabular} } \unterschrift{Comparison of the achieved L2 norm by the pJADE and the paJADE at $10^6$ \gls{nfe}.}{}{} \label{tab:compare_mpj_mpja_10^6} \end{table} \subsection{Discussion} \subsubsection{PDE 0A} \label{chap:ex2_discussion_pde0a} As noted before, the testbed \gls{pde} is especially designed to be solved by 5 \gls{gak}. The common problem, that not all kernels are established, is solved by the adaptive strategy. All 20 replications generate at least 5 kernels. 
However, some solutions are composed of 6 kernels, but this has only a limited effect on the numerical value of the solution quality. Generally, 6 kernels tend to produce worse solutions. The results by the \gls{ci} solver can even compete with the \gls{fem} solver results from table \ref{tab:fem_sol_quality}. \subsubsection{Significantly Worse Quality} \label{chap:pde 2 3 4 7} The Wilcoxon significance test of table \ref{tab:compare_mpj_mpja_10^6} shows that the adaptive scheme is worse for the \gls{pde}s 2, 3, 4 and 7. On these test problems the solver frequently results in a smaller number of kernels, where the majority of runs even produce less than 5 \gls{gak}. This phenomenon points towards a shared problem where the solver does not increase the number of kernels consistently. Figure \ref{fig:pajade_pde2347_kernels_l2norm} plots the solution quality against its number of kernels. It is clearly shown that on these \gls{pde}s, more kernels strongly correlate with a better quality. \begin{figure}[h] \centering \noindent\adjustbox{max width=0.7\linewidth}{ \includegraphics[width=\textwidth]{../../code/experiments/experiment_2/pde_2_3_4_7_kernels_vs_l2norm.pdf} } \unterschrift{Semi-logarithmic plot of the correlation between the L2 norm and the number of kernels. }{}{} \label{fig:pajade_pde2347_kernels_l2norm} \end{figure} It seems that JADE exploits some areas long enough so that it does not terminate due to convergence. Thus, the number of kernels is not increased, which leads to a poor approximation quality. A simple solution to mitigate this issue might be to adjust the parameters of the ``state-detector''. In this experiment, $minError = 0$ is used, however it might be beneficial to allow small changes in the function value and still terminate. \\ \textbf{\underline{Parameter Adaption: $minError$}} \\ In this ``sub-experiment'' the effect of increasing the $minError$ parameter is examined. Therefore, the same algorithm is rerun on the \gls{pde}s 2, 3, 4 and 7 at four different $minError$ levels. Again, 20 replications are done. It is expected that the average number of kernels is increased. Simultaneously, the approximation quality should become better. As expected, the average number of kernels in the solution gets increased. This is confirmed by the plot in figure \ref{fig:subexperiment_pde2347_minerror_kernelNR}. Figure \ref{fig:subexperiment_pde2347_minerror_l2norm} shows the connection between the median L2 norm and the $minError$. The distance to the analytical solution decreases on \gls{pde} 2 and 3. However, this does not improve the results of \gls{pde} 4 and 7, where the quality stays roughly on the same level. This is supported statistically by the Wilcoxon test in table \ref{tab:statistical_test_minError}. \begin{figure}[h] \centering \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\textwidth]{../../code/experiments/misc/pde2347_minError_kernelNR.pdf} \caption{Plot of the mean number of kernels against the $minError$.} \label{fig:subexperiment_pde2347_minerror_kernelNR} \end{subfigure}% % \begin{subfigure}[b]{0.52\linewidth} \centering \includegraphics[width=1\textwidth]{../../code/experiments/misc/pde2347_minError_L2norm.pdf} \caption{Plot of the median L2 norm against the $minError$.} \label{fig:subexperiment_pde2347_minerror_l2norm} \end{subfigure}% \unterschrift{Comparison of $minError$ against the number of kernels and the achieved solution quality. 
}{}{}%
\label{fig:subexperiment_pde2347_minerror}
\end{figure}

\begin{table}[h]
\centering
\noindent\adjustbox{max width=\linewidth}{
\begin{tabular}{|c|c|c|c|c|l|}
\hline
\rowcolor[HTML]{\farbeTabA}
Setup & \multicolumn{2}{|c|}{$minError = 0$; $10^6$ \gls{nfe}} & \multicolumn{2}{|c|}{$minError = 10^{-1}$; $10^6$ \gls{nfe}} & \\ \hline
stat & mean & median & mean & median & Wilcoxon Test \\ \hline \hline
\gls{pde} 2 & 0.0735 $\pm$ 0.0358 & 0.1034 & 0.0418 $\pm$ 0.0156 & 0.0389 & sig. better \\ \hline
\gls{pde} 3 & 0.1731 $\pm$ 0.0395 & 0.1822 & 0.0455 $\pm$ 0.0406 & 0.0331 & sig. better \\ \hline
\gls{pde} 4 & 0.0707 $\pm$ 0.0053 & 0.0720 & 0.0726 $\pm$ 0.0080 & 0.0744 & unsig. worse \\ \hline
\gls{pde} 7 & 0.0513 $\pm$ 0.0442 & 0.0231 & 0.0287 $\pm$ 0.0045 & 0.0279 & unsig. undecided \\ \hline
\end{tabular}
}
\unterschrift{Statistical comparison of the achieved L2 norm by paJADE with $minError = 0$ and $minError = 10^{-1}$ after $10^6$ \gls{nfe}.}{}{}
\label{tab:statistical_test_minError}
\end{table}

Although the results on \gls{pde} 2 and 3 do get significantly better, the adaptive process with a greater $minError$ introduces a larger spread of the results, both in the number of kernels and in the reached L2 norm. This can be seen in figure \ref{fig:subexperiment_pde2347_kernels_l2norm}. Compared to the same plot at $minError = 0$, the coefficient of determination $R^2$ is smaller, indicating a weaker correlation and a greater spread.

\begin{figure}[h]
\centering
\noindent\adjustbox{max width=0.8\linewidth}{
\includegraphics[width=\textwidth]{../../code/experiments/misc/pde2347_L2norm_kernelNR.pdf}
}
\unterschrift{Semi-logarithmic plot of the correlation between the L2 norm and the number of kernels. The results are produced with $minError = 10^{-1}$ after $10^6$ \gls{nfe}.}{}{}
\label{fig:subexperiment_pde2347_kernels_l2norm}
\end{figure}

\subsubsection{PDE 5}

The results presented in table \ref{tab:compare_mpj_mpja_10^6} show an interesting observation for testbed problem 5. The mean L2 norm of the adaptive scheme is very large, but the median is slightly smaller than the median of the non-adaptive JADE. The Wilcoxon test reveals an insignificant difference, which hints that the adaptive scheme includes some very large outliers. This is demonstrated by comparing the box plots of both L2 norm distributions in figure \ref{fig:paJADE_pde5_l2norm_boxplot}. The same data is shown with and without the outlier.

\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=1\textwidth]{../../code/experiments/experiment_2/pde5_L2_norm_boxplot.pdf}
\caption{\gls{pde} 5 solution quality with outlier.}
\label{fig:paJADE_pde5_l2norm_boxplot}
\end{subfigure}%
%
\begin{subfigure}[b]{0.39\linewidth}
\centering
\includegraphics[width=1\textwidth]{../../code/experiments/experiment_2/pde5_L2_norm_boxplot_wo_outlier.pdf}
\caption{\gls{pde} 5 solution quality without outlier.}
\label{fig:paJADE_pde5_l2norm_boxplot_cleared}
\end{subfigure}%
%
\unterschrift{Boxplot of solution quality on \gls{pde} 5 at $10^6$ \gls{nfe} with and without outliers.}{}{}%
\label{fig:paJADE_pde5_l2norm_boxplot_comparison}
\end{figure}

In general, it can be said that the adaptive scheme exhibits a greater spread in the quality of the solution.

\end{document}
{ "alphanum_fraction": 0.7502685669, "avg_line_length": 94.3445945946, "ext": "tex", "hexsha": "92fc54df1cae87d92ed36f111cac6c9d85bd73ea", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nicolai-schwartze/Masterthesis", "max_forks_repo_path": "master_thesis_paper/tex/Experiment2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nicolai-schwartze/Masterthesis", "max_issues_repo_path": "master_thesis_paper/tex/Experiment2.tex", "max_line_length": 1580, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nicolai-schwartze/Masterthesis", "max_stars_repo_path": "master_thesis_paper/tex/Experiment2.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-13T10:02:02.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-13T10:02:02.000Z", "num_tokens": 4359, "size": 13963 }
% -*- root: cuthesis_masters.tex -*-

In this chapter we present the key contributions of selected publications pertaining to technical debt, which puts the state of the art in focus and contextualizes the aims of this thesis. The studies we present first are concerned primarily with laying out the technical debt metaphor and establishing which cases such an analogy accurately describes, particularly in dealing with those less versed in software development jargon. These studies elaborate on the criteria that characterize sub-varieties of technical debt and how the metaphor is currently being applied. Another collection of studies, presented second, discusses the issue of identifying technical debt in the source code.\\

For the most part, this section incorporates prior work that centers on technical debt generally; information specific to the studies under discussion will accompany their respective chapters. In section~\ref{background}, we provide background information on technical debt; section~\ref{related_work} provides a cursory summary of related work divided into six subsections: Leveraging Source Code and Static Analysis Tools, Source Code Comments, Leveraging Source Code Comments (Self-Admitted Technical Debt), The Relationship Between Technical Debt and Software Quality, Software Quality, and Identifying and Detecting Code Smells.

\iffalse
\section{The Popularization of the Technical Debt Metaphor}
\fi

\section{Background}
\label{background}

In the early days of technical debt, blogs curated by industry professionals circulated the most up-to-date information, but this medium largely left those outside the industry in the dark. In the time since, however, a greater emphasis on collaboration and information sharing has spurred extensive research, undertaken by both the industrial and academic fronts, on what exactly is subsumed under the technical debt metaphor, which includes more and more as its usage gains traction.\\

Ward Cunningham \cite{cunningham1993wycash} originated the technical debt metaphor over twenty years ago as a means of negotiating a common language for the software developers and non-technical staff assigned to the same project. His original conception likened the additional effort incurred to maintain a project in the long term to the interest accrued on debt, such as a loan. Temporary fixes initially accelerate development and thus confer the short-term advantage of meeting deadlines otherwise unreasonable, yet if sufficient debt accumulates, the project grinds to a halt under the burden of incurred interest. It is the metaphor's financial familiarity that makes it effective in explaining how temporarily functional portions of code eventually become unsustainable.\\

Steve McConnell \cite{mcconnell} popularized the metaphor in his taxonomy, as did Martin Fowler \cite{fowler} in devising the four quadrants outlined in Figure~\ref{fig:technical_debt_quadrant}. Due to the effectiveness of these two methods of explaining technical debt to the software engineering community, we devote the two subsections that follow to examining each in turn.

\subsection{Intentionally vs. Unintentionally Incurred Technical Debt}

Steve McConnell recognizes ``intentionally incurred" (Type I) and ``unintentionally incurred" (Type II) as the two principal classifications of technical debt \cite{mcconnell}. The latter comprises error-prone design techniques and poorly written code by an inexperienced programmer, among others.
Unintentionally incurred technical debt results from low-quality work and is sometimes assumed without the recipient's knowledge, as in the case of company acquisitions and mergers.\\ Type I debt, in contrast, is incurred purposefully and in exchange for an immediate payoff. Software development companies, like all companies, make business decisions, strategically opting to accrue debt from time to time so that a deadline can be met. Justifications for incurring technical debt, such as ``If we don't get this release done on time, there won't be a next release," are credible enough that some companies, for instance, use glue code to synchronize multiple databases before proper reconciliation can be conducted, or postpone revisions that would ensure consistency in coding standards \cite{mcconnell}.\\ McConnell further partitions Type I debt into short- and long-term varieties. In keeping with the technical debt metaphor, short-term debt is assumed reactively and ideally paid off quickly and frequently, whereas organizations take on long-term debt proactively and, depending on the risk, sometimes count on expected income generated by an investment to pay it back. %\sultan{Do we want an outline of McConnell's taxonomy here?} \subsection{The Technical Debt Quadrant} Advocating an alternative interpretation of the metaphor, Fowler \cite{fowler} conceptualizes a typology of technical debt in which each of his four quadrants is designated either ``reckless" or ``prudent" and either ``deliberate" or ``inadvertent," allowing for four possibilities total. Prudent deliberate debt is assumed when a market supplier is fully aware of what it is taking on and has conducted an in-depth cost-benefit analysis to determine whether the hypothetical additional revenue an earlier release generates exceeds the expense of repaying the debt later. The polar opposite, so-called ``reckless inadvertent debt," is among the consequences of ``not knowing any better," or being unacquainted with sound design practices \cite{fowler}.\\ As Fowler's quadrant schema demonstrates, reckless debt need not always coincide with inadvertent debt, nor prudent debt with deliberate debt. Companies cognizant of sound design practices, or even ones that ordinarily adhere to them, might opt for the ``quick fix" rather than clean code under pressure. Prudent inadvertent debt arises when all parties are satisfied with the software delivered, which functions smoothly at the time and gives no indication of future issues, but it dawns on a developer afterwards that there was a more optimal solution. Of course, this is to be expected since programming is a learning process, albeit one that does not forgive debt incurred along the way \cite{fowler}.\\ Figure~\ref{fig:technical_debt_quadrant} displays Fowler's technical debt quadrants. 
Each of these contains a quote that sums up a prototypical scenario in which developers would resort to its particular combination of prudent/reckless and deliberate/inadvertent debt.\\ \begin{figure}[th] \centering \includegraphics[width=90mm]{figures/chapter2/technicalDebtQuadrant} \caption{Technical Debt Quadrant} \label{fig:technical_debt_quadrant} \end{figure} \subsection{Additional Insights on the Technical Debt Metaphor} The technical debt metaphor has found favor with software developers who need to convey to project stakeholders uninitiated in programming terminology similar debts and patchwork repairs that ``kick the can down the road" and put off the effort of isolating a solution viable in the long term. Concepts falling under this umbrella include test, requirement, documentation and generalized software debt \cite{sterling2010managing}. Broadening the metaphor to cover too many varieties of debt, however, might ultimately lessen its effectiveness, as Kruchten \textit{et al.} point out \cite{Kruchten_td_IEEE}. Unimplemented requirements, functions or features do not qualify as requirement debts, just as putting off developing them does not qualify as a planning debt. Heavy reliance on tools alone to detect technical debt is one pitfall that the study highlights, in many cases leading to non-negligible underestimation of the actual technical debt load. This occurs since the majority of technical debt accumulates because of structural choices and technological gaps rather than code quality. Further corroborating the overextension of the metaphor, Spinola \textit{et al.} \cite{spinola2013investigating} compiled statements on technical debt that software developers made both online and in published work and selected 14 of them to use as items in two surveys measuring the level of agreement of 37 participants with software development backgrounds. On the whole, most participants strongly agreed that poorly managed technical debt drives up maintenance costs until they outpace consumer value and disagreed that all technical debt is accrued with a developer's full knowledge. In the same study, the authors speculate that the technical debt metaphor's comprehensibility is what fuels its generalization to phenomena outside the realm of technical debt in the truest sense. This in turn blurs the boundaries between technical debt and other costs or coding flaws and leads to persistent conflation among non-technical project contributors and, all too often, industry specialists, who adopt the metaphor as a rote catchall \cite{spinola2013investigating}. Alves \textit{et al.}~\cite{alves2014towards} have introduced a specialized vocabulary intended to disambiguate the subtleties that an all-purpose term such as \textit{technical debt} overlooks, by sorting concepts extracted from a systematic literature mapping that combed 100 studies published between 2010 and 2014. Their undertaking identified 15 categories of technical debt but remained flexible enough to account for instantiations of technical debt that belonged in multiple categories: design debt, documentation debt, code debt, requirements debt, people debt, process debt, service debt, versioning debt, usability debt, build debt, test automation debt, infrastructure debt, defect debt, test debt and architecture debt. 
The work of Alves \textit{et al.} and others who have monitored trends in the application of the technical debt metaphor and devised schemata relaying its latest interpretations has allowed developers and their stakeholders to make sense of the dynamic interplay between holdover solutions and deferred expense. \iffalse \section{Technical Debt Indicators and Ramifications} \fi \section{Related Work} \label{related_work} \subsection{Leveraging Source Code and Static Analysis Tools} Lately, there has been a lot of incentive to engineer better strategies for detecting and managing technical debt. Technical debt often gets out of hand and reaches unsustainable levels because a developer fails to realize how quickly it accumulates. Static analysis tools can efficiently pinpoint violations of object-oriented design principles and source code anomalies outside the pre-specified ranges quantifying code quality. Such outliers constitute ``bad smells," which fall under the category of design debt.\\ In a study probing the effects of god classes (another manifestation of design debt) on project maintainability, Zazworka \textit{et al.}~\cite{zazworka2011investigating} examined two commercial applications released by a development company and concluded that god classes are more liable to be defective, and thus higher-maintenance, than non-god classes. For this reason, it is worthwhile for developers to monitor and, where appropriate, mitigate the effect of technical debt on product quality, at all stages in the process.\\ God classes and other bad smells---namely, data class and duplicate code---were extracted from open-source systems and scrutinized by Fontana \textit{et al.}~\cite{fontana2013code} in an effort to prioritize the handling of different types of design debt. Their approach ranks bad smells in descending order with respect to negative impact on software quality and encourages developers to rectify higher-priority design debts first.\\ Zazworka \textit{et al.}~\cite{zazworka2011investigating} elicited an enumeration of technical debt items stored in project artifacts from multiple developers and compared the results with what three static analysis tools identified as fitting the relevant criteria. As different teams reported different technical debt items, counting only the items that all teams recognized as technical debt results in an underestimation of the actual technical debt load and for this reason aggregation proves to be the better method. Similarly, static analysis tools will yield underestimations---some varieties of technical debt going undetected---unless supplemented with human mediation. %\incomplete{need to say this is the majority of the detection and you want to compare to SATD; also, you want to highlight that quality has to be studied} \subsection{Source Code Comments} A number of studies examined the usefulness/quality of comments and showed that comments are valuable for program understanding and software maintenance \cite{TakangGM96,tan07icomment,lawrie2006leveraged}. For example, Storey \emph{et al.}~\cite{Storey:2008} explored how task annotations in source code help developers manage personal and team tasks. Takang {\em et al.} \cite{TakangGM96} empirically investigated the role of comments and identifiers on source code understanding. Their main finding showed that commented programs are more understandable than non-commented programs. 
Khamis {\em et al.} \cite{Khamis:2010} assessed the quality of source code documentation based on an analysis of the quality of language and consistency between source code and its comments. Other work, by Tan {\em et al.}, has proposed several approaches to identify code-comment inconsistencies. The first, called @iComment, detects lock- and call-related inconsistencies \cite{tan07icomment}. The second approach, @aComment, detects synchronization inconsistencies related to interrupt context \cite{acomment}. A third approach, @tComment, automatically infers properties from Javadoc related to null values and exceptions; it performs test case generation by considering violations of the inferred properties \cite{tcomment}.\\ Other studies have examined the co-evolution of comment updates as well as the reasons behind them. Fluri {\em et al.} \cite{fluri2007code} studied the co-evolution of source code and associated comments and found that 97\% of the comment changes are consistently co-changed. Malik {\em et al.} \cite{malik2008understanding} performed a large empirical study to understand the rationale for updating comments along three dimensions: characteristics of the modified function, characteristics of the change, as well as the time and code ownership. Their findings showed that the most relevant attributes associated with comment updates are the percentage of changed call dependencies and control statements, the age of the modified function and the number of co-changed functions which depend on it. De Lucia {\em et al.} \cite{DeLucia2011} proposed an approach to help developers maintain source code identifiers and consistent comments with high-level artifacts. The main results of their study, based on controlled experiments, confirm that providing developers with similarity between source code and high-level software artifacts helps to enhance the quality of comments and identifiers. Most relevant to our research is the work recently undertaken by Potdar and Shihab~\cite{ICSM_PotdarS14}, which uses source code comments to detect \SATD. Using the identified technical debt, they studied how much SATD exists, the rationale for SATD and the likelihood of its removal after introduction. Another relevant contribution to our study is Maldonado and Shihab's \cite{MTD15p9}, as their work has also leveraged source code comments to detect and quantify different types of SATD. They classified SATD into five types: design debt, defect debt, documentation debt, requirement debt and test debt. Ultimately, they concluded that the most common type is design debt, accounting for anywhere between 42\% and 84\% of a total of 33,000 classified comments. Our study builds on prior work in~\cite{ICSM_PotdarS14,MTD15p9} since we use the comment patterns they produced to detect SATD. However, we differ from these studies in that we examine the relationship between SATD and software quality. \subsection{Leveraging Source Code Comments (Self-Admitted Technical Debt)} While strides have been made in locating sources of technical debt and preventing unsustainable accumulation, such as using static source code analysis tools, new improvements are constantly proposed, debated and adopted for use alongside older, ``tried and tested" methodologies. One such improvement, from Potdar and Shihab~\cite{ICSM_PotdarS14}, enlists source code comments in isolating technical debt, in which the developer \emph{confesses} the debt. 
At best, analysis tools can only \emph{suppose} debt on the basis of semi-arbitrary cutoffs and thresholds, and stop short of guaranteeing that an implementation is less than optimal, i.e., \emph{self-admitted technical debt}.\\ In their study capturing the state of the art in \SATD identification, Potdar and Shihab~\cite{ICSM_PotdarS14} extracted source code comments from five open-source projects and conducted manual inspections. The authors read and analyzed more than 100,000 comments and in the end isolated 62 different comment patterns that serve as reliable indicators of \SATD, most consisting of simple phrases, e.g. ``fixme," ``workaround" and ``this can be a mess." It was found that: (i) between 2.4\% and 31.0\% of the files analyzed contained these keywords, (ii) the bulk of the \SATD was introduced by more experienced developers and (iii) there is no correlation between time pressures or code complexity and the amount of \SATD.\\ Building on the work of Potdar and Shihab~\cite{ICSM_PotdarS14}, Bavota and Russo~\cite{bavota2016large} examined the growth and evolution of self-admitted technical debt across 159 projects and the effects this has had on software quality, and extracted upwards of 600,000 commits and two billion source code comments. They found that: (i) \SATD is diffused, averaging 51 occurrences per system; (ii) it accumulates over time as new occurrences pile up on top of ones which have not yet been corrected and (iii) the occurrences that are corrected have a mean lifespan of 1,000 commits in the system.\\ %\subsection{Research Leveraging Source Code Comments} %\todo{too similar to the last subsection's title?} \subsection{The Relationship Between Technical Debt and Software Quality} Other work has focused on the identification and examination of technical debt. It is important to note that the technical debt discussed here is \emph{not} SATD: rather, it is technical debt that is detected through source code analysis tools. For example, Zazworka {\em et al.} \cite{Zazworka:2013} attempted to identify technical debt automatically and then compared their automated identification with human elicitation. The results of their study outline potential benefits of developing tools and heuristics for the detection of technical debt. Also, Zazworka {\em et al.} \cite{zazworka2011investigating} investigated how design debt, in the form of god classes, affects software maintainability and correctness of software products. Their study involved two industrial applications and showed that god classes are changed more often than non-god classes and, moreover, that they contain more defects. Their findings suggest that technical debt may negatively influence software quality. Guo {\em et al.}~\cite{GuoSGCTSSS11} analyzed how and to what extent technical debt affects software projects by tracking a single delayed task in a software project throughout its lifecycle. Our work differs from foregoing research by Zazworka {\em et al.}~\cite{zazworka2011investigating,Zazworka:2013} since we focus on the relationship between SATD (and \textit{not} technical debt related to god files) and software quality. However, we believe that our study complements prior studies since it sheds light on the overall consequences of SATD and, in particular, those pertaining to software quality. 
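To make the flavour of this pattern-based detection concrete, the sketch below shows the kind of comment scan that such an approach implies. It is purely illustrative: the pattern list contains only three of the indicators quoted above (``fixme," ``workaround" and ``this can be a mess"), and the function and its signature are our own shorthand rather than part of any tooling released by the cited authors.

\begin{verbatim}
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: flag a comment as self-admitted technical debt
 * if it contains one of a few indicator phrases. The published approach
 * uses 62 patterns; only three are reproduced here. */
static const char *satd_patterns[] = {
    "fixme", "workaround", "this can be a mess"
};

bool is_satd_comment(const char *comment)
{
    char buf[1024];
    size_t i, n = strlen(comment);

    if (n >= sizeof(buf))
        n = sizeof(buf) - 1;
    for (i = 0; i < n; i++)            /* lower-case copy for matching */
        buf[i] = (char) tolower((unsigned char) comment[i]);
    buf[n] = '\0';

    for (i = 0; i < sizeof(satd_patterns) / sizeof(satd_patterns[0]); i++)
        if (strstr(buf, satd_patterns[i]) != NULL)
            return true;
    return false;
}

int main(void)
{
    /* Hypothetical comment, taken from no particular project. */
    printf("%d\n", is_satd_comment("// TODO: ugly workaround, clean up later"));
    return 0;
}
\end{verbatim}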
\subsection{Software Quality} A plethora of prior work has proposed techniques to improve software quality, the majority of this work having concerned itself with understanding and predicting software quality issues (e.g.~\cite{Zimmerman2008Springer}). Several studies have examined the metrics that best indicate software defects, including design and code~\cite{Jiang-promise-2008}, code churn~\cite{Nagappan-icse-2005} and process metrics~\cite{Moser-icse-2008,Rahman-icse-2013}. Other studies have opted to focus on change-level prediction of defects. Sliwerski \emph{et al.} suggested a technique known as SZZ to automatically locate fix-inducing changes by linking a version archive to a bug database \cite{Sliwerski-fse-2005}. Kim \emph{et al.} \cite{Kim-tse-2008} used identifiers in added and deleted source code and the words in change logs to identify changes as defect-prone or not. Similarly, Kamei \cite{Kamei-tse-2013} \textit{et al.} proposed a ``Just-In-Time Quality Assurance'' approach to identify risky software changes in real time. The findings of their study reveal that process metrics outperform product metrics in terms of identifying risky changes. Our study leverages the SZZ algorithm and some of the techniques presented in the aforementioned change-level work to study the defect-proneness of SATD-related commits. Moreover, our study complements existing work by taking up the hypothesized correlation between SATD and software defects. \subsection{Identifying and Detecting Code Smells} Fowler and Beck \cite{fowler1999refactoring} originated the term \textit{code smell} to designate various indicators of object-oriented design flaws that can undermine software maintenance. Code smells respond to the internal and external properties of the system elements they monitor. Though manual code smell detection warns developers of potential vulnerabilities, Marinescu \cite{Marinescu_ICETOOLS} observes that it is time-consuming, non-repeatable and non-scalable. Apart from this, the more familiar the software system is to a developer, the higher the risk of a subjective appraisal of its efficiencies and shortcomings, according to Mäntylä \cite{mantyla2003taxonomy, mantyla2004bad}, and one important corollary of this is that a developer's chances of overlooking design flaws increase. In order to surmount these drawbacks, Marinescu recommends enlisting code metrics to detect system volatilities, and in this spirit, several implementations of this alternative to manual detection have been devised \cite{lanza2007object, marinescu2004detection, Marinescu_PhD, Marinescu_IBM_JRD}. %\incomplete{you need much better organization here} %\incomplete{you need to highlight the uniqueness of the thesis}
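To give a concrete sense of what such a metrics-based detection strategy looks like, the sketch below encodes the frequently cited god class rule that combines ATFD (access to foreign data), WMC (weighted method count) and TCC (tight class cohesion). The shape of the rule follows Marinescu's detection strategies, but the struct, the function and the threshold constants are stated here only as assumed, illustrative values, not as the exact figures used by any particular tool.

\begin{verbatim}
#include <stdbool.h>
#include <stdio.h>

/* Per-class metrics, assumed to be produced by some measurement
 * front end; the struct and field names are illustrative only. */
typedef struct {
    int    atfd;   /* Access To Foreign Data              */
    int    wmc;    /* Weighted Method Count (complexity)  */
    double tcc;    /* Tight Class Cohesion, in [0, 1]     */
} class_metrics;

/* Illustrative thresholds in the spirit of the FEW / VERY HIGH /
 * ONE THIRD conventions; real tools calibrate these per language. */
#define FEW        2
#define VERY_HIGH  47
#define ONE_THIRD  (1.0 / 3.0)

/* A class is flagged when it accesses many foreign attributes,
 * is very complex, and its methods are only loosely related. */
static bool is_god_class(const class_metrics *m)
{
    return m->atfd > FEW && m->wmc >= VERY_HIGH && m->tcc < ONE_THIRD;
}

int main(void)
{
    class_metrics suspect = { .atfd = 12, .wmc = 60, .tcc = 0.15 };
    printf("god class: %s\n", is_god_class(&suspect) ? "yes" : "no");
    return 0;
}
\end{verbatim}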
{ "alphanum_fraction": 0.8188642586, "avg_line_length": 191.9152542373, "ext": "tex", "hexsha": "6f460f6f565dd3b2baa2de2584ae8a92c8a17f9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ffb851a5d4a0dc50ef8538964c9f24a535f3ea7b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "xsultan/masters_thesis", "max_forks_repo_path": "literature_review.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ffb851a5d4a0dc50ef8538964c9f24a535f3ea7b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "xsultan/masters_thesis", "max_issues_repo_path": "literature_review.tex", "max_line_length": 1316, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ffb851a5d4a0dc50ef8538964c9f24a535f3ea7b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xsultan/masters_thesis", "max_stars_repo_path": "literature_review.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-17T06:57:30.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-17T06:57:30.000Z", "num_tokens": 4712, "size": 22646 }
\chapter{Project Build System Makefiles} % All this appendix uses make syntax \lstset{language=make} \section{Makefile.test} \lstset{caption=Testing Targets Makefile (Makefile.test), label=lst:makefile-test} \lstinputlisting{src/code/build/Makefile.test}
{ "alphanum_fraction": 0.7953667954, "avg_line_length": 23.5454545455, "ext": "tex", "hexsha": "7fbb43ac5c15ef376e03ba57eaeb30c77b473ce7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6e8390cacc4356c19b74202d6ef7974d67a187d4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "cosmin1123/Recommender-System", "max_forks_repo_path": "diplomaProject/src/appendix/build.system.makefiles.tex", "max_issues_count": 60, "max_issues_repo_head_hexsha": "6e8390cacc4356c19b74202d6ef7974d67a187d4", "max_issues_repo_issues_event_max_datetime": "2021-06-21T22:31:17.000Z", "max_issues_repo_issues_event_min_datetime": "2019-11-05T14:12:34.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "cosmin1123/Recommender-System", "max_issues_repo_path": "diplomaProject/src/appendix/build.system.makefiles.tex", "max_line_length": 82, "max_stars_count": 2, "max_stars_repo_head_hexsha": "6e8390cacc4356c19b74202d6ef7974d67a187d4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "cosmin1123/Recommender-System", "max_stars_repo_path": "diplomaProject/src/appendix/build.system.makefiles.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-03T16:47:01.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-22T19:23:55.000Z", "num_tokens": 68, "size": 259 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% University/School Laboratory Report
% LaTeX Template
% Version 3.1 (25/3/14)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Linux and Unix Users Group at Virginia Tech Wiki
% (https://vtluug.org/wiki/Example_LaTeX_chem_lab_report)
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%----------------------------------------------------------------------------------------
%	PACKAGES AND DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------

\documentclass{article}

\usepackage[version=3]{mhchem} % Package for chemical equation typesetting
\usepackage{siunitx} % Provides the \SI{}{} and \si{} command for typesetting SI units
\usepackage{graphicx} % Required for the inclusion of images
\usepackage{natbib} % Required to change bibliography style to APA
\usepackage{amsmath} % Required for some math elements
\usepackage{caption}
\usepackage{subcaption}
\usepackage{listings}
\usepackage{color}

\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}

\lstdefinestyle{mystyle}{
    backgroundcolor=\color{backcolour},
    commentstyle=\color{codegreen},
    keywordstyle=\color{codepurple},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\footnotesize,
    breakatwhitespace=false,
    breaklines=true,
    captionpos=b,
    keepspaces=true,
    numbers=left,
    numbersep=5pt,
    showspaces=false,
    showstringspaces=false,
    showtabs=false,
    tabsize=2
}
\lstset{style=mystyle}

\setlength\parindent{0pt} % Removes all indentation from paragraphs

\renewcommand{\labelenumi}{\alph{enumi}.} % Make numbering in the enumerate environment by letter rather than number (e.g. section 6)

\newcommand\tab[1][0.5cm]{\hspace*{#1}}

%\usepackage{times} % Uncomment to use the Times New Roman font

%----------------------------------------------------------------------------------------
%	DOCUMENT INFORMATION
%----------------------------------------------------------------------------------------

\title{COMP 429/529: Project 1} % Title

\author{Berkay \textsc{Barlas}} % Author name

\date{\today} % Date for the report

\begin{document}

\maketitle % Insert the title, author and date

\begin{center}
\begin{tabular}{l r}
Date Performed: & March 17, 2019 \\ % Date the experiment was performed
Instructor: & Didem Unat % Instructor/supervisor
\end{tabular}
\end{center}

% If you wish to include an abstract, uncomment the lines below
% \begin{abstract}
% Abstract text
% \end{abstract}

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

\tab In this assignment I developed parallel implementations on top of the given serial versions of two different applications: an image blurring algorithm and a sudoku solver, both using OpenMP.
\newline While the first application is an exercise in data parallelism, the second is an exercise in task parallelism.
\newline
\newline
In this assignment I have completed
\begin{itemize}
\item Parallel Version of Image Blurring
\item Performance Study for Part I
\item Parallel Version of Sudoku Part A, Part B, Part C
\item Performance Study for Part II
\end{itemize}

\section{Part I: Image Blurring}
\tab In the first part of this assignment I implemented a parallel version of a simple image blurring algorithm with OpenMP, which takes an input image and outputs a blurred image. \\
\tab I used \#pragma omp for collapse() in the getGaussian(), loadImage(), saveImage(), applyFilter() and averageRGB() functions, since all of them have nested for loops that can be parallelized. The largest nested loop is in applyFilter(), and it can be parallelized as shown below. collapse(5) cannot be used because the last two for loops depend on the indices of the previous loops.

\begin{lstlisting}[language=C]
#pragma omp parallel for collapse(3)
for (d=0 ; d<3 ; d++) {
    for (i=0 ; i<newImageHeight ; i++) {
        for (j=0 ; j<newImageWidth ; j++) {
            for (h=0 ; h<filterHeight ; h++) {
                for (w=0 ; w<filterWidth ; w++) {
                    newImage[d][i][j] += filter[h][w]*image[d][h+i][w+j];
    }}}}}
\end{lstlisting}

\newpage
\subsection{Stability Test}
\begin{description}
\item[Serial version execution time: ] \hfill \\
Coffee Image: 13.64\\
Strawberry Image: 27.56\\
\item[Parallel version with single thread execution time: ] \hfill \\
Coffee Image: 30.88\\
Strawberry Image: 55.93\\
\item[Which thread number gives the best performance?] \hfill \\
A thread count of 32 gives the best performance for both blurring runs. \\
The reason the serial version performs better than the parallel version with 1 thread is parallelization overhead. The difference between them is caused by the cost of setting up and managing the parallel regions.
\end{description}

\textbf{Results}
\begin{figure}[!htb]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{./img/speedup_part_1_A.png}
\caption{Speedup results for the blurring of the coffee image.}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{./img/speedup_part_1_A_strawberry.png}
\caption{Speedup results for the blurring of the strawberry image.}
\end{subfigure}
\caption{Speedup figures for the image blurring application}
\end{figure}

\textbf{Explanation of Speedup Curve} \\
\tab Due to parallelization overhead, a speedup larger than 1 is only observed with 4 or more threads for both images. \\
\tab Even though a linear/perfect speedup is not expected, the results are actually worse than expected: even the parallel version with 32 threads on a 32-core cluster gives only around 2x speedup.\\
One of the reasons is the Intel compiler optimizations applied to the serial version, which is therefore already very fast.\\

\newpage
\subsection{Thread Binding Test}
\tab In the compact mapping, multiple threads are mapped as close as possible to each other, while in the scatter mapping, the threads are distributed as evenly as possible across all cores.
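\tab The mapping itself is not selected in the source code; presumably it is chosen through the OpenMP runtime's environment variables, for example \texttt{KMP\_AFFINITY=compact} or \texttt{KMP\_AFFINITY=scatter} with the Intel runtime, or the portable \texttt{OMP\_PROC\_BIND=close} / \texttt{OMP\_PROC\_BIND=spread} together with \texttt{OMP\_PLACES=cores}. The short program below is only an illustrative sketch and not part of the assignment code: it reports the binding that the runtime actually applied, which is useful for double-checking the experiment setup.

\begin{lstlisting}[language=C]
#include <stdio.h>
#include <omp.h>

/* Illustrative sketch: print the place each thread was bound to.
 * The binding policy itself comes from the environment variables
 * mentioned above (KMP_AFFINITY or OMP_PROC_BIND/OMP_PLACES). */
int main(void)
{
    #pragma omp parallel
    {
        #pragma omp critical
        printf("thread %d of %d runs on place %d (bind policy %d)\n",
               omp_get_thread_num(), omp_get_num_threads(),
               omp_get_place_num(), (int) omp_get_proc_bind());
    }
    return 0;
}
\end{lstlisting}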
\begin{description}
\item[Different mapping strategies; Compact and Scatter]
\end{description}

\textbf{Results}
\begin{figure}[!htb]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{./img/binding_part_1_B_coffee.png}
\caption{Thread binding results for the blurring of the coffee image.}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{./img/binding_part_1_B_strawberry.png}
\caption{Thread binding results for the blurring of the strawberry image.}
\end{subfigure}
\caption{Thread binding results for the image blurring application}
\end{figure}

\textbf{Which Mapping Gives Better Performance, Why?}\\
\\
\tab Compact gives better performance for both images because neighbouring threads access the same (temporal locality) or nearby (spatial locality) data; data brought into the cache by one thread can be used by the other, avoiding a costly memory access. If the tasks were longer, scatter could perform better.

%----------------------------------------------------------------------------------------
%	SECTION 2
%----------------------------------------------------------------------------------------
\newpage
\section{PART II: Parallel Sudoku Solver}
\tab In the second part of this assignment, I parallelized a serial sudoku solver with OpenMP, which takes a sudoku problem as input and finds all possible solutions using a brute-force search. \\

\textbf{Part A} \\
\tab In this part, I defined a parallel section around the call to the solveSudoku() method. I used \#pragma omp task in different places inside solveSudoku() and, after several runs and experiments, selected the placement that gives the best performance.

\begin{lstlisting}[language=C]
if(matrix[row][col] != EMPTY) {
    //#pragma omp task firstprivate(col, row)
    if (solveSudoku(row, col+1, matrix, box_sz, grid_sz)) {
        printMatrix(matrix, box_sz);
    }
} else {
    int num;
    for (num = 1; num <= box_sz; num++) {
        if (canBeFilled(matrix, row, col, num, box_sz, grid_sz)) {
            #pragma omp task firstprivate(num, col, row)
            {
                int tempMatrix[MAX_SIZE][MAX_SIZE];
                int i;
                int j;
                for(i=0; i<box_sz; i++) {
                    for( j=0; j<box_sz; j++){
                        tempMatrix[i][j] = matrix[i][j];
                    }
                }
                tempMatrix[row][col] = num;
                if (solveSudoku(row, col+1, tempMatrix, box_sz, grid_sz))
                    printMatrix(tempMatrix, box_sz);
            }
        }
    }
}
\end{lstlisting}

\newpage
\textbf{Part B} \\
\tab In this part, I changed the function signature to pass the task depth. Every time a new task is created, the depth value is increased by 1 and passed to the recursive method as a parameter. \\
\tab To find a good cutoff value, I ran the program with different cutoff parameters. Overall, a cutoff value of 30 gave the best result.

\begin{lstlisting}[language=C]
if (canBeFilled(matrix, row, col, num, box_sz, grid_sz)) {
    #pragma omp task firstprivate(num, col, row, depth)
    if (depth < MAX_DEPTH) {
        depth++;
        int tempMatrix[MAX_SIZE][MAX_SIZE];
        int i;
        int j;
        for(i=0; i<box_sz; i++) {
            for( j=0; j<box_sz; j++){
                tempMatrix[i][j] = matrix[i][j];
            }
        }
        tempMatrix[row][col] = num;
        if (solveSudoku(row, col+1, tempMatrix, box_sz, grid_sz, depth))
            printMatrix(tempMatrix, box_sz);
    }
}
\end{lstlisting}

\textbf{Part C} \\
\tab In this part, I created a shared variable found and passed it as a parameter to the solveSudoku() function, which stops the printing of solutions and the creation of new tasks once a solution is found.
\begin{lstlisting}[language=C]
#pragma omp parallel shared(found)
{
    #pragma omp single
    {
        solveSudoku(0, 0, matrix, box_sz, grid_sz, &found);
    }
}
\end{lstlisting}

I also used \#pragma omp critical when changing the value of found.

\begin{lstlisting}[language=C]
...
#pragma omp critical
*found = 1;
...
\end{lstlisting}

\newpage
\subsection{Scalability Test}
\subsubsection{Part A}
\begin{description}
\item[Serial version execution time: ] 48.10
\item[Parallel version with single thread execution time: ] 78.69
\item[Which thread number gives the best performance?]\hfill \\
A thread count of 32 gives the best performance.
\end{description}

\textbf{Results}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{./img/speedup_part_2_A.png}
\caption{\small Speedup results for the sudoku 4x4hard3 instance using the algorithm in Part A.}
\end{figure}

\textbf{Explanation of Speedup Curve}\\
\tab Due to parallelization overhead, a speedup larger than 1 is only observed with 2 or more threads. \\
\tab The task parallelization of the serial version results in the creation of many tasks, which causes a large overhead. Therefore, the speedup results are lower than expected: even with 32 threads the speedup is only around 11.

\newpage
\subsubsection{Part B}
\begin{description}
\item[Serial version execution time: ] 48.10
\item[Parallel version with single thread execution time: ] 74.50
\item[Which thread number gives the best performance?]\hfill \\
A thread count of 32 gives the best performance.
\end{description}

\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/speedup_part_2_B.png}
\caption{Speedup results for the sudoku solver in Part B.}
\end{figure}

\textbf{Explanation of Speedup Curve}\\
\tab Due to parallelization overhead, a speedup larger than 1 is only observed with 2 or more threads. \\
\tab In order to improve the performance of the previous parallel version, a cutoff parameter to limit the number of parallel tasks is needed.\\
\tab I defined a variable called depth and passed it as a parameter to the recursive method to prevent task creation after a certain depth in the call-path tree. After that depth, the algorithm switches to serial execution and does not generate more tasks. To determine that cutoff parameter, I executed the parallel program with several different values.\\
\tab The speedup curve is close to linear, which is expected.

\newpage
\subsubsection{Part C}
\begin{description}
\item[Serial version execution time: ] 0.33
\item[Parallel version with single thread execution time: ] 0.57
\item[Which thread number gives the best performance?]\hfill \\
A thread count of 32 gives the best performance.
\end{description}

\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/speedup_part_2_C.png}
\caption{Speedup results for the sudoku solver in Part C.}
\end{figure}

\textbf{Explanation of Speedup Curve} \\
\tab Stopping the execution after finding a solution is very easy in the serial version: it can be done by returning a special value from inside the for loops when a solution is found. In order to guarantee a single solution in the parallel version, a shared variable 'found' that stops further task creation and execution can be defined. In this application, the parallelized results are rather poor because a solution might be found by one of the tasks while previously created tasks continue executing, and this creates a large overhead compared to the serial version. \\
\tab Also, from 1 thread to 16 threads, increasing the number of threads decreases the speedup.
This is most likely caused by the increased total number of tasks that are created and still need to be executed, even though one of the tasks has already found a solution.

\newpage
\subsection{Thread Binding Test}
\subsubsection{Part A}
\begin{description}
\item[Different mapping strategies; Compact and Scatter]
\end{description}
\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/binding_part_2_A.png}
\caption{Thread binding results for the sudoku solver in Part A.}
\end{figure}

\textbf{Which Mapping Gives Better Performance, Why?}\\
\\
\tab Compact gives better performance here as well because neighbouring threads access the same (temporal locality) or nearby (spatial locality) data; data brought into the cache by one thread can be used by the other, avoiding a costly memory access. If the tasks were longer, scatter could perform better.

\newpage
\subsubsection{Part B}
\begin{description}
\item[Different mapping strategies; Compact and Scatter]
\end{description}
\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/binding_part_2_B.png}
\caption{Thread binding results for the sudoku solver in Part B.}
\end{figure}

\textbf{Which Mapping Gives Better Performance, Why?}\\
\\
\tab Compact gives better performance here as well because neighbouring threads access the same (temporal locality) or nearby (spatial locality) data; data brought into the cache by one thread can be used by the other, avoiding a costly memory access. If the tasks were longer, scatter could perform better.

\newpage
\subsubsection{Part C}
\begin{description}
\item[Different mapping strategies; Compact and Scatter]
\end{description}
\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/binding_part_2_C.png}
\caption{Thread binding results for the sudoku solver in Part C.}
\end{figure}

\textbf{Which Mapping Gives Better Performance, Why?}\\
\\
\tab Compact gives better performance here as well because neighbouring threads access the same (temporal locality) or nearby (spatial locality) data; data brought into the cache by one thread can be used by the other, avoiding a costly memory access. If the tasks were longer, scatter could perform better.

\newpage
\subsection{Tests on Sudoku Problems of Different Grids}
\begin{description}
\item[Part-B]
\end{description}
\begin{figure}[!htb]
\centering
\includegraphics[width=1\linewidth]{./img/grids_part_2_B.png}
\caption{Results for the 32-thread parallel sudoku solver in Part B with different sizes and difficulties.}
\end{figure}

\tab When the difficulty of the sudoku problem increases, the parallelized algorithm in Part B performs better relative to the serial version. The effectiveness of parallelization increases because, as the task difficulty increases, the ratio of the execution time spent in the parallelizable portion to that spent in the non-parallelizable portion increases.

%----------------------------------------------------------------------------------------
%	SECTION 6
%----------------------------------------------------------------------------------------

\section{Formulas Used}

\begin{enumerate}
\item \emph{Speedup}
\begin{equation*}
\mathrm{Speedup} = \frac{T_1}{T_p}
\end{equation*}
\item \emph{Amdahl's Law}
\begin{equation*}
T_p \geq W_{\mathrm{serial}} + \frac{W_{\mathrm{parallel}}}{P}
\end{equation*}
\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.6870347039, "avg_line_length": 37.2324093817, "ext": "tex", "hexsha": "0cbc145161b5d92a0f9680d3c6f524bb5654bb91", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "084c21de2dc3626ff4fc3d8912a92d748051af8c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "berkaybarlas/Parallel-Programming", "max_forks_repo_path": "A1/A1-report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "084c21de2dc3626ff4fc3d8912a92d748051af8c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "berkaybarlas/Parallel-Programming", "max_issues_repo_path": "A1/A1-report.tex", "max_line_length": 246, "max_stars_count": null, "max_stars_repo_head_hexsha": "084c21de2dc3626ff4fc3d8912a92d748051af8c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "berkaybarlas/Parallel-Programming", "max_stars_repo_path": "A1/A1-report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4343, "size": 17462 }
\section{Introduction}

The cloud is changing how users interact with data. This is true for legacy applications that are migrating on-site data to cloud storage, and for emerging applications that are augmenting cloud storage and dataset repositories with edge caches. In both cases, leveraging multiple cloud storage systems, edge caches, and dataset repositories allows applications to harness their already-deployed infrastructure, instantly gaining a global footprint. However, doing so introduces several storage design challenges relating to functional requirements, data consistency, access control, and fault tolerance. This paper describes \Syndicate, a wide-area storage system that addresses them in a coherent manner.

\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/overview}
\caption{\it Application design with and without Syndicate.}
\label{fig:overview}
\end{figure}

Each type of component system offers well-understood {\it functional} benefits. Cloud storage offers an ``always-on'' repository for hosting a scalable amount of data, while keeping it consistent and enforcing access controls. Dataset repositories host and curate a scalable amount of read-only data on behalf of many applications. CDNs, caching Web proxies, and HTTP object caches (``edge caches'') help under-provisioned origin servers scale up the volume of requests they can handle. In all cases, instances of these systems ({\it providers}) make their functional benefits available through instance-specific APIs.

In contrast, the providers' infrastructure offers two key \textit{utility} benefits transparently: data durability and access locality. Cloud storage and dataset providers improve durability by replicating data to geographically distributed datacenters, and edge caching providers improve locality by placing temporary copies of data at sites closer to readers than the origin servers (lowering latency and increasing bandwidth).

Unlike functional benefits, utility benefits can be aggregated. Leveraging multiple providers yields more utility than leveraging any single one, and improvements to one provider improve overall utility. However, doing so is non-trivial because each provider has a different API, with different functional semantics. The developer must design the application around this, thereby coupling its storage logic to provider implementations (Figure~\ref{fig:overview}, left).

Our key insight is that this coupling can be avoided by leveraging providers not for their functional benefits, but for the utility they offer: cloud storage and dataset providers offer durability, and edge caching providers offer locality. While this strategy ultimately makes the developers responsible for storage functionality, doing so lets them implement exactly the functionality they need while aggregating provider utility. \Syndicate\ helps them implement and deploy this functionality at scale, while minimizing provider coupling and addressing common storage concerns on their behalf (Figure~\ref{fig:overview}, right).

The key contribution of \Syndicate\ is a wide-area software-defined storage service that runs on top of unmodified providers. It provides an extensible interface for implementing domain-specific storage functionality in a provider-agnostic way, while addressing common cross-provider consistency, security, and fault-tolerance requirements automatically.
Using \Syndicate\ lets developers create a storage service for their applications that has the aggregate utility of multiple underlying providers, but without having to build and deploy a whole storage service from the ground up. That is, \Syndicate\ creates virtual cloud storage through provider composition.
{ "alphanum_fraction": 0.8238272921, "avg_line_length": 54.3768115942, "ext": "tex", "hexsha": "9e7dbf8daea4bb9782dfe86d6c07704de6189a36", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2016-03-04T05:56:24.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-08T02:26:03.000Z", "max_forks_repo_head_hexsha": "4837265be3e0aa18cdf4ee50316dbfc2d1f06e5b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jcnelson/syndicate", "max_forks_repo_path": "papers/paper-bigsystem2014/introduction.tex", "max_issues_count": 37, "max_issues_repo_head_hexsha": "4837265be3e0aa18cdf4ee50316dbfc2d1f06e5b", "max_issues_repo_issues_event_max_datetime": "2016-03-22T04:01:32.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-28T20:58:05.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jcnelson/syndicate", "max_issues_repo_path": "papers/paper-bigsystem2014/introduction.tex", "max_line_length": 120, "max_stars_count": 16, "max_stars_repo_head_hexsha": "4837265be3e0aa18cdf4ee50316dbfc2d1f06e5b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jcnelson/syndicate", "max_stars_repo_path": "papers/paper-bigsystem2014/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2016-03-17T06:38:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-02T15:39:04.000Z", "num_tokens": 732, "size": 3752 }
% This is part of the TFTB Reference Manual. % Copyright (C) 1996 CNRS (France) and Rice University (US). % See the file refguide.tex for copying conditions. \markright{anastep} \section*{\hspace*{-1.6cm} anastep} \vspace*{-.4cm} \hspace*{-1.6cm}\rule[0in]{16.5cm}{.02cm} \vspace*{.2cm} {\bf \large \sf Purpose}\\ \hspace*{1.5cm} \begin{minipage}[t]{13.5cm} Analytic projection of unit step signal. \end{minipage} \vspace*{.5cm} {\bf \large \sf Synopsis}\\ \hspace*{1.5cm} \begin{minipage}[t]{13.5cm} \begin{verbatim} y = anastep(N) y = anastep(N,ti) \end{verbatim} \end{minipage} \vspace*{.5cm} {\bf \large \sf Description}\\ \hspace*{1.5cm} \begin{minipage}[t]{13.5cm} {\ty anastep} generates the analytic projection of a unit step signal : \centerline{$y(t)=0$ for $t<t_i$, and $y(t)=1$ for $t\geq t_i$.}\\ \hspace*{-.5cm}\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c} Name & Description & Default value\\ \hline {\ty N} & number of points\\ {\ty ti} & starting position of the unit step & {\ty N/2}\\ \hline {\ty y} & output signal\\ \hline \end{tabular*} \end{minipage} \vspace*{1cm} {\bf \large \sf Examples} \begin{verbatim} signal=anastep(256,128); plot(real(signal)); signal=-2.5*anastep(512,301); plot(real(signal)); \end{verbatim} \vspace*{.5cm} {\bf \large \sf See Also}\\ \hspace*{1.5cm} \begin{minipage}[t]{13.5cm} \begin{verbatim} anasing, anafsk, anabpsk, anaqpsk, anaask. \end{verbatim} \end{minipage}
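\vspace*{.5cm}

{\bf \large \sf Remark}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
The ``analytic projection'' is presumably (as for the other {\ty ana*}
signals of the toolbox) the analytic signal associated with the real
unit step $x(t)$,
\centerline{$y(t) = x(t) + i\,{\cal H}[x](t),$}
where ${\cal H}$ denotes the Hilbert transform. Equivalently, $y$ is
obtained by suppressing the negative-frequency half of the spectrum of
$x$, which is why only {\ty real(signal)} is plotted in the examples
above.
\end{minipage}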
{ "alphanum_fraction": 0.6382978723, "avg_line_length": 20.3243243243, "ext": "tex", "hexsha": "274d76d7f903d56f0114ecff3b46398e79524efa", "lang": "TeX", "max_forks_count": 21, "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_path": "tftb/refguide/anastep.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_path": "tftb/refguide/anastep.tex", "max_line_length": 74, "max_stars_count": 50, "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_path": "tftb/refguide/anastep.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "num_tokens": 572, "size": 1504 }
\chapter{The Fundamental Theory For Spectral Methods}
\label{Chapter_2}

\vspace{0.3cm}

In this chapter, we will present the elements necessary to solve, using spectral methods, partial differential equations of the form
\begin{align}
\label{general_problem}
\left \lbrace
\begin{array}{ll}
&\frac{\partial u}{\partial t} = \mathcal{L} u, \hspace{3mm} x \in I, \hspace{3mm} t > 0,\\
\\
&u(x, 0) = g(x), \hspace{9mm} x \in I,
\end{array}
\right .
\end{align}
where $u$ is defined in some Hilbert space $\mathcal{H}$, the initial condition satisfies $g(x) \in \mathcal{H}$, and $\mathcal{L}$ is some spatial differential operator. This setting allows us to reformulate the problem as one whose solution $u$ is given by a linear combination of known functions.
\\
To do this, suppose that $\mathcal{H}$ is a separable Hilbert space with the inner product $\langle \cdot, \cdot \rangle$. Then we can represent the function $u$ in terms of a known orthonormal basis of $\mathcal{H}$, which we will denote by $\{\phi_k \}_{k \in I}$, as follows
\begin{align*}
\displaystyle u = \sum_{k \in I} \langle \phi_k, u \rangle \phi_k.
\end{align*}
There is a wide variety of families of basis functions, which define different spectral methods. In this chapter, we will consider the well-known Fourier basis given by
\begin{align}
\label{base_phi}
\phi_n (x) = e^{inx},
\end{align}
whose elements form an orthogonal set with respect to the standard $L^2$ inner product on the interval $(0, 2 \pi)$, that is,
\begin{align}
\label{ortho_phi}
\displaystyle \int_{0}^{2\pi} \phi_k (x) \overline{\phi_l (x)} dx = 2 \pi \delta_{kl} = \left \lbrace
\begin{array}{ll}
0 \hspace{3mm} &\text{if} \hspace{3mm} k \neq l, \\
2 \pi &\text{if} \hspace{3mm} k = l.
\end{array}
\right.
\end{align}
We will denote by $B = \text{span}\{e^{inx}: |n| \leq \infty \}$ the space spanned by the Fourier basis. We can then define the Fourier series $F[u]$ of $u(x) \in L^2 [0, 2\pi]$ as
\begin{equation}
\label{fourier_series}
F[u] \equiv \displaystyle \sum_{ |n| \leq \infty} \hat{u}_{n} e^{inx},
\end{equation}
where
\begin{align}
\label{coeff_fourier}
\hat{u}_n = \frac{1}{2 \pi} \displaystyle \int_{0}^{2 \pi} u(x) e^{-inx} dx, \hspace{3mm} n = 0, \pm 1, \pm 2, \dots
\end{align}
are the Fourier coefficients; the expansion (\ref{fourier_series}) is known as the classical continuous series of trigonometric polynomials.
\\
It is important to note that the integrals in (\ref{coeff_fourier}) exist if $u$ is Riemann-integrable, i.e., if $u$ is bounded and piecewise continuous in $(0, 2 \pi)$. More generally, the Fourier coefficients are defined for any function that is integrable in the Lebesgue sense. The relation (\ref{coeff_fourier}) also associates with $u$ a sequence of complex numbers called the Fourier transform of $u$. It is possible as well to introduce a Fourier cosine transform and a Fourier sine transform of $u$, respectively, through the formulas
\begin{align}
\label{coeff_a_n}
a_n = \frac{1}{2 \pi} \displaystyle \int_{0}^{2 \pi} u(x) \cos(nx) dx, \hspace{3mm} n = 0, \pm 1, \pm 2, \dots,
\end{align}
and
\begin{align}
\label{coeff_b_n}
b_n = \frac{1}{2 \pi} \displaystyle \int_{0}^{2 \pi} u(x) \sin(nx) dx, \hspace{3mm} n = 0, \pm 1, \pm 2, \dots.
\end{align}
The three Fourier transforms of $u$ are related by the formula $\hat{u}_n = a_n - ib_n$ for $n = 0, \pm 1, \pm 2, \dots$. Moreover, if $u$ is a real-valued function, then $a_n$ and $b_n$ are real numbers, and $\hat{u}_{-n} = \overline{\hat{u}_n}$.
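\\
As a brief illustration of these definitions, consider $u(x) = x$ on $(0, 2\pi)$. A direct computation (integration by parts) in (\ref{coeff_fourier}) gives
\begin{align*}
\hat{u}_0 = \frac{1}{2 \pi} \int_{0}^{2 \pi} x \, dx = \pi,
\hspace{6mm}
\hat{u}_n = \frac{1}{2 \pi} \int_{0}^{2 \pi} x e^{-inx} dx = \frac{i}{n}, \hspace{3mm} n \neq 0,
\end{align*}
so that the series (\ref{fourier_series}) becomes
\begin{align*}
F[u] = \pi + \sum_{n \neq 0} \frac{i}{n} e^{inx} = \pi - 2 \sum_{n = 1}^{\infty} \frac{\sin(nx)}{n}.
\end{align*}
Note that $\hat{u}_{-n} = -i/n = \overline{\hat{u}_n}$, in agreement with the conjugation property stated above.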
\\
Based on the above, we will present two tools used to build the methods of Chapter \ref{Chapter_3}; they are developed independently in the following two sections. In the first section, we will see that the continuous Fourier expansion allows us to define a projection operator onto a finite-dimensional space, with which we can approximate a function and its derivatives. In the second and last section, since the integrals above are in general too expensive to evaluate exactly, we will see that it is possible to approximate them with quadrature rules, and thus define an interpolation operator that gives us a discrete representation of a function and its derivatives. For these two operators, at the end of each section we will discuss the factors that determine their behavior when used to approximate smooth functions, showing how fast the approximations converge and in what sense.

\newpage

\input{preliminaries/Projection_Operator}

\newpage

\input{preliminaries/Interpolation_Operator}
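\\
As a quick numerical illustration of the quadrature-based point of view described above, the coefficients (\ref{coeff_fourier}) of a smooth periodic function can be approximated on an equispaced grid with the trapezoidal rule, which is exactly what the FFT computes. The short \texttt{NumPy} sketch below is only an illustration (it is not part of the code accompanying this work, and the grid size and test function are arbitrary choices):
\begin{verbatim}
import numpy as np

# Equispaced nodes on [0, 2*pi): the natural quadrature points
# for the discrete Fourier transform.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))             # a smooth 2*pi-periodic test function

# Trapezoidal-rule approximation of (1/2pi) * int u(x) exp(-i n x) dx,
# which on this periodic grid is the FFT up to a factor 1/N.
u_hat = np.fft.fft(u) / N
n = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers ..., -2, -1, 0, 1, ...

# Evaluate the truncated series sum_n u_hat[n] * exp(i n y) on a finer grid.
y = np.linspace(0.0, 2.0 * np.pi, 200)
u_N = (u_hat[None, :] * np.exp(1j * y[:, None] * n[None, :])).sum(axis=1).real

print(np.max(np.abs(u_N - np.exp(np.sin(y)))))  # error decays spectrally in N
\end{verbatim}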
{ "alphanum_fraction": 0.7059076262, "avg_line_length": 68.4558823529, "ext": "tex", "hexsha": "cfbe1ba85660559d954e504c610d0370654331d8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-04T13:29:56.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-04T13:29:56.000Z", "max_forks_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alanmatzumiya/spectral-methods", "max_forks_repo_path": "docs/preliminaries/Fundamental_Theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alanmatzumiya/spectral-methods", "max_issues_repo_path": "docs/preliminaries/Fundamental_Theory.tex", "max_line_length": 694, "max_stars_count": 5, "max_stars_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alanmatzumiya/Maestria", "max_stars_repo_path": "docs/preliminaries/Fundamental_Theory.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-12T11:18:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-29T10:44:02.000Z", "num_tokens": 1447, "size": 4655 }
\section*{\scshape Summary}\label{sec:summary} \begin{frame}{Summary} Text. \end{frame}
{ "alphanum_fraction": 0.7222222222, "avg_line_length": 15, "ext": "tex", "hexsha": "f2076bc6132c522dc0728720db914c885469d196", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0e484c357575eaab9a3e825bab10031ab87b57e5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "carlosmccosta/latex-template-beamer", "max_forks_repo_path": "tex/sections/summary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0e484c357575eaab9a3e825bab10031ab87b57e5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "carlosmccosta/latex-template-beamer", "max_issues_repo_path": "tex/sections/summary.tex", "max_line_length": 46, "max_stars_count": null, "max_stars_repo_head_hexsha": "0e484c357575eaab9a3e825bab10031ab87b57e5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "carlosmccosta/latex-template-beamer", "max_stars_repo_path": "tex/sections/summary.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 28, "size": 90 }
\documentclass[10pt]{beamer}

\usetheme[progressbar=frametitle]{metropolis}
\usepackage{appendixnumberbeamer}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage[scale=2]{ccicons}
\usepackage{tikz}
\usepackage{pgfplots}
\usepgfplotslibrary{dateplot}
\usepackage{wrapfig}
\usepackage{xspace}
\newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage[export]{adjustbox}

\title{EE5327 : Optimization}
% \date{\today}
\date{}
\author{Harsh Raj - MA17BTECH11003 \newline Aravind Reddy K V - MA17BTECH11010}
\institute{Mathematics and Computing, IIT-Hyderabad}
% \titlegraphic{\hfill\includegraphics[height=1.5cm]{logo.pdf}}

\begin{document}

\maketitle

%begin{frame}{Table of contents}
%  \setbeamertemplate{section in toc}[sections numbered]
%  \tableofcontents[hideallsubsections]
%\end{frame}

%\section{Recommender Systems Strategies}
%\begin{frame}[fragile]{Recommender Systems Strategies}
%  \begin{itemize}
%    \item Electronic retailers and content providers offer a huge selection of products to meet a variety of special needs and tastes.
%    \item Matching consumers with the most appropriate products is key to enhancing user satisfaction and loyalty.
%    \item Therefore, more retailers and e-commerce leaders like Amazon and Netflix have become interested in recommender systems, which analyze patterns of
%    user interest in products to provide personalized recommendations that suit a user's taste.
%  \end{itemize}
%\end{frame}

\section{Recommender Systems Strategies}

\begin{frame}[fragile]{Recommender Systems Strategies}
\begin{center}
\begin{tikzpicture}[sibling distance=10em,
  every node/.style = {shape=rectangle, rounded corners, draw, align=center, top color=white, bottom color=blue!20}]
  \node {Recommender System Strategies}
    child { node {Neighbourhood Methods} }
    child { node {Latent Factor Methods} };
\end{tikzpicture}
\end{center}

%\begin{itemize}
%\item The \textbf{content filtering} approach creates a profile for each user or product to characterize its nature.
%\item The profiles allow programs to associate users with matching products.
%\item Content-based strategies require gathering external information that might not be available or easy to collect.
%\end{itemize}
%\end{frame}

%\begin{frame}[fragile]{Collaborative Filtering}
% \textbf{Collaborative filtering} relies only on past user behavior—for example, previous transactions or product ratings—without requiring the creation of explicit profiles.
% \\
% \vspace{3mm}
% The two primary areas of collaborative filtering are the \textbf{neighborhood methods} and \textbf{latent factor models}.

\begin{center}
\graphicspath{ {./images/} }
\includegraphics [scale=0.2] {3}
\end{center}
\end{frame}

\begin{frame}[fragile]{Neighbourhood Method}
This method involves finding the $K$-nearest neighbours (the \textbf{K-NN} algorithm and its variants).\\
\vspace{3mm}
\begin{center}
\graphicspath{ {./images/} }
\includegraphics [scale=0.2] {1}
\end{center}
Why is this method slow and not always accurate?
\newline - If predicting among all possibilities, it requires $O(n)$ iterations per prediction.
\newline - It only predicts within a predetermined cluster if the time is reduced to $O(1)$.
\newline - It cannot incorporate item/user-based bias.
\end{frame}

\begin{frame}[fragile]{Latent Factor Models}
\textbf{Latent factor models} try to explain the ratings by characterizing both items and users by factors inferred from the rating patterns.
\begin{center}
\graphicspath{ {./images/} }
\includegraphics [scale=0.2] {2}
\end{center}
User's predicted relative rating for a movie = \newline the \textbf{dot product} of the movie's and the user's position vectors in the \textbf{Latent Space}.
\end{frame}

%\begin{frame}[fragile]{Matrix Factorization Methods}
%  \begin{itemize}
%  \item Some of the most successful realizations of latent factor models are based on \textbf{matrix factorization}.
%  \item In its basic form, matrix factorization characterizes both items and users by vectors of factors inferred from item rating patterns.
%  \end{itemize}
%  Recommender systems rely on different types of \textbf{input data}, which are often placed in a matrix with one dimension representing users and the other
%  dimension representing items of interest.\\
%  \vspace{3mm}
%  The input form could be either \textbf{explicit feedback} or \textbf{implicit feedback}
%\end{frame}

\begin{frame}[fragile]{Input Data}
\begin{enumerate}
\item \textbf{\textit{Explicit Feedback}}
\vspace{3mm}
\begin{itemize}
\item Explicit input by users regarding their interest in products.
\vspace{3mm}
\item Comprises a \textbf{Sparse Matrix}, since any single user is likely to have rated only a small percentage of the possible items.
\vspace{3mm}
\item \textbf{High confidence} in this data.
\end{itemize}
\vspace{3mm}
\item \textbf{\textit{Implicit Feedback}}
\vspace{3mm}
\begin{itemize}
\item Observing user behavior, including purchase history, browsing history, search patterns, etc.
\vspace{3mm}
\item Denotes the presence or absence of an event, so it is typically represented by a \textbf{Dense Matrix}.
\vspace{3mm}
\item \textbf{Low confidence} in this data.
\end{itemize}
\end{enumerate}
\end{frame}

\begin{frame}[fragile]{Matrix Factorization Model}
Matrix factorization models map both users and items to a joint \textbf{Latent Factor Space} of dimensionality $\boldsymbol{f}$.\\
\vspace{3mm}
Each item $\boldsymbol{i}$ is associated with a vector $\boldsymbol{q_i} \in \mathbb{R}^{f}$, quantifying the amount of each attribute present in item $\boldsymbol{i}$.
\newline Each user $\boldsymbol{u}$ is associated with a vector $\boldsymbol{p_u} \in \mathbb{R}^{f}$, quantifying the weight of each attribute in the user's final decision.\\
\vspace{3mm}
The resulting dot product, $\boldsymbol{q_i^{T} p_u}$, captures user $\boldsymbol{u}$'s overall interest in item $\boldsymbol{i}$.\\
\vspace{3mm}
This approximates user $\boldsymbol{u}$'s rating of item $\boldsymbol{i}$, denoted by $\boldsymbol{\hat{r}_{ui}}$:
\begin{equation}
\boldsymbol{\hat{r}_{ui}=q_{i}^{T}p_{u}}.
\end{equation}
\end{frame}

\begin{frame}{Example}
For 5 movies and 7 latent attributes, we get:
\[
\begin{bmatrix}
q_{11} & q_{12} & q_{13} & q_{14} & q_{15} & q_{16} & q_{17} \\
q_{21} & q_{22} & q_{23} & q_{24} & q_{25} & q_{26} & q_{27} \\
q_{31} & q_{32} & q_{33} & q_{34} & q_{35} & q_{36} & q_{37} \\
q_{41} & q_{42} & q_{43} & q_{44} & q_{45} & q_{46} & q_{47} \\
q_{51} & q_{52} & q_{53} & q_{54} & q_{55} & q_{56} & q_{57}
\end{bmatrix}
\begin{bmatrix}
p_{1} \\ p_{2} \\ p_{3} \\ p_{4} \\ p_{5} \\ p_{6} \\ p_{7}
\end{bmatrix}
=
\begin{bmatrix}
\hat{r}_{1} \\ \hat{r}_{2} \\ \hat{r}_{3} \\ \hat{r}_{4} \\ \hat{r}_{5}
\end{bmatrix}
\]
\end{frame}

\begin{frame}{Optimization Problem}
%Minimizes the regularized squared error on the set of known ratings
Introduce a regularization parameter to avoid overfitting.
To learn the factor vectors $\boldsymbol{p_u}$ and $\boldsymbol{q_i}$, the system minimizes the regularized squared error on the set of known ratings:
\begin{center}
$\min\limits_{q^{\star}, p^{\star}} \sum\limits_{(u,i)\in \kappa}(r_{ui}-q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2})$
\end{center}
The constant $\lambda$ controls the extent of regularization by keeping each attribute close to zero; $\lambda$ is determined by cross-validation.
\end{frame}

\begin{frame}{Learning Algorithm : SGD}
One option is to use the Stochastic Gradient Descent algorithm: for each known rating, with error $e_{ui} = r_{ui}- {q_i}^T p_u$, update
\newline $q_i \longleftarrow q_i + \gamma(e_{ui}p_u - \lambda q_i)$
\newline $p_u \longleftarrow p_u + \gamma(e_{ui}q_i - \lambda p_u)$
\newline \newline
\textbf{Problems:}
\newline \newline - Requires $O(n)$ operations for each iteration.
\newline \hspace*{2mm} Feasible only for a \textbf{Sparse Matrix}.
\newline \hspace*{3mm} $\implies$ Cannot use \textbf{Implicit Feedback} data.
\newline - All operations must be performed in serial order.
\end{frame}

\begin{frame}{Learning Algorithm : ALS}
\textbf{Alternating Least Squares:}
\newline \newline
As both $\boldsymbol{p_u}$ and $\boldsymbol{q_i}$ are unknown, the objective is not convex:
\begin{equation}
\sum\limits_{(u,i)\in \kappa}(r_{ui}-q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2})
\end{equation}
If we fix one of the unknowns, the optimization problem becomes quadratic and convex, and can be solved optimally.

The ALS technique rotates between fixing the $\boldsymbol{q_i}$'s and the $\boldsymbol{p_u}$'s.
\newline When all the $\boldsymbol{p_u}$'s are fixed, the system recomputes the $\boldsymbol{q_i}$'s by directly solving a least-squares problem, and vice versa.
\newline Each step decreases the objective function, until convergence.
\end{frame}

\begin{frame}{Learning Algorithm : ALS}
\textbf{Repeat Until Convergence:}
\begin{center}
(i) $\min\limits_{q^{\star}}\sum\limits_{(u,i)\in \kappa}(r_{ui}-q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2})$
\end{center}
\begin{center}
(ii) $\min\limits_{p^{\star}}\sum\limits_{(u,i)\in \kappa}(r_{ui}-q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2})$
\end{center}
\textbf{Advantages:}
\newline \newline - Feasible for a \textbf{Dense Matrix}.
\newline \hspace*{2mm} $\implies$ Can use \textbf{Implicit Feedback} data.
\newline - All the $\boldsymbol{p_u}$'s are computed independently of one another (and likewise for the $\boldsymbol{q_i}$'s).
\newline \hspace*{2mm} $\implies$ Parallelization can be done here.
\end{frame}

\begin{frame}{Adding Biases and Confidence}
Incorporate \textbf{Bias} in this model:
\newline \newline (i) $\boldsymbol{\mu}$ : shifts the prediction mean from 0 to $\boldsymbol{\mu}$,
\newline \hspace*{7mm} where $\boldsymbol{\mu}$ = Overall Average Rating.
\newline (ii) $\boldsymbol{b_i}$ : Item-based Bias,
\newline \hspace*{7mm} where $\boldsymbol{b_i}$ = Average Rating of Item $i$ $-$ Overall Average Rating.
\newline (iii) $\boldsymbol{b_u}$ : User-based Bias,
\newline \hspace*{7mm} where $\boldsymbol{b_u}$ = Average Rating by User $u$ $-$ Overall Average Rating.
\newline \newline Incorporate \textbf{Confidence} in this model:
\newline \newline (iv) $\boldsymbol{c_{ui}}$ : Confidence in observing $\boldsymbol{r_{ui}}$.
%(iv) $\boldsymbol{{\mid N(u) \mid}}^{-0.5}$ $(\sum\limits_{i \in N(u)}\boldsymbol{x_i} )$: Normalized Implicit Feedback
%\vspace{2mm}
%\newline \hspace*{7mm} where $\mid N(u)\mid$ = Items with Implicit Feedback from User $u$
%\newline \hspace*{7mm} and $x_i$ = Implicit Feedback Vector for $i \in N(u)$
\end{frame}
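\begin{frame}[fragile]{Learning Algorithm : ALS (sketch)}
A minimal NumPy sketch of the alternating update described above (illustrative only, not taken from the paper; \texttt{M} is a 0/1 mask of observed ratings, and the regularization is applied once per factor vector):
\begin{footnotesize}
\begin{verbatim}
import numpy as np

def als(R, M, f=7, lam=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, f))  # user factors p_u
    Q = rng.normal(scale=0.1, size=(n_items, f))  # item factors q_i
    I = lam * np.eye(f)
    for _ in range(iters):
        for u in range(n_users):      # fix Q, least squares for p_u
            obs = M[u] > 0
            P[u] = np.linalg.solve(Q[obs].T @ Q[obs] + I,
                                   Q[obs].T @ R[u, obs])
        for i in range(n_items):      # fix P, least squares for q_i
            obs = M[:, i] > 0
            Q[i] = np.linalg.solve(P[obs].T @ P[obs] + I,
                                   P[obs].T @ R[obs, i])
    return P, Q                       # predictions: P @ Q.T
\end{verbatim}
\end{footnotesize}
\end{frame}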
\begin{frame}{Final Recommender}
Final prediction:
\newline $ \boldsymbol{\hat{r}_{ui}}=c_{ui}(\mu + b_u + b_i + {p_u}^T q_i) $
\newline \newline Final form of the recommender:
\begin{center}
$\min\limits_{q^{\star}, p^{\star}, b^{\star}} \sum\limits_{(u,i)} c_{ui}(r_{ui}- \mu -b_u -b_i - q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2} +\Vert b_{u}\Vert^{2} +\Vert b_{i}\Vert^{2})$
\end{center}
subject to: $c_{ui} \geqslant 0$ $\forall (u,i)$
\newline \hspace*{1.53cm} $\lambda \geqslant 0$
\end{frame}

\begin{frame}{Proof of Convexity}
\textbf{Claim:} For a fixed $p_u$,
\newline $\sum\limits_{(u,i)}c_{ui}(r_{ui}- \mu -b_u -b_i - q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2} +\Vert b_{u}\Vert^{2} +\Vert b_{i}\Vert^{2})$
\newline is convex in $q_{i}$.
\newline \newline \textbf{Proof:}
\newline i) $(r_{ui}- \mu -b_u -b_i - q_{i}^{T} p_{u})$ is affine in $q_{i}$.
\newline ii) $(r_{ui}- \mu -b_u -b_i - q_{i}^{T} p_{u})^{2}$ is convex in $q_{i}$, as it is the square of an affine function.
\newline iii) $\lambda\Vert q_{i}\Vert^{2}$ is convex in $q_{i}$, as it is a nonnegative multiple of a squared norm.
\newline iv) A nonnegative weighted sum of convex functions is convex (recall $c_{ui} \geqslant 0$), so adding ii) and iii) we get that
\newline $\sum\limits_{(u,i)}c_{ui}(r_{ui}- \mu -b_u -b_i - q_{i}^{T} p_{u})^{2}+\lambda(\Vert q_{i}\Vert^{2}+\Vert p_{u}\Vert^{2} +\Vert b_{u}\Vert^{2} +\Vert b_{i}\Vert^{2})$ is convex.
\newline (Proved)
\end{frame}

\begin{frame}{Accuracy Improvement}
Original Netflix system:
\newline RMSE = 0.9514
\newline \newline Plain matrix factorization model:
\newline RMSE = 0.9025
\newline \newline Included user and item biases:
\newline RMSE = 0.9000
\newline \newline Included implicit feedback and confidence parameter:
\newline RMSE = 0.8925
\end{frame}

\end{document}
{ "alphanum_fraction": 0.7001088816, "avg_line_length": 41.6116504854, "ext": "tex", "hexsha": "247860fdd875ebb544aa92b5635dd80db5de2c8b", "lang": "TeX", "max_forks_count": 20, "max_forks_repo_forks_event_max_datetime": "2022-01-30T13:40:20.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-09T06:04:09.000Z", "max_forks_repo_head_hexsha": "752452296cbee241df0100a82b90e885c9ef6ec7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "harshraj11584/-PaperImplementation-_RecommenderSystems_Netflix", "max_forks_repo_path": "presentation.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "752452296cbee241df0100a82b90e885c9ef6ec7", "max_issues_repo_issues_event_max_datetime": "2020-05-23T03:46:59.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-23T03:46:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "harshraj11584/-PaperImplementation-_RecommenderSystems_Netflix", "max_issues_repo_path": "presentation.tex", "max_line_length": 275, "max_stars_count": 26, "max_stars_repo_head_hexsha": "752452296cbee241df0100a82b90e885c9ef6ec7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "harshraj11584/-PaperImplementation-_RecommenderSystems_Netflix", "max_stars_repo_path": "presentation.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-18T01:05:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-07T07:42:00.000Z", "num_tokens": 4087, "size": 12858 }
\documentclass[12pt]{book}
\usepackage{version}
\usepackage[T1]{fontenc}
\usepackage{longtable}
\usepackage{graphicx}
\usepackage{index}
\usepackage[colorlinks=true,hyperindex=true]{hyperref}
\makeindex

\begin{document}

\author{\href{http://www.freecol.org/team-and-credits.html}{The FreeCol Team}}
\title{FreeCol Documentation\\Developer Guide for Version \fcversion}
\maketitle{}

\tableofcontents
\newpage

\hypertarget{How to become a FreeCol developer}
{\chapter{How to become a FreeCol developer}}

\hypertarget{The goal of our project}{\section{The goal of our project}}

We are aiming towards making a clone of the old computer classic
``Sid Meier's Colonization''. New features should in general not be
added before we reach FreeCol 1.0.0, unless they are implemented as
optional features using the class ``GameOptions''. (Note that if you
add a game option, add a default value setting in
FreeColServer.fixGameOptions so that use of the option does not break
old games. The same applies to client-options, in
ClientOptions.fixClientOptions.) The big exception to this rule is
our client-server model that will allow players from all over the
world to compete in a game of FreeCol. Read more
\href{http://www.freecol.org/about.html}{here}.

\hypertarget{SourceForge project site}{\section{SourceForge project site}}

You should visit and get familiar with our project site at
\href{https://sourceforge.net/projects/freecol/}{SourceForge}. This
site contains trackers for bugs and feature requests, a task manager
and lots of other important services. We expect developers to use
this site regularly when making changes to the codebase.

\hypertarget{How to get tasks}{\section{How to get tasks}}

You may find available tasks in the bug and feature request trackers
--- just grab any task you are planning to do (in the immediate
future) by posting a comment saying that you intend to do it. Please
create a new bug report/feature request for tasks that are not listed
here (and before you start working on them). Please remember to post
a comment when you are done, or if you are unable to complete the
work.

Major changes to the code should be discussed on the developer's
mailing list before they are implemented. This is to ensure that your
work will not be in vain if somebody else knows a better way of doing
it.

% [Currently moot, we have no working roadmap]
% It is also a good idea to discuss changes that is not directly
% related to the next release on our
% \href{http://www.freecol.org/index.php?section=18}{roadmap}.

\hypertarget{How to use the trackers}{\section{How to use the trackers}}

If you are a full developer (i.e. with write privileges), this is how
a bug tracker item should be updated while you are working:

\begin{enumerate}
\item Assign yourself to the tracker item before you start working on it.
\item Please verify that no duplicate entry has been posted.
  \begin{itemize}
  \item If you can find a duplicate and the bug has \emph{not} been
    fixed, please set its status to ``Closed-Duplicate'', its
    milestone to ``Unspecified'', and post a comment with the ID of
    the other tracker item (add a comment to the other tracker item
    as well).
  \item If you can find a duplicate and the bug has been fixed, use
    the ``Closed-out-of-date'' status to close the tracker item.
  \end{itemize}
\item Set the item's status to ``Open-Needs-Info'' if you require
  input from the person originally submitting the item.
\item Set the item's status to ``Open-WWC1D'' if the fix requires
  determining what Col1 did (``What Would Col1 Do?'').
\item If you are unable to complete the item: assign it back to ``None'' and make a comment describing any problems relevant for another developer. The milestone of open bugs should be ``Current''. \item If you successfully commit a fix for a bug, set the milestone to ``Fixed-trunk'', the status to ``Closed'' if you are certain of the fix, or ``Pending-Fixed'' if there is some uncertainty and/or further comment is welcome. Please also write a comment telling that the work is done, and it is helpful to refer to any commit/s where relevant changes occurred. \end{enumerate} This is how a feature request item should be updated while you are working:\emph{(Needs updating since the sourceforge migration)} \begin{enumerate} \item Assign yourself to the tracker item before you start working on it. \item Please verify that no duplicate entry has been posted. \begin{itemize} \item If you can find a duplicate and the feature has NOT been implemented: Please use the group "Duplicated", set the status to "Closed" and post a comment with the ID of the other tracker item (add a comment to the other tracker item as well). \item If you can find a duplicate and the feature has been implemented: Use the group "Out of date" and close the tracker item. \end{itemize} \item Set the item's status to pending if you require input from the person originally submitting the item. \item Set the group to "Accepted" if you decide that this feature request should be added before the next release (please discuss on the mailing list if there are any reasons not to include this feature). \item If you are unable to complete the item (or you think someone else should be implementing it): Assign it to "None" and make a comment describing any issues relevant for another developer. \item After you have completed the work: Set the group to "Added" and the status to "Closed". Please also write a comment telling that the work is done. \end{enumerate} You can use any suitable canned response instead of writing a comment. \hypertarget{Mailing lists}{\section{Mailing lists}} Our primary means of communication is the developers mailing list: [email protected] You can (and should) subscribe to this list \href{http://lists.sourceforge.net/lists/listinfo/freecol-developers}{here}. \hypertarget{Git}{\section{Git}} Git is the tool we are using to manage the changes within our source code tree. This system makes it possible for all developers to have their own full copy of the project, and supports synchronization between the central version of the code (`the repository') and the local copies. Git also makes it possible to undo changes that were previously committed to the repository. \href{http://www.freecol.org/documentation/git.html}{This page} describes how you can start using Git and get a working copy of the code (without commit privileges). You can use \verb+git pull+ for updating an existing working copy. Changes can only be applied by those who have write-access, so you may need to either send the changes to the developer mailing list or use the patch tracking system. \hypertarget{Compiling the code}{\section{Compiling the code}} We use the \textit{Apache Ant} build system for compiling the game. You can get a copy of this program \href{http://ant.apache.org}{here}. After it has been installed, you simply type \verb+ant+ in the top directory of your "working copy" in order to compile the game. 
The file \verb+FreeCol.jar+ will then be generated, and you can start the game simply by writing: \verb+java -Xmx256M -jar FreeCol.jar+ \hypertarget{Using an IDE}{\section{Using an IDE}} Most FreeCol developers don't seem to use an IDE, so there is no ``official'' setup available. However, in the config folder, you can find contributed configuration files for both NetBeans and Eclipse. The following information has also been contributed by players that do use an IDE. \hypertarget{Using Eclipse}{\subsection{Using Eclipse (thanks to ``nobody'')}} \emph{This section is out of date since we migrated from svn to git. Leaving as-is for now in the hope the procedure is similar.} Since I'm quite a fan of the Eclipse IDE, I thought I would share my experience with building FreeCol in Eclipse on the Windows platform. I assume that you have installed JDK, Eclipse and SVN (both Eclipse plug-in and stand alone client). Make sure that your path environment variable contains both the JDK and SVN client directories. First, add a new repository location in Eclipse in the SVN Repositories view, for the \href{https://svn.freecol.org/svnroot/freecol/freecol/}{FreeCol repository}. Leave all the other settings unchanged and click Finish. Select either 'trunk' or the branch you want to build, right-click on it and select 'Find/Check out as...' In the Check Out dialog, make sure the option 'Check out as a project configured using the New Project Wizard' is selected, and click Finish. Select 'Java Project' in the 'Java' category, and click Next. Name your project (FreeCol is an obvious choice). Leave all the other options as is, and click Finish. Eclipse starts to copy all the files from the repository. Depending on the server and your connection, this may take from 1-10 minutes. Get a cup of coffee or a glass of cold milk while you wait. After downloading has finished, you should see your new project in the Project Explorer. Right-click on the project and select 'Configure Build Path...' from the 'Build Path' sub-menu. First off, let's make sure that Eclipse has detected the source file folder 'src'. Select the 'Source' tab, and make sure there is an entry with the name [project name]/src. If not, add it. Don't close the window, as we need to make other changes here. Next, we have to add the external jar files to the project, so Eclipse can properly verify the code. Select the 'Libraries' tab, and click the 'Add JARs...' button. Browse to the 'jars' subfolder in the FreeCol project, and select all the jar-files by holding down the CTRL-key. Click OK, and OK again. Eclipse should now be able to properly build the project without any errors. If not, fix it. However, we don't actually want Eclipse to do this, since we instead want to use the Ant build file from the repository. Right-click on the project, and select 'Properties...' all the way at the bottom of the menu. Select 'Builders' in the menu to the left. You should now see one entry in the list, named 'Java Builder'. This is the default, built-in java builder in Eclipse. Click 'New...' to create our Ant builder instead. Select 'Ant Builder' from the list and click OK. In the configuration dialog, click the 'Browse Workspace...' button the 'Buildfile' section. Click on the FreeCol project, and select 'build.xml' from the list on the right. Click OK. Click OK again, and the Ant builder is created. You can keep both builders active at the same time, but if you want to save processing power, you can uncheck the 'Java Builder'. 
Eclipse will warn you about doing this, but don't be alarmed; you can
always turn it on again. Click OK.

If you have activated Automatic building in Eclipse, Ant should start
building the project right away. Possible errors could be that Ant
cannot access either the Java compiler or a stand-alone SVN client.
In either of these cases, make sure you added the right directories
to your path environment variable.

If the build was successful: congratulations. Open the project folder
in the file system, and you will see `FreeCol.jar' in the root
folder. Since this is an executable jar file, you can double-click it
and launch the game right away. Enjoy.

\hypertarget{Using NetBeans}{\subsection{Using NetBeans (thanks to ``wintertime'')}}

We have a NetBeans project with updated settings, but it is not at
the standard location the IDE expects, as the IDE gets slower while a
large project like FreeCol is open. NetBeans (currently on version
8.0.2) has also, for a long time, contained a bug in that it won't
read or write the editor setting for the Java version if it was
correctly set to 1.8. It is therefore set to 1.5 in the project file,
which causes the editor to show a huge number of spurious
compatibility warnings, though compilation is not affected.

You have the option of just clicking on ``build.xml'' in the
``Files'' pane and starting the build commands easily through the
``Navigator'' pane inside the IDE that way.

If you opt for using the provided project, copy the
``FreeCol/config/nbproject'' folder to ``FreeCol/nbproject'' once.
Without this step it will not work! Open the project inside NetBeans
once; it will remember this as long as you do not close the project
manually.

Each time you open the IDE, right-click on the ``FreeCol'' project
name inside the ``Projects'' panel, choose ``Properties'', then from
the opened dialog choose ``Java Sources'' in the tree part, change
the ``Source Level'' setting to ``JDK 1.8'', and click the ``OK''
button. This is necessary every time until the NetBeans bug gets
fixed!

\hypertarget{Creating a new NetBeans project}{\subsection{Creating a new NetBeans project (thanks to ``xsainnz'')}}

\emph{This section is out of date and incomplete. Please, use the
existing NetBeans project! Leaving as-is for now in the hope it is
still educational and updated someday.}

\begin{itemize}
\item In NetBeans, select File > New Project
\item New Project Window
  \begin{itemize}
  \item Select Java Category, Java Free-Form Project
  \end{itemize}
\item Name and Location Panel
  \begin{itemize}
  \item In the Location box, browse to wherever you put the source (.../freecol/)
  \item It should auto-detect the build file location, project name and folder
  \end{itemize}
\item Build and Run Actions Panel
  \begin{itemize}
  \item Leave the settings as they are
  \end{itemize}
\item Source Package Folders Panel
  \begin{itemize}
  \item Add the `src' folder as Source packages and `tests' as Test packages
  \end{itemize}
\item Java Sources Panel
  \begin{itemize}
  \item Click `Add JAR/Folder'
  \item browse into the jars folder
  \item select all of the jars
  \item click Open.
  \end{itemize}
\item Click Finish
\end{itemize}

\hypertarget{Code documentation}{\section{Code documentation}}

Our primary code documentation is the Javadoc-generated
documentation. You can convert this documentation to HTML by typing
\verb+ant javadoc+. The directory ``javadoc'' will then be created
and you can start browsing the documentation by opening
``index.html'' from that directory.
There is also some additional documentation \href{http://www.freecol.org/documentation/}{here}. \hypertarget{Quality of code}{\section{Quality of code}} First of all, your code will be read and modified by several different developers. Therefore it is important to create a block of JavaDoc documentation with all methods/classes/packages you implement. You should also spend more time thinking about the overall structure than when you are working alone. Please read the \href{http://java.sun.com/docs/codeconv/}{Java Code Conventions}. This will only take about 15 minutes and will really help you write beautiful code. And one more thing. Please configure your editor in such a way that code indentations result in the insertion of 4 spaces, and avoid using tabs. \hypertarget{How to build a FreeCol release}{\chapter{How to make a FreeCol release}} You will obviously need to have installed \texttt{ant} to do the build, and \texttt{git} to make the commits, however this will be normal for a developer. To generate the online manual you will also need \texttt{htlatex}, and \texttt{pdflatex} for the print manual. You can avoid these requirements by setting the ant properties \texttt{online.manual.is.up.to.date} and \texttt{print.manual.is.up.to.date} respectively, however this is not recommended for a release. Uploads to sourceforge use \texttt{sftp}. \begin{itemize} \item Make sure that all relevant changes have been committed to the branch you are about to release. If you plan to upload the manual and/or JavaDoc, regenerate it (with \verb+ant manual+ and \verb+ant javadoc+), and fix any JavaDoc errors. \item Merge translations from trunk if the release is not made from trunk, with \verb+ant merge-translations+. Skip this step if the localization files are essentially the same as the ones in trunk (we try to ensure this). Make sure they are up to date, however. \item Lately we release from the git master and continue to work from there, so there is no urgent need to create a special release branch. \item Start a clean compile, run all tests and verify the specification(s). You can do that by calling \verb+ant prepare-commit+. \item Call \verb+ant dist+ in order to build all packages. You will be prompted for the version of this release. Alternatively, you can specify the version on the command line, by with something like: \verb+ant -Dfreecol.version=0.9.0-alpha2 dist+ instead (replace 0.9.0-alpha2 with the correct version, of course). It might be necessary to increase the memory available for ant, for example by setting the environment variable \verb|ANT_OPTS="-Xms256m -Xmx256m"|. Errors in language packs only apply to the installer and need not delay the overall release. \item Install one of the generated packages and verify that you can play normally for at least five turns (the java installer can be run from the command line with \texttt{java -cp freecol-\emph{version}-installer.jar com.izforge.izpack.installer.Installer}). Other good tests include loading a saved game and running the game in debug mode for a hundred turns or so. It might also be a good idea to compile the game from one of the source packages. \item \verb+ant dist+ will change the \verb+FREECOL_VERSION+ constant in FreeCol.java and the \verb+fcversion+ macro in \texttt{doc/version.sty} to the release version. Commit these changes. \item Upload the packages to \verb+sftp://frs.sourceforge.net/+. A script \verb|bin/release.sh| is provided that does this job, including rebuilding and uploading the manual and JavaDoc. 
\item Write a release announcement (see previous versions for comparison). Include information on savegame compatibility with previous FreeCol versions on all messages. Recent practice is to keep the announcement brief but refer to a detailed ``Release Notes'' page on the sourceforge freecol wiki. \item The \texttt{freecol.org} website is in transition to a new format, and the master copy lives in the git tree in the \texttt{www.freecol.org} top-level directory. To update the website, for now, you will need to add a new \texttt{www.freecol.org/news/freecol-}\emph{version}\texttt{-released.html} file and edit the following files: \begin{description} \item[\texttt{www.freecol.org/download.html}] Update the release version number. \item[\texttt{www.freecol.org/index.html}] Add a reference to the new release. \item[\texttt{www.freecol.org/sitemap.html}] Add a reference to the new release. \item[\texttt{www.freecol.org/status.html}] Update the release version number. \item[\texttt{www.freecol.org/news/index.html}] Add a section for the new release containing the release announcement, and move the download button from the previous release to the new one. \item[\texttt{www.freecol.org/news/releases.html}] Copy in the same section from the previous file, these two are nearly identical. \end{description} Use \texttt{sftp} to log in at \emph{username}\texttt{,[email protected]} to upload changes (the website lives in the \texttt{htdocs} subdirectory), or use the \verb|bin/website.sh| script which uploads everything that has changed in the git website directory. \item Post the release announcement to: \begin{itemize} \item Our mailing lists: developers, translators and users. Beware that you may have to be subscribed to some lists to be able to post. The addresses are: \texttt{freecol-\{developers,translators,users\}@lists.sourceforge.net} \item The FreeCol \href{https://sourceforge.net/p/freecol/discussion/141200/}{forum}. \item The project \href{https://sourceforge.net/p/freecol/news/}{news} page on sourceforge. \end{itemize} \item Consider revising the preamble to the ``Bugs'' page, particularly the ``Known Common Problems'' section. Go to ``Admin / Tools / Bugs / Options'' to make changes. \item Go to the bug tracker and create a new milestone (``Edit Milestones'') called ``Fixed\_\emph{release}'' (e.g. ``Fixed\_0.10.2''). Select the ``Fixed\_trunk'' group and do a mass edit (the pencil icon in the top right of the bug tracker list) and move all bugs to the new milestone. Repeat for the ``Pending\_Fixed'' group. Do the corresponding action in other ticket categories that have distinct release milestones. \item Start a new wiki page for the release notes for the next release. \end{itemize} \hypertarget{Missing features}{\chapter{Missing features}} We know that FreeCol does not yet emulate all features of the original game. However there are several features which are inevitably different to Colonization due to FreeCol being a multiplayer game --- interactions with other European players being the obvious example. Similarly we do not attempt to exactly emulate the graphical look and feel of Colonization, although no displayed information should be lost. Otherwise, any missing feature from Colonization is considered to be a bug. Please report such omissions on the \href{https://sourceforge.net/p/freecol/pending-features-for-freecol/}{pending features} tracker. Since we do not have access to the source code of the original game, we can only guess at the algorithms used. 
This is particularly true in complex, detailed areas such as
production and combat. In some cases, players have reverse-engineered
the algorithm. If you know of some calculation in FreeCol that
differs from that of the original game, please tell us about it.
There is an effort underway to completely document Colonization's
production amounts
\href{https://sourceforge.net/p/freecol/wiki/WWC1D\%20-\%20resource\%20output\%20v2/}{here}.

\hypertarget{Changing the Rules}{\chapter{Changing the Rules}}

We would like to make FreeCol configurable, so that the game engine
becomes capable of emulating many similar games. For this purpose, we
have made many of the game's features configurable. At some point in
the future, we will probably add a special rule set editor, but at
the moment, the most effective option is to edit the file
specification.xml directly. This file defines the abilities of units,
founding fathers, buildings, terrain types, goods and equipment, for
example. You can find this file in the \textit{data/freecol}
directory. Try to avoid overriding the base Colonization-compatible
rules in \textit{data/classic}. For a small self-contained rule
change, another option is to build a \emph{mod} --- see the Mods
section following.

This is still a work in progress, however, and the schema for the
rule set is certain to change again in the future. If you wish to
develop your own rule set, you will have to monitor FreeCol
development closely.

This having been said, we are particularly interested in hearing
about problems caused by your changes to the rule set. Some dialogs
might be unable to display more types of goods than are currently
defined, for example. Or other dialogs might not recognize your new
Minuteman unit as an armed unit. Please help us improve FreeCol by
telling us about such problems. If you have a working rule set that
adds a new flavour to the game, we will gladly distribute it along
with our default rule set. If you have ideas that cannot currently be
implemented, we will probably try to remove these limitations.

If you try to modify the rule set, you are strongly encouraged to
check whether the result is still valid. You can do this by
validating the result with the command \verb$ant validate$.

\hypertarget{Modifiers and Abilities}{\section{Modifiers and Abilities}}

Most of the objects defined by the rule set can be customized via
modifiers and abilities. Abilities are usually boolean values
(``true'' or ``false''). If the value is not explicitly stated, it
defaults to true. If an ability is not present, it defaults to false.
Modifiers define a bonus or penalty to be applied to a numeric value,
such as the number of goods produced by a unit. The modifier may be
an additive, a multiplicative, or a percentage modifier. Modifiers
default to ``identity'', which means they have no effect.

The code also checks that all abilities and modifiers it uses are
defined by the specification. Therefore, you must define all of them,
even if you do not use them. You can do this by setting their value
to the default value, e.g. ``false'' in the case of an ability, or
``0'' in the case of an additive modifier.
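As a rough illustration only (the element and attribute names below
are approximate, and \texttt{model.unit.someUnit} is a made-up
identifier; check the rule sets shipped in \textit{data/freecol} and
\textit{data/classic} for the authoritative syntax), an ability and a
modifier attached to an object look something like this:

\begin{verbatim}
<unit-type id="model.unit.someUnit">
  <!-- boolean ability; if it is absent, it defaults to "false" -->
  <ability id="model.ability.navalUnit" value="true"/>
  <!-- additive modifier, e.g. one extra movement point -->
  <modifier id="model.modifier.movementBonus" type="additive" value="1"/>
</unit-type>
\end{verbatim}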
\newcommand{\ability}[1]{\index{#1}\index{Ability!#1}\hypertarget{#1}{\vspace{1em}\noindent\textbf{#1}}} \newcommand{\modifier}[1]{\index{#1}\index{Modifier!#1}\hypertarget{#1}{\vspace{1em}\noindent\textbf{#1}}} \newcommand{\affectsPlayer}{\\\textit{Affects: Player\\Provided by: Nation, Nation Type, Founding Father}} \newcommand{\affectsUnit}{\\\textit{Affects: Unit\\Provided by: Nation, Nation Type, Founding Father, Unit Type, Equipment Type}} \newcommand{\affectsBuilding}{\\\textit{Affects: Building\\Provided by: Building Type}} \newcommand{\affectsColony}{\\\textit{Affects: Colony\\Provided by: Map}} \newcommand{\affectsColonyTwo}{\\\textit{Affects: Colony\\Provided by: Building Type, Nation, Nation Type, Founding Father}} \newcommand{\affectsTile}{\\\textit{Affects: Tile\\Provided by: Tile Type}} TODO: This section is out of date (version 0.11.2). \ability{model.ability.addTaxToBells} \affectsPlayer The player adds the current tax rate as a bonus to bells production. The bonus is modified every time the tax increases or decreases. \ability{model.ability.alwaysOfferedPeace} \affectsPlayer The player is always offered peace in negotiations with AI players. \ability{model.ability.ambushBonus} \affectsUnit The unit is granted an ambush bonus equal to the terrain's defence value. \ability{model.ability.ambushPenalty} \affectsUnit The unit suffers an ambush penalty equal to the terrain's defence value. \ability{model.ability.autoProduction} \affectsBuilding The building needs no units to produce goods, and will never produce more goods than can be stored in the colony. \ability{model.ability.automaticEquipment} \affectsUnit The unit automatically picks up equipment if attacked. \ability{model.ability.automaticPromotion} \affectsUnit A unit that can be promoted will always be promoted when successful in battle. \ability{model.ability.betterForeignAffairsReport} \affectsPlayer The player is provided with more information about foreign powers. \ability{model.ability.bombard} \affectsUnit The unit is able to bombard other units. \ability{model.ability.bombardShips} \affectsBuilding The building has the ability to bombard enemy ships on adjacent tiles. \ability{model.ability.bornInColony} \affectsUnit The unit can be born in a colony, provided that enough food is available. \ability{model.ability.bornInIndianSettlement} \affectsUnit The unit can be born in an Indian settlement, provided that enough food is available. \ability{model.ability.build} \affectsBuilding The building can build units or equipment. \ability{model.ability.buildCustomHouse} \affectsPlayer The player can build custom houses. \ability{model.ability.buildFactory} \affectsPlayer The player can build factories. \ability{model.ability.canBeCaptured} \affectsUnit The unit can be captured. Land units that can not be captured are destroyed, naval units that can not be captured are either sunk or damaged. \ability{model.ability.canBeEquipped} \affectsUnit The unit can be equipped. \ability{model.ability.canNotRecruitUnit} \affectsPlayer The player can not recruit specified units. \ability{model.ability.captureEquipment} \affectsUnit The unit can capture equipment from another unit. \ability{model.ability.captureGoods} \affectsUnit The unit can capture goods from another unit. \ability{model.ability.captureUnits} \affectsUnit The unit can capture enemy units. \ability{model.ability.carryGoods} \affectsUnit The unit can transport goods. 
\ability{model.ability.carryTreasure} \affectsUnit The unit can transport treasures, not treasure trains. \ability{model.ability.carryUnits} \affectsUnit The unit can transport other units. \ability{model.ability.convert} \affectsUnit The unit is a native convert. \ability{model.ability.dressMissionary} \affectsBuilding The building can commission missionaries. \ability{model.ability.electFoundingFather} \affectsPlayer The player can elect Founding Fathers. \ability{model.ability.expertMissionary} \affectsUnit The unit is an expert missionary, but not necessarily commissioned. \ability{model.ability.expertPioneer} \affectsUnit The unit is an expert pioneer, but not necessarily equipped with tools. \ability{model.ability.expertScout} \affectsUnit The unit is an expert scout, but not necessarily equipped with horses. \ability{model.ability.expertSoldier} \affectsUnit The unit is an expert soldier, but not necessarily equipped with muskets. \ability{model.ability.expertsUseConnections} \affectsPlayer Experts working in factories can produce a small amount of goods even if the raw materials are not available in the colony. \ability{model.ability.export} \affectsBuilding The building can export goods to Europe directly. \ability{model.ability.foundColony} \affectsUnit The unit can found new colonies. \ability{model.ability.foundInLostCity} \affectsUnit The unit may be generated as the result of exploring a Lost City Rumour. \ability{model.ability.hasPort} \affectsColony The colony has access to at least one water tile. This ability can not be set by the specification, but it can be used as a required ability. \ability{model.ability.ignoreEuropeanWars} \affectsPlayer The player will not be affected by the Monarch's declarations of war. \ability{model.ability.improveTerrain} \affectsUnit The unit is able to improve terrain. \ability{model.ability.independenceDeclared} \affectsPlayer The player has declared independence. \ability{model.ability.mercenaryUnit} \affectsUnit The unit may be offered as a mercenary unit. \ability{model.ability.missionary} \affectsUnit The unit is able to establish missions and incite unrest in native settlements. \ability{model.ability.moveToEurope} \affectsTile Units on the tile are able to move to Europe. \ability{model.ability.multipleAttacks} \affectsUnit The unit can attack more than once. \ability{model.ability.native} \affectsUnit The unit is a native unit. \ability{model.ability.navalUnit} \affectsUnit The unit is a naval unit. \ability{model.ability.pillageUnprotectedColony} \affectsUnit The unit is able to steal goods from and destroy buildings in an unprotected colony. \ability{model.ability.piracy} \affectsUnit The unit is a privateer. \ability{model.ability.produceInWater} \affectsBuilding The building enables units to produce on water tiles. \ability{model.ability.refUnit} \affectsUnit The unit can be part of the Royal Expeditionary Force. \ability{model.ability.repairUnits} \affectsBuilding The building can repair units. \ability{model.ability.royalExpeditionaryForce} \affectsPlayer The player is a Royal Expeditionary Force. \ability{model.ability.rumoursAlwaysPositive} \affectsPlayer The player will always get positive results from exploring Lost City Rumours. \ability{model.ability.scoutForeignColony} \affectsUnit The unit can scout out foreign colonies. \ability{model.ability.scoutIndianSettlement} \affectsUnit The unit can scout out native settlements. 
\ability{model.ability.selectRecruit} \affectsPlayer The player can select a unit to recruit in Europe. This also applies to units generated as a result of finding a Fountain of Youth. \ability{model.ability.teach} \affectsBuilding The building enables experts to teach other units. However, the building may place limits on the experience level of teachers. \ability{model.ability.tradeWithForeignColonies} \affectsPlayer The player may trade goods in foreign colonies. \ability{model.ability.undead} \affectsUnit The unit is an undead unit (used only in revenge mode). \modifier{model.modifier.bombardBonus} \affectsPlayer The player's units are granted a bombard bonus when attacking. \modifier{model.modifier.buildingPriceBonus} \affectsPlayer The player can build or buy buildings at a reduced price. \modifier{model.modifier.defence} \affectsUnit The unit has a defence bonus or penalty. \modifier{model.modifier.immigration} \textit{Affects: Player\\Provided by: Goods Type} Goods of this type contribute to the player's immigration points. \modifier{model.modifier.landPaymentModifier} \affectsPlayer The player can buy Indian land at a reduced price. \modifier{model.modifier.liberty} \textit{Affects: Player\\Provided by: Goods Type} Goods of this type contribute to the colony's and the owning player's liberty points. \modifier{model.modifier.lineOfSightBonus} \affectsUnit The unit has an increased line of sight. \modifier{model.modifier.minimumColonySize} \affectsColonyTwo The population of the colony can not be voluntarily reduced below this number. The modifier does not in any way affect a population reduction due to starvation or other events. \modifier{model.modifier.movementBonus} \affectsUnit The unit has an increased movement range. \modifier{model.modifier.nativeAlarmModifier} \affectsPlayer The player generates less native alarm. \modifier{model.modifier.nativeConvertBonus} \affectsPlayer The player has a greater chance of converting natives. \modifier{model.modifier.nativeTreasureModifier} \affectsPlayer The player generates greater treasures when destroying native settlements. \modifier{model.modifier.offence} \affectsUnit The unit has an offence bonus or penalty. \modifier{model.modifier.religiousUnrestBonus} \affectsPlayer The player generates greater religious unrest in Europe. \modifier{model.modifier.sailHighSeas} \affectsUnit The unit's travel time between Europe and the New World is reduced. \modifier{model.modifier.tradeBonus} \affectsPlayer Prices in the player's market remain stable for longer. \modifier{model.modifier.treasureTransportFee} \affectsPlayer The player pays a smaller fee for transporting treasures to Europe. \modifier{model.modifier.warehouseStorage} \affectsBuilding The building increases the capacity of the warehouse. \hypertarget{Mods}{\chapter{Mods}} FreeCol packages a number of simple modifications to the FreeCol rules and resources, generally known as \emph{mods}. The standard mods live in \texttt{.../data/mods}. Users can add their own mods to a \texttt{mods} directory under their main data directory. A mod consists of a directory that includes at minimum a file \texttt{mod.xml}, a file \texttt{FreeColMessages.properties}, and other files that implement the required changes. \texttt{mod.xml} simply contains: \texttt{<mod id="}\emph{identifier}\texttt{" />} where the identifier is some unique name (distinct from other existing mods). 
The \texttt{FreeColMessages.properties} file should contain at minimum entries for \texttt{mod.}\emph{identifier}\texttt{.name} and \texttt{mod.}\emph{identifier}\texttt{.shortDescription}, so that the mod selection dialog can display the mod correctly. Other messages needed by the mod belong in \texttt{ModMessages.properties}. The difference between these is that \texttt{FreeColMessages.properties} is always loaded so that the mod name and description can appear in the mod selection dialog, whereas \texttt{ModMessages.properties} is only loaded if the mod is selected.

Many mods define extra resources. If so, a \texttt{resources.properties} file will be needed, as well as the files for the resources involved. For example, the ``example'' mod defines a ``milkmaid'' unit, which needs an image, so the example mod directory contains an image file for the milkmaid and a reference to this file in \texttt{resources.properties}.

Most mods change the specification. This is done in a \texttt{specification.xml} file. This file is in the same general format as the existing FreeCol rule sets, but does not attempt to provide a comprehensive rule set. The first non-comment line should be: \texttt{<freecol-specification id="}\emph{identifier}\texttt{">}. Note that mods typically do not specify which ruleset they extend.

When modifying a specification, expect additional elements you provide to be applied additively, but attributes of an existing element are cleared unless a special \texttt{preserve="true"} attribute is present. If you need to delete or redefine an element, respecify it in context with just its \texttt{id} and \texttt{delete="true"} attributes. So for example, in the freecol ruleset we need to remove the \texttt{model.modifier.minimumColonySize} modifier from the stockade building, which is done as follows:
\begin{verbatim}
<building-type id="model.building.stockade" preserve="true">
  <modifier id="model.modifier.minimumColonySize" delete="true" />
</building-type>
\end{verbatim}
Note that not all elements take a \texttt{delete} tag yet. You may need to read or modify the source to be certain a mod will work.

\hypertarget{Resources}{\chapter{Resources}}

Various links pointing to more or less reliable information about the original Colonization game:
\begin{itemize}
\item \href{http://strategywiki.org/wiki/Sid_Meier%27s_Colonization}
  {Strategy Wiki}
\item \href{http://www.colonization.biz/}{The Unofficial Microprose Colonization Home Page}
\item \href{http://dledgard0.tripod.com/FAQs/play_col_at_viceroy.htm}
  {Play Colonization at Viceroy level}
\item \href{http://www.colonizationfans.com/}{Colonization Fan Page}
\item \href{http://www.ibiblio.org/GameBytes/issue21/misc/colstrat.html}
  {Bill Cranston's Strategy guide}
\item \href{http://civilization.wikia.com/wiki/Colonization_tips}
  {Tomasz Wegrzanowski's Strategy Guide}, contains very valuable material on the number of bells required to elect a Founding Father, among other things
\end{itemize}

\printindex

\end{document}
{ "alphanum_fraction": 0.7798289883, "avg_line_length": 36.0442978322, "ext": "tex", "hexsha": "c59d1df724a209807a9ac5be807f3a0fd2bfa7cd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2ab596f78072bb8daa1689774d5b687668494aee", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bangra1/final", "max_forks_repo_path": "A_Team-finalproject-3c34732ca41925ce8300b7051cc05707ac4c330d/doc/developer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2ab596f78072bb8daa1689774d5b687668494aee", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bangra1/final", "max_issues_repo_path": "A_Team-finalproject-3c34732ca41925ce8300b7051cc05707ac4c330d/doc/developer.tex", "max_line_length": 115, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2ab596f78072bb8daa1689774d5b687668494aee", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bangra1/final", "max_stars_repo_path": "A_Team-finalproject-3c34732ca41925ce8300b7051cc05707ac4c330d/doc/developer.tex", "max_stars_repo_stars_event_max_datetime": "2017-05-16T19:29:52.000Z", "max_stars_repo_stars_event_min_datetime": "2017-05-16T19:29:52.000Z", "num_tokens": 9303, "size": 38243 }
\chapter{Congruence} Look at this picture of two geometric figures. \begin{tikzpicture} \draw [thick] (-1,-4) -- (0, -2); \draw [thick] (0,-2) -- (-1, 0) ; \draw [thick] (-1,0) -- (-3, -1); \draw [thick] (-3,-1) -- (-1, -4); \draw [thick] (-1,1) -- (1, 0); \draw [thick] (1,0) -- (3, 1) ; \draw [thick] (3,1) -- (2, 3); \draw [thick] (2,3) -- (-1, 1); \draw[help lines, step = 1cm] (-4, -4) grid (4, 4); \end{tikzpicture} They are the same shape, right? If you cut one out with scissors, it would lay perfectly on top of the other. In geometry, we say they are \emph{congruent}. What is the official definition of ``congruent''? Two geometric figures are congruent if you can transform one into the other using only rigid transformations. So, what are rigid transformations? A transformation is \emph{Rigid} if it doesn't change the distances between the points or the measure of the angles between the lines they form. These are all rigid transformations: \begin{itemize} \item Translations \item Rotations \item Reflection \end{itemize} If you once again imagine cutting one figure out with scissors and trying to match it with the second: \begin{itemize} \item Translations - sliding the cutout left and right and up and down \item Rotations - rotating the cutout clockwise and counterclockwise \item Reflection - flipping the piece of paper over \end{itemize} A transformation is rigid if and only if it is some combination of translations, rotations, and reflections. \section{Triangle Congruency} If the sides of two triangles have the same length, the triangles must be congruent: \begin{tikzpicture} \draw [thick] (-2,0) -- node[outer sep = 1pt, right]{4 cm} (-2, 4) ; \draw [thick] (-2,4) -- node[outer sep = 3pt, above]{2 cm} (-4, 3); \draw [thick] (-4,3) -- node[outer sep = 2pt, left]{3 cm} (-2, 0); \draw [thick] (-1,1) -- node[outer sep = 2pt, below]{4 cm} (3, 1) ; \draw [thick] (3,1) -- node[outer sep = 2pt, right]{2 cm} (2, 3); \draw [thick] (2,3) -- node[outer sep = 4pt, above]{3 cm} (-1, 1); \end{tikzpicture} To be precise, the Side-Side-Side Congruency Test says that two triangles are congruent if three sides in one triangle are the same length as the corresponding sides in the other. We usually refer to this as the SSS test. Note that two triangles with all three angles equal are not necessarily congruent. For example, here are two triangles with the same interior angles, but they are different sizes: \begin{tikzpicture} \coordinate [circle, fill, inner sep=1pt] (a1) at (0,0) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (4,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (3,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (9.5,3) ; \draw (a1) -- (b1); \draw (b1) -- (c1); \draw (c1)-- (a1); \pic [draw, "$63^\circ$", angle eccentricity=1.5] {angle = c1--b1--a1}; \pic [draw, "$34^\circ$", angle eccentricity=2.0] {angle = b1--a1--c1}; \pic [draw, "$83^\circ$", angle eccentricity=1.5] {angle = a1--c1--b1}; \draw (a2) -- (b2); \draw (b2) -- (c2); \draw (c2)-- (a2); \pic [draw, "$63^\circ$", angle eccentricity=1.5] {angle = c2--b2--a2}; \pic [draw, "$34^\circ$", angle eccentricity=2.0] {angle = b2--a2--c2}; \pic [draw, "$83^\circ$", angle eccentricity=1.5] {angle = a2--c2--b2}; \end{tikzpicture} These triangles are not congruent, but they are \emph{similar}. That is, they have the same shape, but are not necessarily the same scale. 
If you know two angles, you can calculate the third. So it makes sense to say ``If two triangles have two angles that are equal, they are similar triangles.'' And if two similar triangles have one side that is equal in length, they must be the same scale -- so they are congruent. Thus, the Side-Angle-Angle Congruency Test says that two triangles are congruent if two angles and a side match.

What if you know that two triangles have two sides that are the same length and that the angle between them is also equal?

\begin{tikzpicture}
\coordinate [circle, fill, inner sep=1pt] (a1) at (0,0) ;
\coordinate [circle, fill, inner sep=1pt] (b1) at (4,0) ;
\coordinate [circle, fill, inner sep=1pt] (c1) at (3,2) ;
\coordinate [circle, fill, inner sep=1pt] (a2) at (5,0) ;
\coordinate [circle, fill, inner sep=1pt] (b2) at (9,0) ;
\coordinate [circle, fill, inner sep=1pt] (c2) at (8,2) ;

\draw (a1) -- node[outer sep = 0.5pt, below]{4} (b1);
\draw [dashed] (b1) -- (c1);
\draw (c1)-- node[outer sep = 5pt, above]{3.5} (a1);
\pic [draw, "$34^\circ$", angle eccentricity=2.0] {angle = b1--a1--c1};

\draw (a2) -- node[outer sep = 0.5pt, below]{4} (b2);
\draw [dashed] (b2) -- (c2);
\draw (c2)-- node[outer sep = 5pt, above]{3.5} (a2);
\pic [draw, "$34^\circ$", angle eccentricity=2.0] {angle = b2--a2--c2};
\end{tikzpicture}

Yes, they must be congruent. This is the Side-Angle-Side Congruency Test.

What if the angle isn't the one between the two known sides? If it is a right angle, you can be certain the two triangles are congruent. (How do I know? Because the Pythagorean Theorem tells us that we can calculate the length of the third side. There is only one possibility, thus all three sides must be the same length.)

\begin{tikzpicture}
\coordinate [circle, fill, inner sep=1pt] (a1) at (0,0) ;
\coordinate [circle, fill, inner sep=1pt] (b1) at (4,0) ;
\coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ;
\coordinate [circle, fill, inner sep=1pt] (a2) at (5,0) ;
\coordinate [circle, fill, inner sep=1pt] (b2) at (9,0) ;
\coordinate [circle, fill, inner sep=1pt] (c2) at (9,2) ;

\draw (a1) -- node[outer sep = 0.5pt, below]{3.5} (b1);
\draw [dashed] (b1) -- (c1);
\draw (c1)-- node[outer sep = 5pt, above]{4} (a1);
\pic [draw] {right angle = a1--b1--c1};

\draw (a2) -- node[outer sep = 0.5pt, below]{3.5} (b2);
\draw [dashed] (b2) -- (c2);
\draw (c2)-- node[outer sep = 5pt, above]{4} (a2);
\pic [draw] {right angle = a2--b2--c2};
\end{tikzpicture}

In this case, the third side of each triangle must be $\sqrt{4^2 - 3.5^2} \approx 1.9$.

What if the known angle is less than $90^\circ$? \emph{The triangles are not necessarily congruent.} For example, let's say that there are two triangles with sides of length 5 and 7 and that the corresponding angle (at the end of the side of length 7) on each is $45^\circ$.
Two different triangles satisfy this: \begin{tikzpicture} \coordinate [circle, fill, inner sep=1pt] (a1) at (0,0) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (7,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,3) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (8,0) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (15,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (11,4) ; \draw (a1) -- node[outer sep = 0.5pt, below]{7} (b1); \draw [dashed] (b1) -- (c1); \draw (c1)-- node[outer sep = 5pt, above]{5}(a1); \pic [draw, "$45^\circ$", angle eccentricity=1.5] {angle = c1--b1--a1}; \draw (a2) -- node[outer sep = 0.5pt, below]{7} (b2); \draw [dashed] (b2) -- (c2); \draw (c2)-- node[outer sep = 5pt, above]{5} (a2); \pic [draw, "$45^\circ$", angle eccentricity=1.5] {angle = c2--b2--a2}; \end{tikzpicture} I think it will help to see how this happens if I lay one triangle on top of the other: \begin{tikzpicture} \coordinate [circle, fill, inner sep=1pt] (a1) at (0,0) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (7,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,3) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (3,4) ; \coordinate (d) at (2,5); \coordinate (e) at (3.5,3.5); \draw (a1) -- node[outer sep = 0.5pt, below]{7} (b1); \draw [dashed,->] (b1) -- (d); \draw [dashed] (a1) -- (e); \draw (c1)-- node[outer sep = 5pt, below]{5}(a1); \draw (c2)-- node[outer sep = 5pt, above]{5}(a1); \pic [draw, "$45^\circ$", angle eccentricity=1.5] {angle = c1--b1--a1}; \pic [draw, angle radius=8] {right angle = b1--e--a1}; \end{tikzpicture} So there is \emph{not} a general Side-Side-Angle Congruency Test. Here, then, is the list of common congruency tests: \begin{itemize} \item Side-Side-Side: All three sides have the same measure \item Side-Angle-Angle: Two angles and one side have the same measure \item Side-Angle-Side: Two sides and the angle between them have the same measure \item Side-Side-Right: They are right triangles and two sides have the same measure \end{itemize} \begin{Exercise}[title={Congruent Triangles}, label=con_triangles] Ted is terrible at drawing triangles: he always draws them exactly the same. Fortunately, he has marked these diagrams with the sides and angles that he measured. For each pair of triangles, write if you know them to be congruent and which congruency test proves it. For example: \begin{tikzpicture}[scale=0.7] \coordinate [circle, fill, inner sep=1pt] (a1) at (0,1) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (6,1) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (10,2) ; \draw (a1) -- node[outer sep = 0.5pt, below]{3.5} (b1); \draw (b1) -- node[outer sep = 2pt, right]{4} (c1); \draw (c1)-- node[outer sep = 2pt, above]{} (a1); \pic [draw, "$120^\circ$", angle eccentricity=2.0] {angle = c1--b1--a1}; \draw (a2) -- node[outer sep = 0.5pt, below]{3.5} (b2); \draw (b2) -- node[outer sep = 2pt, right]{4} (c2); \draw (c2) -- node[outer sep = 2pt, above]{} (a2); \pic [draw, "$120^\circ$",, angle eccentricity=2.0] {angle = c2--b2--a2}; \end{tikzpicture} (These drawings are clearly not accurate, but you are told the measurements are.) 
The answer is ``Congruent by the Side-Angle-Side test.'' \begin{multicols}{2} \begin{tikzpicture}[scale=0.6] \coordinate [circle, fill, inner sep=1pt] (a1) at (0,1) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (6,1) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (10,2) ; \draw (a1) -- node[outer sep = 0.5pt, below]{3.5} (b1); \draw (b1) -- node[outer sep = 2pt, right]{} (c1); \draw (c1)-- node[outer sep = 2pt, above]{6} (a1); \pic [draw, "$90^\circ$", angle eccentricity=2.0] {angle = c1--b1--a1}; \draw (a2) -- node[outer sep = 0.5pt, below]{3.5} (b2); \draw (b2) -- node[outer sep = 2pt, right]{} (c2); \draw (c2) -- node[outer sep = 2pt, above]{6} (a2); \pic [draw, "$90^\circ$",, angle eccentricity=2.0] {angle = c2--b2--a2}; \end{tikzpicture} \hspace{3cm} \begin{tikzpicture}[scale=0.6] \coordinate [circle, fill, inner sep=1pt] (a1) at (0,1) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (6,1) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (10,2) ; \draw (a1) -- node[outer sep = 0.5pt, below]{7} (b1); \draw (b1) -- node[outer sep = 2pt, right]{4} (c1); \draw (c1)-- node[outer sep = 2pt, above]{9} (a1); \draw (a2) -- node[outer sep = 0.5pt, below]{7} (b2); \draw (b2) -- node[outer sep = 2pt, right]{4} (c2); \draw (c2) -- node[outer sep = 2pt, above]{9} (a2); \end{tikzpicture} \begin{tikzpicture}[scale=0.6] \coordinate [circle, fill, inner sep=1pt] (a1) at (0,1) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (6,1) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (10,2) ; \draw (a1) -- node[outer sep = 0.5pt, below]{7} (b1); \draw (b1) -- node[outer sep = 2pt, right]{} (c1); \draw (c1)-- node[outer sep = 2pt, above]{} (a1); \pic [draw, "$35^\circ$",, angle eccentricity=2.0] {angle = c1--b1--a1}; \pic [draw, "$62^\circ$",, angle eccentricity=2.0] {angle = b1--a1--c1}; \draw (a2) -- node[outer sep = 0.5pt, below]{7} (b2); \draw (b2) -- node[outer sep = 2pt, right]{} (c2); \draw (c2) -- node[outer sep = 2pt, above]{} (a2); \pic [draw, "$35^\circ$",, angle eccentricity=2.0] {angle = c2--b2--a2}; \pic [draw, "$62^\circ$",, angle eccentricity=2.0] {angle = b2--a2--c2}; \end{tikzpicture} \hspace{3cm} \begin{tikzpicture}[scale=0.6] \coordinate [circle, fill, inner sep=1pt] (a1) at (0,1) ; \coordinate [circle, fill, inner sep=1pt] (b1) at (5,0) ; \coordinate [circle, fill, inner sep=1pt] (c1) at (4,2) ; \coordinate [circle, fill, inner sep=1pt] (a2) at (6,1) ; \coordinate [circle, fill, inner sep=1pt] (b2) at (11,0) ; \coordinate [circle, fill, inner sep=1pt] (c2) at (10,2) ; \draw (a1) -- node[outer sep = 0.5pt, below]{8} (b1); \draw (b1) -- node[outer sep = 2pt, right]{6} (c1); \draw (c1)-- node[outer sep = 2pt, above]{} (a1); \pic [draw, "$28^\circ$",, angle eccentricity=2.0] {angle = b1--a1--c1}; \draw (a2) -- node[outer sep = 0.5pt, below]{8} (b2); \draw (b2) -- node[outer sep = 2pt, right]{6} (c2); \draw (c2) -- node[outer sep = 2pt, above]{} (a2); \pic [draw, "$28^\circ$",, angle eccentricity=2.0] {angle = b2--a2--c2}; \end{tikzpicture} \end{multicols} 
\end{Exercise} \begin{Answer}[ref=con_triangles] \begin{multicols}{2} Congruent by the Side-Side-Right Congruency Test. Congruent by the Side-Side-Side Congruency Test. Congruent by the Side-Angle-Angle Congruency Test. We don't know if they are congruent. The measured angle is not between the measured sides. \end{multicols} \end{Answer}
{ "alphanum_fraction": 0.6307475317, "avg_line_length": 41.3411078717, "ext": "tex", "hexsha": "95ca715811c3bddeb41fa02ad9b7e021bc5f9907", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "rajivjhoomuck/sequence", "max_forks_repo_path": "Modules/TrianglesCircles/congruence-en_US.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "rajivjhoomuck/sequence", "max_issues_repo_path": "Modules/TrianglesCircles/congruence-en_US.tex", "max_line_length": 140, "max_stars_count": null, "max_stars_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "rajivjhoomuck/sequence", "max_stars_repo_path": "Modules/TrianglesCircles/congruence-en_US.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5392, "size": 14180 }
\FloatBarrier
\section{Combined Dynamics}
\label{sec:combined}

It is possible to define a set of dynamics and run them as a weighted combination. The resulting dynamic is defined as
\begin{equation}
\dot{ x } = \sum_{d\in \mathcal{D}} \gamma_d V_d( x ),
\end{equation}
where $\mathcal{D}=\{ Logit, RD, Smith, BNN \}$ denotes the set of available dynamics, $V_d()$ is the differential equation of the $d\th$ dynamic, and $\gamma_d$ is the weight assigned to it.

The dynamics should be defined in a cell array, e.g.,
\begin{lstlisting}
dynamics = {'bnn', 'rd'};
\end{lstlisting}
The combination is a linear combination of the dynamics listed in the cell array. The weight assigned to each dynamic is defined in the vector \verb|gamma|. In this case we assign
\begin{lstlisting}
gamma = [.25, .75];
\end{lstlisting}

Fig. \ref{fig:rps_combined} shows an example of the combined dynamics for the rock-paper-scissors game. Note that the evolution of the system is not confined to a limit cycle, as happened with the replicator dynamics in Fig. \ref{fig:finite1}.

\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{./images/test_combined.eps}
\caption{Simplex.}
\label{fig:test_combined_simplex}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{./images/test_combined_ev.eps}
\caption{Evolution of the strategies in time.}
\label{fig:test_combined_ev}
\end{subfigure}
\caption{Evolution of the combination of replicator dynamics and BNN dynamics.}
\label{fig:rps_combined}
\end{figure}
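As a purely illustrative sketch of the linear combination above (this is \emph{not} the toolbox's internal code, and the function and variable names are ours), the combined vector field for the rock-paper-scissors game can be written in plain MATLAB as
\begin{lstlisting}
% Illustrative only: weighted combination of BNN and replicator
% dynamics for rock-paper-scissors, following the equation above.
A = [0 -1 1; 1 0 -1; -1 1 0];      % RPS payoff matrix
x = [0.2; 0.3; 0.5];               % current state on the simplex
gamma = [.25, .75];                % weights for {'bnn', 'rd'}

F  = A*x;                          % payoff of each strategy
Fm = x'*F;                         % average payoff
ex = max(F - Fm, 0);               % positive excess payoffs

V_bnn = ex - x*sum(ex);            % BNN dynamics
V_rd  = x .* (F - Fm);             % replicator dynamics

x_dot = gamma(1)*V_bnn + gamma(2)*V_rd;   % combined vector field
\end{lstlisting}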
{ "alphanum_fraction": 0.7404907975, "avg_line_length": 40.75, "ext": "tex", "hexsha": "d0f73d2dc62a1c9a1c1adf5b37d5d122343551ad", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-12-26T10:20:34.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-16T00:40:13.000Z", "max_forks_repo_head_hexsha": "fea827a80aaa0150932e6e146907f71a83b7829b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sjtudh/PDToolbox_matlab", "max_forks_repo_path": "docs/combined_dynamics.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "fea827a80aaa0150932e6e146907f71a83b7829b", "max_issues_repo_issues_event_max_datetime": "2020-07-03T21:16:17.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-25T13:04:08.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sjtudh/PDToolbox_matlab", "max_issues_repo_path": "docs/combined_dynamics.tex", "max_line_length": 244, "max_stars_count": 28, "max_stars_repo_head_hexsha": "fea827a80aaa0150932e6e146907f71a83b7829b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "carlobar/PDToolbox_matlab", "max_stars_repo_path": "docs/combined_dynamics.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-08T09:22:42.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-13T09:50:02.000Z", "num_tokens": 459, "size": 1630 }
% Template for Cogsci submission with R Markdown

% Stuff changed from original Markdown PLOS Template
\documentclass[10pt, letterpaper]{article}

\usepackage{cogsci}
\usepackage{pslatex}
\usepackage{float}
\usepackage{caption}

% amsmath package, useful for mathematical formulas
\usepackage{amsmath}

% amssymb package, useful for mathematical symbols
\usepackage{amssymb}

% hyperref package, useful for hyperlinks
\usepackage{hyperref}

% graphicx package, useful for including eps and pdf graphics
% include graphics with the command \includegraphics
\usepackage{graphicx}

% Sweave(-like)
\usepackage{fancyvrb}
\DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
\DefineVerbatimEnvironment{Soutput}{Verbatim}{}
\DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
\newenvironment{Schunk}{}{}
\DefineVerbatimEnvironment{Code}{Verbatim}{}
\DefineVerbatimEnvironment{CodeInput}{Verbatim}{fontshape=sl}
\DefineVerbatimEnvironment{CodeOutput}{Verbatim}{}
\newenvironment{CodeChunk}{}{}

% cite package, to clean up citations in the main text. Do not remove.
\usepackage{apacite}

% KM added 1/4/18 to allow control of blind submission
\cogscifinalcopy

\usepackage{color}

% Use doublespacing - comment out for single spacing
%\usepackage{setspace}
%\doublespacing

% % Text layout
% \topmargin 0.0cm
% \oddsidemargin 0.5cm
% \evensidemargin 0.5cm
% \textwidth 16cm
% \textheight 21cm

\title{Selection of goal-consistent acoustic environments by adults and preschool-aged children}

\begin{document}

\maketitle

\begin{abstract}
Children are navigating a world with massive amounts of auditory input, sometimes relevant and other times purely noise, and must somehow make sense of it all. The early auditory environment is critical for speech perception and recognition, auditory discrimination, and word learning, all of which support language outcomes. What strategies do children use to learn in noisy environments? One potential strategy is environmental selection, which allows children to seek environments that align with particular goals. In the current paper, we examined whether children and adults make decisions about their environments by integrating auditory information and goal-states. While 3- and 4-year-olds struggle with discriminating the level of noise in noisy speech streams (and likely do not use this information for environmental selection), 5-year-old children and adults can. Further, we show initial evidence that they can use this information to reason about acoustic environments that are consistent with specific goals.

\textbf{Keywords:} active learning; auditory discrimination; auditory noise; cognitive development
\end{abstract}

\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}

Children's auditory environment supports language development, but this environment can also be noisy and chaotic. Acoustic noise is ubiquitous and unavoidable, from sounds as low as a whisper (30dB) to as high as crowded restaurants (90dB) (Erickson \& Newman, 2017). Children struggle with speech perception and word recognition in noisy environments, and often require signal-to-noise ratio (SNR) levels 5-7dB higher than adults listening to the same stimulus (Bjorklund \& Harnishfeger, 1990; Klatte, Bergström, \& Lachmann, 2013). Despite this, children manage to make sense of such a noisy world.
More than 20 million children living in the United States are exposed to dangerous noise levels daily, and 5 million of those children suffer from noise-induced hearing loss as a result (Viet, Dellarco, Dearborn, \& Neitzel, 2014). Unfortunately, children of color living in urban regions are over-represented in these numbers (Casey et al., 2017). Chronic exposure to noise has been correlated with poorer reading performance, reduced short term and episodic memory, and smaller expressive vocabularies in elementary school children (Clark, Sörqvist, \& others, 2012; Hygge, 2019; Riley \& McGregor, 2012). Yet despite suboptimal conditions, language acquisition, cognitive development, and full engagement with the environment is still possible, albeit more difficult. What strategies do children use in these conditions? One observation is that children's attention or discrimination abilities may shift when faced with suboptimal auditory patterns, even if this causes deleterious long- term outcomes. For example, Cohen, Glass, \& Singer (1973) measured the sound pressure levels in and around a noisy Manhattan high-rise apartment complex where 8- and 9-year-old middle class students lived, and then asked how this chronic noise exposure related to reading performance. Auditory discrimination mediated the relationship between reading comprehension/ability and auditory noise, such that children exposed to higher levels of auditory noise in the home not only filtered out the noise, but also filtered out important information that may support reading ability. Because children were indiscriminately filtering out both acoustic signal and noise, this strategy might be considered maladaptive over time- one that primarily affects children exposed to chronic noise. It is possible, however, that children can and do make use of adaptive strategies under acoustic constraints. Consider a problem space in which children learn to optimize their auditory environments to successfully complete certain goals. For example, a child might find that reading is best done in a library, not just because of its convention (because libraries function as places to read/check out books), but because it is a quiet space. Such a strategy might allow children to exploit environmental variation in noise to maximize their ability to learn in suboptimal or variable conditions. In the current paper, we asked whether preschool children can reason about their auditory environment and how it relates to specific goals. Environmental selection of this type is a type of active learning, in which an agent makes choices to shape its own learning. The dominant approach to studying active learning has emphasized how learners approach individual stimuli (e.g., Settles, 2009). When faced with uncertainty, both human and machine systems can learn actively by choosing new stimuli to query that are informative with respect to the learner's current knowledge state (Castro et al., 2008). Infants, too, have been shown to use active learning strategies (Ruggeri, Swaboda, Sim, \& Gopnik, 2019; see Xu, 2019 for review). Although most active learning research has focused on stimulus selection, perhaps children and adults are engaging in active learning by also making decisions about the environments in which they learn. In practice, this behavior may present itself as moving to a different room to study for an upcoming exam or playing in a room with other children who seem to be having the kind of fun you desire. 
We might expect humans to seek out environments that best support their goals, and we might observe this strategy even in young children. In the current paper, we took a first step towards investigating whether children and adults actively select their auditory environment to achieve their goals. We conducted two experiments with both children and adults. Although our primary interest is whether and how children engage in environmental selection, we also collected adult samples to offer comparisons of how cognitively mature individuals might respond to these tasks. To ensure that the stimuli we use can be discriminated by children in our target age range, Experiments 1a and 1b investigate children's and adults' auditory discrimination of noise in long speech streams. Experiments 2a and 2b then examine whether children and adults can select auditory environments that match a goal.

\hypertarget{experiment-1a}{%
\section{Experiment 1a}\label{experiment-1a}}

Previous research has consistently shown that adults can discriminate two sounds that differ by 5dB or less, and children as young as four perform similarly to adults in discriminating contrasts as low as 5dB (Jensen \& Neff, 1993). However, the stimuli commonly used to measure intensity discrimination tend to be short tonal bursts. These differ considerably from children's real-world auditory experiences, which are not always transient and can reflect more sustained noise. Additionally, noise exposure is not limited to non-speech noise (e.g., white noise). Multi-talker noise is one initial example of a kind of noise that occurs in children's natural environments and that has been used across other studies as a more ecological noise stimulus (Fallon, Trehub, \& Schneider, 2000; McMillan \& Saffran, 2016). Thus, in our first experiment, we aimed to build on previous discrimination studies by creating an intensity discrimination and preference paradigm that used longer audio streams (up to 25s) and naturalistic multi-talker noise. This experiment (and its counterpart with children, Experiment 1b) sets the stage for further experiments on environmental selection.

\hypertarget{methods}{%
\subsection{Methods}\label{methods}}

\hypertarget{participants}{%
\subsubsection{Participants}\label{participants}}

A total of 40 adults (mean age = 27.68 years; 52.5\% Caucasian/White) living in the United States at the time of test were recruited to participate via the online platform Prolific. Testing was restricted to a laptop, desktop, or tablet. All participants were fluent in English and had no severe visual or cognitive impairments. To preserve the quality of the data, participants also completed two attention check questions and were excluded if they failed one or more of the attention checks. For this reason, an additional 6 participants were excluded from analysis. Informed consent was collected from each participant before the experiment began.

\hypertarget{materials-and-procedure}{%
\subsubsection{Materials and Procedure}\label{materials-and-procedure}}

\begin{CodeChunk}
\begin{figure}[t]

{\centering \includegraphics{figs/e1-stimuli-1}

}

\caption[One of 10 animated classrooms participants viewed during the session]{One of 10 animated classrooms participants viewed during the session.}\label{fig:e1-stimuli}
\end{figure}
\end{CodeChunk}

Participants were told that they would watch 25s animated videos from each of the ten classrooms in The Alphabet School, a fictional preschool program in which each class learns one letter of the alphabet from A--J.
Classrooms were created with Vyond animation software. Each classroom was depicted in the videos as having 5--6 preschool children and one adult teacher with stereotypical male or female presentation. The wall colors of each classroom identified which classroom participants were viewing. In each video, the teacher would tell the students which letter of the alphabet they would be learning, followed by three images on a whiteboard of animals or objects that begin with that letter. Figure 1 illustrates one of the ten classrooms shown during the session. Participants viewed two videos per trial, for a total of five trials.

Importantly, the classrooms differed in their signal-to-noise ratios (SNRs), which ranged from 5--25dB. Each teacher's speech was registered at 65dB, and the background noise, a recording of live preschool classrooms collected by the first author, was equalized on speech, subtracting any silence in the clips, and ranged from 35--60dB. The two videos for each trial differed from each other in noise level by 5--25dB. At the end of each trial, participants indicated which classroom was the louder of the two. To understand how participants evaluated the referent of the question, we also asked at the end of the experiment whether the term ``louder'' {[}in the question, ``Which room was louder- Room (X) or Room (Y)''{]} referred to the loudness of the speaker or the loudness of the background noise; this was not an exclusion criterion. The majority of participants -- 33/40 -- indicated the loudness of the background noise as the referent of the question. Additionally, to reduce participant inattention in the data, we included two attention check questions and excluded participants who answered at least one question incorrectly.

SNR levels of each classroom were counterbalanced across trials and conditions. Because SNR is a relative measure, the relative intensity between stimuli was standardized across participants. While we do recognize potential differences in absolute intensity between participants, this difference alone likely has no significant bearing on the results presented here.

\hypertarget{results-and-discussion}{%
\subsection{Results and Discussion}\label{results-and-discussion}}

\begin{CodeChunk}
\begin{figure}[t]

{\centering \includegraphics{figs/e1a-bar-1}

}

\caption[Results from Experiment 1a]{Results from Experiment 1a. Proportion of responses correctly indicating the stimuli with the greatest sound pressure level. Participants were presented with a binary choice and had a 50\% chance of correctly responding. SNR levels on the x-axis ranged from (left to right) 5, 10, 15, 20, and 25dB. Error bars show 95\% confidence intervals.}\label{fig:e1a-bar}
\end{figure}
\end{CodeChunk}

Given prior data, we expected that across SNR levels, adults would correctly identify relative differences in the auditory environments presented in this experiment (which served primarily as a comparison for Experiment 1b with children). We preregistered {[}\url{https://osf.io/tqay9}{]} a Bayesian mixed-effects logistic regression predicting correct responding as a function of SNR, with a maximal random effect structure (random slopes by SNR and a random intercept by participant). SNR level was centered at 15 dB. In this and subsequent models, we used the package default of weakly informative priors (normal distributions on coefficients with SD=2.5, scaled to predictor magnitudes).
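For readers who want to map this description onto a concrete specification, the model corresponds to a formula of roughly the following form (our sketch in R-style mixed-model notation; the variable names and the fitting package are not given in the paper):

\begin{CodeChunk}
\begin{CodeInput}
# Sketch only: accuracy (0/1) as a function of centered SNR, with
# by-participant random intercepts and random slopes for SNR,
# fit as a Bayesian logistic (Bernoulli) mixed-effects model.
correct ~ SNR_centered + (1 + SNR_centered | participant)
\end{CodeInput}
\end{CodeChunk}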
On average, adults were above chance across all five SNR levels (intercept: \(\beta\) = 2.15, 95\% Crl = {[}1.66 - 2.88{]}), and there was a modest effect of SNR on performance (SNR: \(\beta\) = 0.08, 95\% Crl = {[}0.01 - 0.16{]}). Data are shown in Figure 2. This finding is both a replication of previous studies, which have found similar performance levels in adults, and an extension showing that these findings hold even with more complex stimuli. These results affirm that adults' auditory discrimination skills are fully mature and that they possess the cognitive resources necessary to successfully complete this task.

\hypertarget{experiment-1b}{%
\section{Experiment 1b}\label{experiment-1b}}

In Experiment 1b, we reran the same experiment with 3--5-year-old children.

\hypertarget{methods-1}{%
\subsection{Methods}\label{methods-1}}

\hypertarget{participants-1}{%
\subsubsection{Participants}\label{participants-1}}

36 children (3;0 years-5;11 years, mean age = 4 years, 12 children per age group, 41.7\% Caucasian/White) completed the same task as adults in Experiment 1a with a few notable differences. An additional 7 children were ultimately excluded from analysis because their caregivers indicated they heard English less than 75\% of the time. Participants were recruited through online advertisements on social media and through direct sign-ups on a multi-lab developmental research website.

\hypertarget{materials-and-procedure-1}{%
\subsubsection{Materials and Procedure}\label{materials-and-procedure-1}}

Children were tested synchronously over the Zoom platform by an undergraduate research assistant. The researcher first collected informed consent from the caregiver, who was often present but instructed not to engage during the session, followed by assent from the child. Children whose caregivers pointed to the computer screen or provided answers during the session were excluded from analysis. Due to the age range of interest, the experiment was presented strictly through images and videos, and the research assistant verbally explained each slide to the children. Between trials, children were given virtual gold stars, which served to pace the experiment and to maintain engagement. Children were not provided any feedback on their performance. Unlike adults, children were not asked to identify whether the speaker or the background was the referent of ``louder.''

\hypertarget{results-and-discussion-1}{%
\subsection{Results and Discussion}\label{results-and-discussion-1}}

\begin{CodeChunk}
\begin{figure}[t]

{\centering \includegraphics{figs/e1b-bar-1}

}

\caption[Experiment 1b]{Experiment 1b. Proportion of correct responses across SNR levels from 5--25dB. Error bars show 95\% confidence intervals.}\label{fig:e1b-bar}
\end{figure}
\end{CodeChunk}

We anticipated that, while the strength of the effect would increase with age, all children would correctly identify relative differences in SNRs from 10--25dB, and that only three-year-old children would be unable to correctly identify this difference at 5dB. We ran the same Bayesian logistic regression presented in Experiment 1a, but added age (centered at the mean) as a main effect. Figure 3 demonstrates a similar, though weaker, pattern of auditory discrimination skills in preschool children. In the aggregate, 3--5-year-old children showed some discrimination ability on the current paradigm (intercept: \(\beta\) = -4.52, Crl = {[}-6.77 - -2.42{]}), but independent of SNR (SNR: \(\beta\) = 0.14, Crl = {[}-0.14 - 0.42{]}).
Age played a larger role in children's performance than we anticipated. To explore this effect, we binned the data by the child's age in years {[}3;0-3;11, 4;0-4;11, and 5;0-5;11 years{]} and reran the same analysis. Older children were more likely to correctly discriminate auditory signals than younger children (intercept : \(\beta\) = -3.14, Crl = {[}-4.94 - -1.47{]}). Our findings differed from prior results in that only 5 year olds appeared to be robustly above chance in discrimination. There are several possible reasons for this disparity. First, as described earlier, this task is much more challenging than prior rapid discrimination tasks: it requires assessing the level of noise in a video, remembering it, and comparing it to another over the course of almost a minute. Additionally, the type of stimuli presented here differs from the tonal bursts or other non-speech sounds used in earlier work. \hypertarget{experiment-2a}{% \section{Experiment 2a}\label{experiment-2a}} If 5-year-old participants can successfully discriminate between sound pressure levels, can they then use this information to reason about which goals are most appropriate in these environments? In our next set of experiments, we assessed this hypothesis. Participants watched a video of a third-person character with several goals and were asked to select the environment in which he should complete these goals. As in Experiment 1, we began by assessing performance in a convenience sample of adults. \hypertarget{methods-2}{% \subsection{Methods}\label{methods-2}} \hypertarget{participants-2}{% \subsubsection{Participants}\label{participants-2}} 128 adults (mean age = 27.82 years; 69.5\% Caucasian/White) living in the United States at the time of test were recruited to participate via the online platform, Prolific. An additional 19 participants were excluded from analysis for failing one or more of the attention checks. Testing was restricted to a laptop, desktop, or tablet. All participants were fluent in English and had no severe visual or cognitive impairments. Informed consent was collected from each participant before the experiment began. \hypertarget{materials-and-procedure-2}{% \subsubsection{Materials and Procedure}\label{materials-and-procedure-2}} Participants were introduced to a preschool-aged character named Ryan with eight goals to complete throughout the experiment: (1) to read a book, (2) to build a tower out of blocks, (3) to learn the letters of the alphabet, (4) to paint a picture, (5) to dance to his favorite music, (6) to learn a new language called Zerpie, (7) to talk to a friend, and (8) to eat lunch. All activities had relatively simple explanations with the exception of (6). For this trial, participants were told that Ryan's new neighbor, Logan, speaks a rare language called Zerpie, a language he doesn't speak. Ryan wants to learn Zerpie so he can communicate with Logan. In each of the eight trials, participants watched a video in which Ryan stood in between two closed doors labeled ``A'' and ``B'', respectively. Before the video began, participants were told to watch and listen carefully to decide which of the two rooms Ryan should go to in order to complete his goal. As in Experiment 1, we manipulated the sound level of each room, but removed any classroom stimuli, including the teacher, and only depicted one child opening and standing in front of each door. 
As such, participants did not have access to any visual information about the room, and could only rely on auditory information, as well as any information provided by the character who opened the door. Each character's voice was equalized to 65dB and, unlike in Experiment 1, all characters shared the same voice. All characters except Ryan were preschool girls but differed in appearance. The same background noise as in Experiment 1 was used for the current experiment. For each trial, the difference in SNR between the two rooms was randomly selected to be either 5, 10, 15, 20, or 25dB such that on average participants heard a range of smaller and larger intensity differences.

During the video, each character would open their respective door beginning with Room A. The character in Room A always said, ``You can {[}goal{]} in this room'', while the character in Room B always said, ``Or you can {[}goal{]} in this room.'' While the room on the left was always labeled ``A'' and the room on the right was always labeled ``B'', the characters from, and sound levels of, each room, as well as goal order, were counterbalanced across conditions. For each trial, participants were told which goal Ryan wanted to complete and were asked to select the room in which he should complete his goal. After making a selection, they were then asked to briefly explain their choice. Responses for the quieter room (relative to the other and based on the actual sound pressure level) were given a 1, while responses for the louder room were given a 0.

\hypertarget{results-and-discussion-2}{%
\subsection{Results and Discussion}\label{results-and-discussion-2}}

\begin{CodeChunk}
\begin{figure}[t]

{\centering \includegraphics{figs/e2a-bar-1}

}

\caption[Experiment 2a]{Experiment 2a. Proportion of participants selecting the quiet room based on activity, with activities sorted by response level.}\label{fig:e2a-bar}
\end{figure}
\end{CodeChunk}

We expected that adults would select the quieter room when the goal was (1) to read a book, (2) to learn the new language called Zerpie, and (3) to learn the letters of the alphabet. We were uncertain but thought that some adults might be more likely to select the louder room when the goal was (1) to dance to his favorite music, (2) to talk to a friend, and (3) to build a tower out of blocks because these are more social activities and louder rooms might imply more people being present. Additionally, we expected participants to have no sound level preference for (1) eating lunch and (2) painting a picture because the goals are unconnected with the auditory environment.

As in Experiments 1a and 1b, we preregistered {[}\url{https://osf.io/hjqys}{]} a Bayesian mixed-effects logistic regression predicting environmental preference as a function of activity type. Figure 4 depicts adult participants' preferences for quieter environments based on the chosen activity. Coefficients for the read, learn, Zerpie, paint, and dance activities all had 95\% credible intervals that did not overlap with zero. Interestingly, only for the dance activity did adults choose the louder room more than 50\% of the time, likely reflecting some ambivalence about whether someone might want to, e.g., eat in a loud room. In sum, these findings suggest adults can reason about the match between acoustic environments and activity goals.
\hypertarget{experiment-2b}{%
\section{Experiment 2b}\label{experiment-2b}}

In the next study, we asked whether children could also evaluate the match between acoustic environments and activity goals. Following the results of Experiment 1b, we conducted this experiment with 5-year-olds only.

\hypertarget{methods-3}{%
\subsection{Methods}\label{methods-3}}

\hypertarget{participants-3}{%
\subsubsection{Participants}\label{participants-3}}

30 5-year-old children (69.5\% Caucasian/White) completed a truncated version of Experiment 2a, both to prevent testing fatigue and to maximize any response differences based on the presented goals. Participants were initially recruited and tested at a local Bay Area preschool, but due to COVID restrictions, recruitment moved exclusively online. In total, 8 participants were tested in-person and 22 were tested online. The in-person testing was conducted with both caregiver consent and participant assent. As with the online testing, participants were included only if they heard English at home at least 75\% of the time and had no known cognitive, visual, or neurological impairments, which led to an exclusion of an additional 8 children.

\hypertarget{materials-and-procedure-3}{%
\subsubsection{Materials and Procedure}\label{materials-and-procedure-3}}

We tested children on the four activities with the widest differences observed in Experiment 2a: (1) to read a book, (2) to learn the letters of the alphabet, (3) to build a tower out of blocks, and (4) to dance to music, for a total of four trials. Additionally, participants in this experiment were only shown videos in which the two rooms had SNR differences of 25dB because there were no differences in performance across SNR levels in Experiment 1b. Rooms and characters depicted in the videos remained consistent with Experiment 2a, with one exception: the room labels, ``A'' and ``B'', were replaced with one black circle for Room 1 and two black circles for Room 2. This change was implemented after finding that several participants in the pilot study seemed to favor the letter A over B, and because these letter labels may interfere with responses when the goal is to learn the letters of the alphabet. Black circle labels, on the other hand, are more abstract and may reduce this bias. As done previously, the characters, sound pressure levels, and goal order were counterbalanced across conditions.

Whether testing online or in-person, participants were shown the same set of videos and a research assistant (for online testing) or the first author (for in-person testing) verbally explained each slide and video to participants. After watching each video, participants were asked to select the room in which Ryan should complete his goal and to briefly explain their response. As in Experiment 2a, responses for the quieter room (relative to the other and based on the actual sound pressure level) were given a 1, while responses for the louder room were given a 0.

\hypertarget{results-and-discussion-3}{%
\subsection{Results and Discussion}\label{results-and-discussion-3}}

\begin{CodeChunk}
\begin{figure}[t]

{\centering \includegraphics{figs/2b-bar-1}

}

\caption[Experiment 2b]{Experiment 2b. Proportion of participants selecting the quieter room by activity.}\label{fig:2b-bar}
\end{figure}
\end{CodeChunk}

We expected to see a response pattern similar to, though weaker than, that of adult participants in Experiment 2a. Figure 5 depicts children's preferences for quieter environments based on the chosen activity.
We ran the same logistic regression as in Experiment 2a. Children were more likely than chance to select the quieter room for book reading (which was set to the intercept: \(\beta\) = 1.56, Crl = {[}0.49 - 3{]}), but credible intervals for the other activities overlapped zero, suggesting that they could not individually be differentiated from those for the read activity. Overall, children appeared to have a preference for the quieter room across activities. Children's preference across activities appeared different from those of adults. For example, adults strongly preferred to learn in a quiet room while children had numerically the lowest quiet preference for the learning activity. We speculate that children's associations with these activities may differ from those of adults: for example, many children may think of learning as something to be done in a noisy classroom setting. As an exploratory analysis, we asked whether the inclusion of activity predictors as a whole improved model fit over an intercept-only model by using bridge sampling to compare between models with and without activity as a predictor. This comparison revealed a Bayes Factor of 27.64 in favor of the activity model, suggesting that as a whole these predictors did substantially improve model fit and hence children showed some sensitivity to goal in their room selections, despite their bias for the quieter room. \hypertarget{general-discussion}{% \section{General Discussion}\label{general-discussion}} We asked here whether adults and children can reason about how acoustic noise changes their environment. We found that both 5-year-old children and adults could discriminate noise levels differing by 5dB in long-form auditory stimuli. On the other hand, 3- and 4-year-old children were unable to do so. We then asked whether 5-year-olds and adults would reason about which acoustic environments best matched a particular activity goal. Adults showed clear and graded sensitivity, choosing quieter environments for reading and learning and louder environments for dancing. Five-year-olds were more likely to select the quieter room overall but showed initial evidence that they differentiated between activities as well. In other research, children in the age ranges we studied show evidence that they learn actively (Ruggeri et al., 2019; Xu, 2019), pursue ways to reduce uncertainty when faced with a possible reward (Feldstein \& Witryol, 1971), and search for additional information on a particular topic when their intuitive theories are less informative (Wang, Yang, Macias, \& Bonawitz, 2021). Yet we found that younger children struggled even to differentiate environments with different levels of noise, and even 5-year-olds showed only modest sensitivity to the congruence between acoustic environments and goals. Each of these tasks may have been challenging for children for reasons unrelated to their sensitivity to the underlying constructs, however. The discrimination task required encoding and comparing noise levels across two different 25s videos, which might have been challenging for reasons of attention and memory. And the environmental selection task required noticing that the rooms differed in noise levels and encoding their noise levels as well as associating different noise levels with particular activities. Thus, in future work we intend to explore simpler and more naturalistic paradigms for evaluating children's environmental selection abilities. 
There are several further limitations that point the way towards new experiments. First, our research relied on convenience samples and so our specific estimates are not broadly generalizable to other populations. Second, the paradigm used third-party scenarios where participants assisted someone else with achieving certain goals; it is still unknown whether children would make similar decisions if they themselves were given goals to complete. Finally, there is a possibility that participants' familiarity with the context of particular activities (e.g., that they have typically danced in a noisy preschool classroom) influenced their environmental preferences. Future work should explore novel activities where participants cannot rely on their current knowledge about which auditory environments are most optimal for each activity. By understanding the strategies children use to learn in noisy auditory environments, we might offer better solutions for those exposed to chronic noise, thereby mitigating some of its negative effects. Such mitigation is becoming more and more critical as cities become more populated (bringing construction with it) and auditory noise becomes even more unavoidable. Future studies will need to (1) explore the developmental trajectory of environmental selection, and (2) examine the boundaries of environmental selection by probing these questions with other goals and in other contexts (e.g.~first-person settings). Investigating how children learn in noise will ultimately bring us closer to understanding how children can thrive across a wide range of environments. \hypertarget{references}{% \section{References}\label{references}} \setlength{\parindent}{-0.1in} \setlength{\leftskip}{0.125in} \noindent \hypertarget{refs}{} \leavevmode\hypertarget{ref-bjorklund1990}{}% Bjorklund, D. F., \& Harnishfeger, K. K. (1990). The resources construct in cognitive development: Diverse sources of evidence and a theory of inefficient inhibition. \emph{Developmental Review}, \emph{10}(1), 48--71. \leavevmode\hypertarget{ref-casey2017}{}% Casey, J. A., Morello-Frosch, R., Mennitt, D. J., Fristrup, K., Ogburn, E. L., \& James, P. (2017). Race/ethnicity, socioeconomic status, residential segregation, and spatial variation in noise exposure in the contiguous united states. \emph{Environmental Health Perspectives}, \emph{125}(7), 077017. \leavevmode\hypertarget{ref-castro2008}{}% Castro, R. M., Kalish, C., Nowak, R., Qian, R., Rogers, T., \& Zhu, X. (2008). Human active learning. In \emph{Advances in neural information processing systems} (pp. 241--248). Citeseer. \leavevmode\hypertarget{ref-clark20123}{}% Clark, C., Sörqvist, P., \& others. (2012). A 3 year update on the influence of noise on performance and behavior. \emph{Noise and Health}, \emph{14}(61), 292. \leavevmode\hypertarget{ref-cohen1973}{}% Cohen, S., Glass, D. C., \& Singer, J. E. (1973). Apartment noise, auditory discrimination, and reading ability in children. \emph{Journal of Experimental Social Psychology}, \emph{9}(5), 407--422. \leavevmode\hypertarget{ref-erickson2017}{}% Erickson, L. C., \& Newman, R. S. (2017). Influences of background noise on infants and children. \emph{Current Directions in Psychological Science}, \emph{26}(5), 451--457. \leavevmode\hypertarget{ref-fallon2000}{}% Fallon, M., Trehub, S. E., \& Schneider, B. A. (2000). Children's perception of speech in multitalker babble. \emph{The Journal of the Acoustical Society of America}, \emph{108}(6), 3023--3029. 
\leavevmode\hypertarget{ref-feldstein1971}{}% Feldstein, J. H., \& Witryol, S. L. (1971). The incentive value of uncertainty reduction for children. \emph{Child Development}, 793--804. \leavevmode\hypertarget{ref-hygge2019}{}% Hygge, S. (2019). Noise and cognition in children. \leavevmode\hypertarget{ref-jensen1993}{}% Jensen, J. K., \& Neff, D. L. (1993). Development of basic auditory discrimination in preschool children. \emph{Psychological Science}, \emph{4}(2), 104--107. \leavevmode\hypertarget{ref-klatte2013}{}% Klatte, M., Bergström, K., \& Lachmann, T. (2013). Does noise affect learning? A short review on noise effects on cognitive performance in children. \emph{Frontiers in Psychology}, \emph{4}, 578. \leavevmode\hypertarget{ref-mcmillan2016}{}% McMillan, B. T., \& Saffran, J. R. (2016). Learning in complex environments: The effects of background speech on early word learning. \emph{Child Development}, \emph{87}(6), 1841--1855. \leavevmode\hypertarget{ref-riley2012}{}% Riley, K. G., \& McGregor, K. K. (2012). Noise hampers children's expressive word learning. \leavevmode\hypertarget{ref-ruggeri2019}{}% Ruggeri, A., Swaboda, N., Sim, Z. L., \& Gopnik, A. (2019). Shake it baby, but only when needed: Preschoolers adapt their exploratory strategies to the information structure of the task. \emph{Cognition}, \emph{193}, 104013. \leavevmode\hypertarget{ref-settles2009}{}% Settles, B. (2009). Active learning literature survey. \leavevmode\hypertarget{ref-viet2014}{}% Viet, S. M., Dellarco, M., Dearborn, D. G., \& Neitzel, R. (2014). Assessment of noise exposure to children: Considerations for the national children's study. \emph{Journal of Pregnancy and Child Health}, \emph{1}(1). \leavevmode\hypertarget{ref-wang2021}{}% Wang, J., Yang, Y., Macias, C., \& Bonawitz, E. (2021). Children with more uncertainty in their intuitive theories seek domain-relevant information. \emph{Psychological Science}, \emph{32}(7), 1147--1156. \leavevmode\hypertarget{ref-xu2019}{}% Xu, F. (2019). Towards a rational constructivist theory of cognitive development. \emph{Psychological Review}, \emph{126}(6), 841. \bibliographystyle{apacite} \end{document}
{ "alphanum_fraction": 0.7961034055, "avg_line_length": 48.889182058, "ext": "tex", "hexsha": "8e07aed0ede189210d02be6609f3bd00904ced8f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f408fab2cc2655d4a2020b2931b9a0762680fd9a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rondeline/adapt", "max_forks_repo_path": "writeup/ADAPT_CogSci22_AnonymousSubmission.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f408fab2cc2655d4a2020b2931b9a0762680fd9a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rondeline/adapt", "max_issues_repo_path": "writeup/ADAPT_CogSci22_AnonymousSubmission.tex", "max_line_length": 171, "max_stars_count": null, "max_stars_repo_head_hexsha": "f408fab2cc2655d4a2020b2931b9a0762680fd9a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rondeline/adapt", "max_stars_repo_path": "writeup/ADAPT_CogSci22_AnonymousSubmission.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8861, "size": 37058 }
\documentclass[{{cookiecutter.project_slug}}.tex]{subfiles} \begin{document} \chapter{Appendix} \end{document}
{ "alphanum_fraction": 0.7699115044, "avg_line_length": 16.1428571429, "ext": "tex", "hexsha": "880b5d9139aaf2e0f1db2c04ae7417a194d74a94", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-16T12:57:47.000Z", "max_forks_repo_forks_event_min_datetime": "2020-03-23T16:54:47.000Z", "max_forks_repo_head_hexsha": "03559cb4cc4cf09ce38b6dc1553a69f9442729da", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Oli4/cookiecutter-latex-thesis", "max_forks_repo_path": "{{cookiecutter.project_slug}}/Appendix.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "03559cb4cc4cf09ce38b6dc1553a69f9442729da", "max_issues_repo_issues_event_max_datetime": "2022-02-26T22:42:18.000Z", "max_issues_repo_issues_event_min_datetime": "2018-08-07T12:25:25.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Oli4/cookiecutter-latex-thesis", "max_issues_repo_path": "{{cookiecutter.project_slug}}/Appendix.tex", "max_line_length": 59, "max_stars_count": 5, "max_stars_repo_head_hexsha": "03559cb4cc4cf09ce38b6dc1553a69f9442729da", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Oli4/cookiecutter-latex-thesis", "max_stars_repo_path": "{{cookiecutter.project_slug}}/Appendix.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-26T22:40:02.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-25T06:29:08.000Z", "num_tokens": 33, "size": 113 }
\subsection{A.1. First} Impact and need for something \cite{article}. \subsection{A.2. Second} ... \subsection{A.3. Third} ... \subsection{A.4. Summary} ...
{ "alphanum_fraction": 0.6369047619, "avg_line_length": 9.8823529412, "ext": "tex", "hexsha": "6ce667d7b90720747817e1dc1a065579b32e839d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "730cc2b7933899156b05fbf83365a0e76ff0c974", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wanwanbeen/columbia_phd_proposal_template", "max_forks_repo_path": "sec_A_significance.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "730cc2b7933899156b05fbf83365a0e76ff0c974", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wanwanbeen/columbia_phd_proposal_template", "max_issues_repo_path": "sec_A_significance.tex", "max_line_length": 45, "max_stars_count": 2, "max_stars_repo_head_hexsha": "730cc2b7933899156b05fbf83365a0e76ff0c974", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wanwanbeen/columbia_phd_proposal_template", "max_stars_repo_path": "sec_A_significance.tex", "max_stars_repo_stars_event_max_datetime": "2017-11-05T02:07:42.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-05T01:06:19.000Z", "num_tokens": 53, "size": 168 }
\documentclass[main.tex]{subfiles} \begin{document} \chapter{Logic} \label{chapter:logic} \epigraph{All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.}{Douglas Adams} \minitoc \section{Introduction} Let's spell out two logic puzzles: \begin{center} \textit{Sam has 1 cow. If Sam has at least 2 cows, then Sam can breed the cows to make one more cow. Assuming Sam has access to infinite resources and time, how many cows can Sam make?} \end{center} \begin{center} \textit{Two givens: knights always tell the truth, and knaves always lie. On the island of knights and knaves, you are approached by two people. The first one says to you, ``we are both knaves." What are they actually?} (from Popular Mechanic's Riddle of the Week \#43: Knights and Knaves, Part 1 \cite{pop-mech}) \end{center} Thinking logically about these puzzles will help you -- think about what can and cannot happen; what can and cannot be true. Solving these problems is left as an exercise. There are only two possibilities for Boolean\index{Boolean} statements -- True or False. Here's a formal definition of logic, from Merriam-Webster: \begin{defn}[Logic\index{Logic}] A science that deals with the principles and criteria of validity of inference and demonstration : the science of the formal principles of reasoning \cite{logic} \end{defn} This chapter includes a wealth of topics. We will touch on propositional logic and its corollaries, as well as predicate logic and it's applications to the rest of this course. Thinking logically should be a natural process, so we hope this section is relatively straightforward. \section{Propositional Logic} Propositional logic is a branch of logic that deals with simple propositions and logical connectives. Sometimes propositional logic is referred to as \textbf{zeroth-order logic}, as it lays the foundations for \textit{predicate logic}, also known as \textit{first-order logic}. \begin{defn}[Proposition] \index{Proposition} A statement that is exclusively either true or false. A proposition must \textit{have} a true or false value, and it cannot have \textit{both}. Propositions are usually represented as variables. These variables can represent a single statement, or large compound statements. We will see examples later \end{defn} \begin{defn}[Logical Connective] An operation that connects two propositions. We study these below \end{defn} The motivation behind propositional logic is that we want to represent basic logical statements as an expression of variables and operators. Propositional logic also lays the groundwork for higher-order logic. Before we start, let us motivate propositional logic and Boolean algebra by recalling the fundamental ideas of mathematical concepts we are already familiar with. For standard arithmetic, we utilize real numbers \(\R\), binary (two inputs) operators \(+\),\(-\),\(\times\),\(\div\),\(\cdots\), and unary (one input) operators \(\exp(\cdot)\), \(\log(\cdot)\), \(\sqrt{\cdot}\), \(\text{abs}(\cdot)\), \((-1)(\cdot)\),\(\cdots\). Note that our operators here take elements from our main set, and return an element back inside that set. For example, \(4.01 + 3.99\) gives us 8. We have a way to signal that those two quantities above are the same, namely the equals sign \(=\). Finally, we have a way to abstract out elements as \textit{variables}, for example \(x+y=z\). And if we fill in some of the variables, we can solve for the other ones. 
For example \(x+3.99=8\) implies \(x=4.01\). In a similar way, this whole mathematical system can be completely abstracted, and we can reason about those abstract structures. Then, any specific example that has the same inherent properties as our structure will also satisfy any theorem we reason out about it. This is quite an important idea in mathematics -- we can abstract concepts and reason about abstract structures that can apply to a variety of specific examples. \begin{figure}[h] \centering \[(\hspace{2mm} \rule{0.5cm}{0.5pt} \hspace{2mm} \mathbin{\square} \hspace{2mm} \rule{0.5cm}{0.5pt} \hspace{2mm}) \mapsto \hspace{1mm} \rule{0.5cm}{0.5pt}\] \caption{ In abstract mathematics, we care about \textit{operators} that \textit{map} elements from a set into another element in the same set. For addition in the reals, this would look like \((4.01+3.99) \mapsto 8\). } \end{figure} Why does this apply here? Well, for propositional logic, we are, essentially, going to \textit{define} a new \textit{structure}. Our new ``number set'' becomes \(\B = \{\mathbf{T},\mathbf{F}\}\) (analogous to \(\R\)). Our new ``unary operators'' become \(\lnot\) (analogous to multiplying by \(-1\), etc). Our new ``binary operators'' become \(\land,\lor,\Rightarrow,\cdots\) (analogous to \(+,-,\times,\div,\cdots\)). Our new ``equality'' becomes \textit{logical equivalence} (\(\equiv\) versus \(=\)). Our idea of \textit{variables} stays the same. Interestingly, though, our input set \(\B\) is \textit{finite} (unlike the countably infinite \(\Z\) and the uncountable \(\R\)), which gives us a nice way to actually \textit{write out all possibilities} for the inputs and outputs of our operators. With this in mind, let us push onward to propositional logic. \subsection{Truth Tables and Logical Connectives} Before we dive into the logical connectives, let's study the notion of a truth table. This will help us fully understand the logical connectives. \begin{defn}[Truth Table] \index{Truth Table} A table that shows us all truth-value possibilities. For example, with two propositions \(p\) and \(q\): \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \textit{some compound proposition} \\ \hline F & F & T/F \\ F & T & T/F \\ T & F & T/F \\ T & T & T/F \end{tabular} \end{center} \end{defn} Now we can begin our study of the logical connectives. The following definitions explain the intuition behind the logical connectives, and present their associated truth tables. \begin{defn}[And \(\land\)] Also known as the \textit{conjunction}. Logical connective that evaluates to true when the propositions that it connects are both true. If either proposition is false, then \textit{and} evaluates to false. To remember: \textit{prop 1} \textbf{and} \textit{prop 2} must \textit{both} be true. Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(p \land q\) \\ \hline F & F & F \\ F & T & F \\ T & F & F \\ T & T & T \end{tabular} \end{center} \textit{Notice the only row that evaluates to true is when both propositions are true} \end{defn} \begin{defn}[Or \(\lor\)] Also known as the \textit{disjunction}. Logical connective that evaluates to true when either of the propositions that it connects is true (at least 1 of the connected propositions is true). If both propositions are false, then \textit{or} evaluates to false. \textbf{Note}: if both propositions are true, then \textit{or} still evaluates to true. To remember: either \textit{prop 1} \textbf{or} \textit{prop 2} must be true.
Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(p \lor q\) \\ \hline F & F & F \\ F & T & T \\ T & F & T \\ T & T & T \end{tabular} \end{center} \textit{Notice the only row that evaluates to false is when both propositions are false} \end{defn} \begin{defn}[Not \(\lnot\), \(\sim\)] Also known as the \textit{negation}. Logical connective that flips the truth value of the proposition to which it is connected. Unlike \textit{and} and \textit{or}, \textit{not} only affects 1 proposition. Truth table: \begin{center} \begin{tabular}{c|c} \(p\) & \(\lnot p\) \\ \hline F & T \\ T & F \\ \end{tabular} \end{center} \end{defn} \begin{defn}[Implication/Conditional \(\Rightarrow\), \(\rightarrow\)] Logical connective that reads as an \textit{if-then} statement. The implication must be false if the first proposition is true and the implied (connected/second) proposition is false. Otherwise it is true. Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(p \Rightarrow q\) \\ \hline F & F & T \\ F & T & T \\ T & F & F \\ T & T & T \end{tabular} \end{center} Note: the direction of an implication can be flipped: \(p \Leftarrow q\) is the same as \(q \Rightarrow p\) \end{defn} Let's try to understand the truth table for the implication statement before we continue. We present two examples that attempt to form an intuitive analogy to the implication. \begin{example} Think of the implication as a vending machine. \(p\) is the statement \textit{we put money into the vending machine}, and \(q\) is the statement \textit{we received a snack from the vending machine}. Notice that the statements do not necessarily depend on each other. We examine the four cases and see when we are \textit{unhappy}: \begin{enumerate} \item \(p\) is \textbf{false} and \(q\) is \textbf{false} -- we did not put in money, and we did not get a snack, so we remain happy (normal operations) \item \(p\) is \textbf{false} and \(q\) is \textbf{true} -- we did not put in money, and we did get a snack, so we are very very happy (free snack!) \item \(p\) is \textbf{true} and \(q\) is \textbf{false} -- we did put in money, and we did not get a snack, so we are very very unhappy (we got robbed!) \item \(p\) is \textbf{true} and \(q\) is \textbf{true} -- we did put in money, and we did get a snack, so we are happy (normal operations) \end{enumerate} When we are unhappy, then the implication statement is false. Otherwise it is true (we are not \textit{unhappy}). \end{example} \begin{example} Think of the implication in the lens of a program. You want to evaluate whether your program \textit{makes sense}. Here is the example program from the statement \(p \Rightarrow q\): \begin{lstlisting} (...) if (p is true) { <run body code if q is true> } (...) \end{lstlisting} The body code is run only if \(q\) is true. Now let's examine the 4 cases and see whether the program makes sense: \begin{enumerate} \item \(p\) is \textbf{false} and \(q\) is \textbf{false} -- the program does not go into the body of the if-statement and hence makes sense \item \(p\) is \textbf{false} and \(q\) is \textbf{true} -- the program again does not go into the body of the if-statement and hence makes sense (regardless of the value of \(q\)) \item \(p\) is \textbf{true} and \(q\) is \textbf{false} -- the program goes into the body of the if-statement but since \(q\) is false the program does not evaluate the body code. 
This does not make sense \item \(p\) is \textbf{true} and \(q\) is \textbf{true} -- the program goes into the body of the if-statement and evaluates the body code. This makes sense \end{enumerate} When the code evaluator makes sense, then the implication statement is true. \end{example} % todo add the 'promise' example? \begin{defn}[Bi-conditional \(\Leftrightarrow\), \(\leftrightarrow\)] Logical connective that reads as an \textit{if and only if} statement. This means that both propositions must imply each other. For the bi-conditional to be true, both propositions must either be true or false. Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(p \Leftrightarrow q\) \\ \hline F & F & T \\ F & T & F \\ T & F & F \\ T & T & T \end{tabular} \end{center} \end{defn} Some more complicated ones: \begin{defn}[Exclusive Or (\textit{Xor}) \(\oplus\)] Logical connective that evaluates to true when \textit{only} one of the two propositions that it connects is true. If both propositions are true or false, then \textit{xor} evaluates to false. The exclusive part means we \textit{exclude} the \textit{or} case when both propositions are true. Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(p \oplus q\) \\ \hline F & F & F \\ F & T & T \\ T & F & T \\ T & T & F \end{tabular} \end{center} \end{defn} \begin{defn}[Exclusive Nor (\textit{Xnor}) \(\otimes\)] Logical connective that negates the \textit{xor}. It is just an \textit{xor} connective appended with a \textit{not} connective. Truth table: \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(\lnot (p \oplus q) \equiv (p \otimes q)\) \\ \hline F & F & T \\ F & T & F \\ T & F & F \\ T & T & T \end{tabular} \end{center} \end{defn} Now an example: \exsol{ Translate the following statement into a propositional logic statement: \textit{exclusively either the weather rains or students wear rain jackets}. }{ Let \(r\) be the proposition \textit{the weather rains} and \(j\) be the proposition \textit{students wear rain jackets}. Then the statement becomes \(r \oplus j\) } \subsection{Boolean Algebra} The motivation behind Boolean algebra is that we want to take complicated compound propositional statements and simplify them. If we notice that a variable does not affect the final output, then getting rid of that variable cuts the amount of truth-value possibilities (truth-table rows) in half. \begin{defn}[Boolean Statement] \index{Boolean Statement} A statement that is exclusively either true or false \end{defn} Some handy notations: \begin{defn}[Equivalence \(\equiv\)] Logical equivalence says that the two connected statements are logically the same. You can think of this notation as the \textit{equals} sign. Equality is poorly defined for Boolean expressions, so we use the equivalence notation instead \end{defn} \begin{defn}[Tautology \(t\) \textit{or} \(\mathbf{T}\)] A proposition that is always true \end{defn} \begin{defn}[Contradiction \(c\) \textit{or} \(\mathbf{F}\)] A proposition that is always false \end{defn} We provide a handful of helpful theorems to aid in your Boolean algebra simplifications. You do not need to memorize these theorems -- they will be given to you as a table. 
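Because \(\B\) has only two elements, each of the theorems below can also be checked mechanically: write the connectives as small functions and compare both sides of an equivalence on every row of the truth table. Here is a minimal sketch of that idea in Python (the helper names \texttt{land}, \texttt{lor}, \texttt{implies}, \texttt{iff}, \texttt{xor}, and \texttt{truth\_table} are our own, not part of the course material):
\begin{lstlisting}
from itertools import product

# Each connective from the previous subsection as a tiny Python function.
def lnot(p):       return not p
def land(p, q):    return p and q
def lor(p, q):     return p or q
def implies(p, q): return (not p) or q
def iff(p, q):     return p == q
def xor(p, q):     return p != q

def truth_table(statement, n):
    # Print every row of the truth table of an n-variable statement.
    for row in product([False, True], repeat=n):
        print(*row, statement(*row))

truth_table(implies, 2)   # reproduces the table for the conditional

# Spot-check De Morgan's law on all four rows.
print(all(lnot(land(p, q)) == lor(lnot(p), lnot(q))
          for p, q in product([False, True], repeat=2)))
\end{lstlisting}
Swapping in any other pair of statements lets you test the remaining equivalences in exactly the same way.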
\marginpar{See appendix \ref{appendix:tables}} \begin{thm}[Commutativity] For any propositions \(p\) and \(q\) the \textbf{and} and \textbf{or} operations are commutative: \[p \lor q \equiv q \lor p\] \[p \land q \equiv q \land p\] \end{thm} \begin{thm}[Associativity] For any propositions \(p\), \(q\), and \(r\) the \textbf{and} and \textbf{or} operations are associative: \[(p \lor q) \lor r \equiv p \lor (q \lor r)\] \[(p \land q) \land r \equiv p \land (q \land r)\] \end{thm} \begin{thm}[Distributivity] For any propositions \(p\), \(q\), and \(r\) the \textbf{and} and \textbf{or} operations are distributive: \[p \land (q \lor r) \equiv (p \land q) \lor (p \land r)\] \[p \lor (q \land r) \equiv (p \lor q) \land (p \lor r)\] \end{thm} \begin{thm}[Identity] For any proposition \(p\) the following hold: \[p \lor c \equiv p\] \[p \land t \equiv p\] \end{thm} \begin{thm}[Negation] For any proposition \(p\) the following hold: \[p \lor \lnot p \equiv t\] \[p \land \lnot p \equiv c\] \end{thm} \begin{thm}[Double Negation] For any proposition \(p\) the following holds: \[\lnot (\lnot p) \equiv p\] \end{thm} \begin{thm}[Idempotence] For any proposition \(p\) the following hold: \[p \lor p = p\] \[p \land p = p\] \end{thm} \begin{thm}[De Morgan's] For any propositions \(p\) and \(q\) the following hold: \[\lnot (p \lor q) = \lnot p \land \lnot q\] \[\lnot (p \land q) = \lnot p \lor \lnot q\] \end{thm} \begin{thm}[Universal Bound] For any proposition \(p\) the following hold: \[p \lor t \equiv t\] \[p \land c \equiv c\] \end{thm} \begin{thm}[Absorption] For any propositions \(p\) and \(q\) the following hold: \[p \lor (p \land q) \equiv p\] \[p \land (p \lor q) \equiv p\] \end{thm} \begin{thm}[Negation of Tautology and Contradiction] The following hold: \[\lnot t \equiv c\] \[\lnot c \equiv t\] \end{thm} Here are two important theorems that you will use throughout your proofs in this course. The first theorem says that for a bi-conditional both propositions must imply each other. The second theorem gives an equivalence between the implication and \textit{or} connective. The proofs of these theorems are left as an exercise to the reader. \begin{thm}[Bi-conditional to Implication] \label{bicond-to-imp} For any propositions \(p\) and \(q\) the following hold: \[p \Leftrightarrow q \equiv (p \Rightarrow q) \land (q \Rightarrow p)\] \end{thm} \begin{thm}[Implication to Disjunction] \label{imp-to-disj} For any propositions \(p\) and \(q\) the following holds: \[p \Rightarrow q \equiv \lnot p \lor q\] \end{thm} Now some examples of Boolean statement simplification. % todo \exsol{ Simplify the following expression: \((p \Rightarrow q) \Rightarrow r\) }{ \begin{align*} (p \Rightarrow q) \Rightarrow r &\equiv \big(\lnot (p \Rightarrow q)\big) \lor r & \text{Implication to Disjunction} \\ &\equiv \big(\lnot ((\lnot p) \lor q)\big) \lor r & \text{Implication to Disjunction} \\ &\equiv ((\lnot (\lnot p)) \land (\lnot q)) \lor r & \text{De Morgan's} \\ &\equiv (p \land (\lnot q)) \lor r & \text{Double Negation} \end{align*} } Notice how we reference a theorem in each step. This allows us to fully explain our equivalence, keeps us from making mistakes, and ensures our equivalence is valid. 
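The same brute-force idea gives a quick sanity check of the worked example: two statements are logically equivalent exactly when they agree on every assignment of their variables. Below is a small Python sketch (the \texttt{equivalent} helper and the lambda names are our own) confirming that \((p \Rightarrow q) \Rightarrow r \equiv (p \land (\lnot q)) \lor r\):
\begin{lstlisting}
from itertools import product

def implies(p, q):
    return (not p) or q

def equivalent(f, g, n):
    # Logical equivalence: f and g agree on all 2**n assignments.
    return all(f(*row) == g(*row)
               for row in product([False, True], repeat=n))

lhs = lambda p, q, r: implies(implies(p, q), r)   # (p => q) => r
rhs = lambda p, q, r: (p and (not q)) or r        # (p AND (NOT q)) OR r

print(equivalent(lhs, rhs, 3))   # prints True
\end{lstlisting}
A check like this does not replace the algebraic derivation, since it cannot tell you which theorems to cite, but it is a cheap way to catch a slipped negation.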
\exsol{ Simplify the following expression: \((p \otimes q) \land p\) }{ \begin{align*} (p \otimes q) \land p &\equiv \lnot (p \oplus q) \land p & \text{XNOR equivalence} \\ &\equiv \lnot \big((p \land (\lnot q)) \lor ((\lnot p) \land q)\big) \land p & \text{XOR equivalence} \\ &\equiv \big( \lnot (p \land (\lnot q)) \land \lnot ((\lnot p) \land q)\big) \land p & \text{De Morgan's} \\ &\equiv \big( ((\lnot p) \lor \lnot(\lnot q)) \land (\lnot (\lnot p) \lor (\lnot q))\big) \land p & \text{De Morgan's} \\ &\equiv \big( ((\lnot p) \lor q) \land (p \lor (\lnot q)) \big) \land p & \text{Double Negation} \\ % &\equiv ((\lnot p) \lor q) \land \big( (p \lor (\lnot q)) \land p \big) & \text{Associativity} \\ &\equiv ((\lnot p) \lor q) \land \big(p \land (p \lor (\lnot q)) \big) & \text{Commutativity} \\ &\equiv ((\lnot p) \lor q) \land p & \text{Absorption} \\ % &\equiv p \land ((\lnot p) \lor q) & \text{Commutativity} \\ &\equiv (p \land (\lnot p)) \lor (p \land q) & \text{Distributivity} \\ &\equiv c \lor (p \land q) & \text{Negation} \\ &\equiv p \land q & \text{Identity} \end{align*} } Note in the preceding example that we used equivalences between the \textit{xnor} to the \textit{xor}, and the \textit{xor} to the \textit{or}. We will discuss later exactly how we deduced these equivalences. Now that we understand some equivalences, we can motivate some definitions relating to the implication. \begin{defn}[Converse] The converse of \(p \Rightarrow q\) is \(q \Rightarrow p\). Obtain this by reversing the arrow direction. \end{defn} \begin{defn}[Inverse] The inverse of \(p \Rightarrow q\) is \((\lnot p) \Rightarrow (\lnot q)\). Obtain this by negating both propositions. \end{defn} \begin{defn}[Contrapositive] The contrapositive of \(p \Rightarrow q\) is \((\lnot q) \Rightarrow (\lnot p)\). Obtain this by reversing the arrow direction, and negating both propositions. Or, take both the converse and inverse. \end{defn} \begin{defn}[Negation] The negation of \(p \Rightarrow q\) is \(\lnot (p \Rightarrow q)\). Obtain this by negating the entire implication. \end{defn} Now we can use our equivalencies to show some important facts about these definitions. \begin{thm}[Contraposition Equivalence] \[p \Rightarrow q \equiv (\lnot q) \Rightarrow (\lnot p)\] \end{thm} We leave the proof as an exercise. This theorem will serve us well in our study of Number Theory. The contrapositive indirect proof technique relies on this fact. \begin{thm}[The Converse Error] \[(p \Rightarrow q) \not\Rightarrow (q \Rightarrow p)\] \end{thm} \begin{proof} We want to show that if you have \(p \Rightarrow q\), then you do not necessarily have \(q \Rightarrow p\). There are many ways to show this, but we will start by simply examining cases of \(p\) and \(q\). Consider \(p \equiv \mathbf{F}\) and \(q \equiv \mathbf{T}\). Then \(p \Rightarrow q \equiv \mathbf{F} \Rightarrow \mathbf{T} \equiv \mathbf{T}\). But then \(q \Rightarrow p \equiv \mathbf{T} \Rightarrow \mathbf{F} \equiv \mathbf{F}\). Then overall we have \((p \Rightarrow q) \Rightarrow (q \Rightarrow p) \equiv \mathbf{T} \Rightarrow \mathbf{F} \equiv \mathbf{F}\). So we've shown that if you have \(p \Rightarrow q\), then you do not necessarily have \(q \Rightarrow p\). \end{proof} \begin{thm}[The Inverse Error] \[(p \Rightarrow q) \not\Rightarrow ((\lnot p) \Rightarrow (\lnot q))\] \end{thm} \begin{rem} A similar proof can be made for this theorem. 
Find one truth-value pairing for \(p\) and \(q\) such that \(p \Rightarrow q\) is true, but \((\lnot p) \Rightarrow (\lnot q)\) is false. \end{rem} \subsection{Circuits} Propositions have two possible values: true and false. If we set true to mean \textit{on} and false to mean \textit{off}, then we can translate our Boolean statements into logical circuits. To do this, think of each logical connective as a \textit{gate}. Similar to the logical connectives, a gate takes one or more inputs and returns some output. The inputs and outputs are all 1s and 0s (\textit{on}s and \textit{off}s -- true and false). Circuits are the bare bones of computers, so it is necessary that you understand the basics. For computer engineers, you must know circuits by heart. \begin{defn}[Circuit] Representations of Boolean statements into electronic components \end{defn} Boolean variables can be exclusively either true or false; this is analogous to electric wires being exclusively either on or off. We let a tautology be equivalent to a \textit{power source}, which is a wire that is always \textbf{on}. Similarly we let a contradiction be equivalent to \textbf{off}, or a wire receiving no power. The following gates are exactly equivalent to their logic counterparts. We thus only include the gate picture. \begin{defn}[And Gate -- \textit{conjunction}] Picture: \begin{center} \begin{circuitikz} \draw (0,0) node[and port] () {}; \end{circuitikz} \end{center} To remember the \textit{and} gate, note that the picture looks like a \textbf{D}, which corresponds to the D in AN\textbf{D} \end{defn} \begin{defn}[Or Gate -- \textit{Disjunction}] Picture: \begin{center} \begin{circuitikz} \draw (0,0) node[or port] () {}; \end{circuitikz} \end{center} \end{defn} \begin{defn}[Not Gate -- \textit{Negation}] Picture: \begin{center} \begin{circuitikz} \draw (0,0) node[not port] () {}; \end{circuitikz} \end{center} \end{defn} \begin{rem} When we have a gate that has a \textit{not} after it, we can simplify the gate by just adding a circle to the output. Example (the \textit{nand} gate is the negation of the \textit{and} gate): \begin{multicols}{3} \begin{center} \begin{circuitikz} \draw (0,0) node[and port] (and1) {} (1,0) node[not port] (not1) {} (and1.out) -- (not1.in); \end{circuitikz} \end{center} \begin{center} becomes \end{center} \begin{center} \begin{circuitikz} \draw (0,0) node[nand port] (and1) {}; \end{circuitikz} \end{center} \end{multicols} \end{rem} \begin{defn}[Xor Gate (\textit{eXclusive Or gate})] Picture: \begin{center} \begin{circuitikz} \draw (0,0) node[xor port] () {}; \end{circuitikz} \end{center} \end{defn} We can think of truth tables in an equivalent fashion, where 1 is true and 0 is false. \exsol{ Write the truth table for the \textit{nand} gate. }{ \mbox{} \begin{center} \begin{tabular}{c|c|c} \(p\) & \(q\) & \(\lnot (p \land q)\) \\ \hline 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{tabular} \end{center} } You can use 1/0 or T/F, whichever you prefer. If an assignment specifies a particular key, then use the one specified. Later when we discuss different number systems, you will notice a correlation between binary numbers and truth tables with 1/0s. This makes it easy to construct a truth table very quickly for a given number of inputs. \subsubsection{Circuit Addition} We can also create circuits that add numbers. For a discussion of different number bases, see section \ref{diff-num-bases}. Can we build a circuit that adds two single-digit binary numbers?
There are only 4 possibilities for adding single-digit numbers, so let's examine what a truth table for this process might look like (all numbers are in base-2): \begin{center} \begin{tabular}{cc|c} input \(a\) & input \(b\) & output \(a + b\) \\ \midrule 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 10 \\ \end{tabular} \end{center} In the output, we can always append zeros to the beginning of the binary number without changing the actual number: \begin{center} \begin{tabular}{cc|c} input \(a\) & input \(b\) & output \(a + b\) \\ \midrule 0 & 0 & 00 \\ 0 & 1 & 01 \\ 1 & 0 & 01 \\ 1 & 1 & 10 \\ \end{tabular} \end{center} Now let's separate our output column into two columns -- the sum-bit (right) and the carry-bit (left): \begin{center} \begin{tabular}{cc|cc} input \(a\) & input \(b\) & carry & sum \\ \midrule 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ \end{tabular} \end{center} We know how to generate Boolean expressions for the output columns: \begin{align*} \text{sum} &\equiv a \oplus b \\ \text{carry} &\equiv a \land b \end{align*} Which gives us the \textbf{half adder}: \begin{boxx} Adding two bits \(a+b = cs\) is equivalent to the circuit \begin{center} \begin{circuitikz} \draw (0,0) node (xor1) [xor port] {} (0,1.5) node (and1) [and port] {} (and1.in 1) node (a) [anchor=east,xshift=-1cm] {\(a\)} (xor1.in 2) node (b) [anchor=east,xshift=-1cm] {\(b\)}; \draw (and1.in 1) -- (a); \draw (and1.in 2) -- (b); \draw (xor1.in 1) -- (a); \draw (xor1.in 2) -- (b); \draw (and1.out) node (c) [anchor=west,xshift=1cm] {carry} (xor1.out) node (s) [anchor=west,xshift=1cm] {sum}; \draw (and1.out) -- (c); \draw (xor1.out) -- (s); \node [xshift=.2cm] at (a) {\textbullet}; \node [xshift=.2cm] at (b) {\textbullet}; \node [xshift=-.5cm] at (c) {\textbullet}; \node [xshift=-.5cm] at (s) {\textbullet}; \end{circuitikz} \end{center} \end{boxx} Now we can add two 1-bit numbers together. What if we want to add multiple-bit numbers together? Consider adding two 2-bit numbers: \begin{center} \begin{tabular}{ccc} & \(c\) & \\ & \(x\) & \(y\) \\ \(+\) & \(z\) & \(w\) \\ \midrule \(o_2\) & \(o_1\) & \(s\) \end{tabular} \end{center} We see here that after adding \(y+w\) we are left with a carry bit \(c\) for our addition \(x+z\). How do we add three bits \(c+x+z\)? Well, we can separate it into two 1-bit additions: \(c+x\) which, via a half-adder, yields a sum bit \(s_1\) and carry bit \(c_1\), then \(s_1 + z\) which, via another half-adder, yields a sum bit \(s_2\) and carry bit \(c_2\). Then, we can let \(o_1 = s_2\). Unfortunately this leaves us two carry bits that somehow need to be combined into a final carry bit. Consider now the truth-table for adding three bits: \begin{center} \begin{tabular}{ccc|c|ccc} \(c\) & \(x\) & \(z\) & \(c+x+z\) & carry\((c+x)\) & sum\((c+x) = s_1\) & carry\((s_1+z)\) \\ \midrule 0 & 0 & 0 & 00 & 0 & 0 & 0 \\ 0 & 0 & 1 & 01 & 0 & 0 & 0 \\ 0 & 1 & 0 & 01 & 0 & 1 & 0 \\ 0 & 1 & 1 & 10 & 0 & 1 & 1 \\ 1 & 0 & 0 & 01 & 0 & 1 & 0 \\ 1 & 0 & 1 & 10 & 0 & 1 & 1 \\ 1 & 1 & 0 & 10 & 1 & 0 & 0 \\ 1 & 1 & 1 & 11 & 1 & 0 & 0 \\ \end{tabular} \end{center} Now notice the carry bit in the output column \(c+x+z\) is the same as OR-ing the two carry bits from \(c+x\) and \(s_1+z\). This completes our circuit for adding three bits. 
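Before wiring anything up, we can confirm the derivation above in software. The sketch below is a direct Python transcription of the gates involved: the half adder we just built, and a three-bit stage made of two half adders with their carry bits OR-ed together (the text names this stage in the next paragraph). The function names are our own.
\begin{lstlisting}
from itertools import product

def half_adder(a, b):
    # sum bit is XOR, carry bit is AND -- the two gates derived above
    return a ^ b, a & b                # (sum, carry)

def add_three_bits(a, b, c):
    # chain two half adders, then OR their carry bits together
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2                 # (sum, carry-out)

# Reproduce and check the three-bit truth table from the text.
for c, x, z in product([0, 1], repeat=3):
    s, carry = add_three_bits(c, x, z)
    assert 2 * carry + s == c + x + z  # the stage really computes c+x+z
    print(c, x, z, "->", carry, s)
\end{lstlisting}
Chaining one half adder with a sequence of these three-bit stages, one per column, gives the multi-bit adders discussed next.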
We call this the \textbf{full adder}: \begin{boxx} Adding three bits \(a+b+c = os\) is equivalent to the circuit \begin{center} \begin{circuitikz} \draw (0,0.75) node[draw,minimum width=2cm,minimum height=1cm] (HA1) {Half Adder} (3.5,0) node[draw,minimum width=2cm,minimum height=1cm] (HA2) {Half Adder} ($(HA1.west)!0.75!(HA1.north west)$) coordinate (ha1in1) ($(HA1.west)!0.75!(HA1.south west)$) coordinate (ha1in2); \draw (ha1in1) node (a) [anchor=east,xshift=-0.5cm] {\(a\)} (ha1in2) node (b) [anchor=east,xshift=-0.5cm] {\(b\)} (ha1in1) -- (a) (ha1in2) -- (b); \node [xshift=.2cm] at (a) {\textbullet}; \node [xshift=.2cm] at (b) {\textbullet}; \draw ($(HA1.east)!0.75!(HA1.north east)$) coordinate (ha1out1) ($(HA1.east)!0.75!(HA1.south east)$) coordinate (ha1out2); \draw (ha1out1) node (c1) [anchor=west,xshift=4cm] {c\(_1\)} (ha1out2) node (s1) [anchor=west,xshift=0.35cm] {sum} (ha1out1) -- (c1) (ha1out2) -- (s1); %\node [xshift=-.2cm] at (c1) {\textbullet}; \draw ($(HA2.west)!0.75!(HA2.north west)$) coordinate (ha2in1) ($(HA2.west)!0.75!(HA2.south west)$) coordinate (ha2in2); \draw (s1) -- (ha2in1) (ha2in2) node (c) [anchor=east,xshift=-4cm] {\(c\)} (ha2in2) -- (c); \node [xshift=.2cm] at (c) {\textbullet}; \draw ($(HA2.east)!0.75!(HA2.north east)$) coordinate (ha2out1) ($(HA2.east)!0.75!(HA2.south east)$) coordinate (ha2out2); \draw (ha2out1) node (c2) [anchor=west,xshift=0.5cm] {c\(_2\)} (ha2out2) node (s) [anchor=west,xshift=2.65cm] {\(s\)} (ha2out1) -- (c2) (ha2out2) -- (s); \node [xshift=-.2cm] at (s) {\textbullet}; \draw (7,0.75) node (or1) [or port] {} (or1.in 1) -- (c1) (or1.in 2) -- (c2) (or1.out) node (o) [anchor=west] {\(o\)}; \node [xshift=-.2cm] at (o) {\textbullet}; \end{circuitikz} \end{center} \end{boxx} Putting these two structures, the half adder and full adder, together, we can construct a 2-bit adder, which solves our previous problem of doing \(xy + zw = o_2o_1s\): \begin{boxx} Adding three bits \(a+b+c = os\) is equivalent to the circuit \begin{center} \begin{circuitikz} \draw (0,0.75) node[draw,minimum width=2cm,minimum height=1cm] (HA1) {Half Adder} (3.75,-0.2) node[draw,minimum width=2cm,minimum height=1.5cm] (FA1) {Full Adder} ($(HA1.west)!0.75!(HA1.north west)$) coordinate (ha1in1) ($(HA1.west)!0.75!(HA1.south west)$) coordinate (ha1in2); \draw (ha1in1) node (y) [anchor=east,xshift=-0.5cm] {\(y\)} (ha1in2) node (w) [anchor=east,xshift=-0.5cm] {\(w\)} (ha1in1) -- (y) (ha1in2) -- (w); \node [xshift=.2cm] at (y) {\textbullet}; \node [xshift=.2cm] at (w) {\textbullet}; \draw ($(HA1.east)!0.75!(HA1.north east)$) coordinate (ha1out1) ($(HA1.east)!0.75!(HA1.south east)$) coordinate (ha1out2); \draw (ha1out1) node (s) [anchor=west,xshift=4.5cm] {\(s\)} (ha1out2) node (c1) [anchor=west,xshift=0.35cm] {carry} (ha1out1) -- (s) (ha1out2) -- (c1); \node [xshift=-.2cm] at (s) {\textbullet}; \draw ($(FA1.west)!0.75!(FA1.north west)$) coordinate (fa1in1) ($(FA1.west)!0.75!(FA1.south west)$) coordinate (fa1in2) ($(FA1.west)!0!(FA1.south west)$) coordinate (fa1in3); \draw (c1) -- (fa1in1) (fa1in2) node (x) [anchor=east,xshift=-4.25cm] {\(x\)} (fa1in2) -- (x) (fa1in3) node (z) [anchor=east,xshift=-4.25cm] {\(z\)} (fa1in3) -- (z); \node [xshift=.2cm] at (x) {\textbullet}; \node [xshift=.2cm] at (z) {\textbullet}; \draw ($(FA1.east)!0.75!(FA1.north east)$) coordinate (fa1out1) ($(FA1.east)!0.75!(FA1.south east)$) coordinate (fa1out2); \draw (fa1out1) node (o1) [anchor=west,xshift=0.75cm] { \(o_1\)} (fa1out2) node (o2) [anchor=west,xshift=0.75cm] { \(o_2\)} (fa1out1) -- (o1) (fa1out2) 
-- (o2); \node [xshift=-.3cm] at (o1) {\textbullet}; \node [xshift=-.3cm] at (o2) {\textbullet}; \end{circuitikz} \end{center} \end{boxx} This solves our 2-bit addition problem. What if we want to add 3-bit numbers? 4-bit numbers? \(n\)-bit numbers? Well, after the first bit column (adding two bits) we must add three bits. After this second column, we get a sum bit and a carry bit. If we tack on another column, which would make a 3-bit adder, then we add in the previous carry to the two new bits. This same process repeats for all further columns. So, tacking on another bit solely entails tacking on another full adder! \textit{Picture omitted}. \exsol{ How many half adders are required for an \(n\)-bit adder? }{ Note, here we implicitly assumed \(n > 0\). We need 1 half adder to start, and we need \(n-1\) full adders which each contain 2 half adders. This totals to \(2(n-1) + 1 = 2n-1\) half adders. } \subsection{Translations} You must know how to translate between circuits, Boolean statements, and truth tables. We start by motivating the translation between truth tables and Boolean statements. First, notice that we have already discussed how to make a truth table from a Boolean statement -- simply draw the truth table and fill in each row. Now we can focus on the reverse. We attempt to first motivate a process for this translation by showing a few examples. \begin{example} First recall the truth table for the conjunction: \begin{center} \begin{tabular}{cc|c} \(p\) & \(q\) & \(p \land q\) \\ \hline F & F & F \\ F & T & F \\ T & F & F \\ T & T & T \end{tabular} \end{center} Recall that we can extend the conjunction to take multiple inputs -- in this case, \textit{each} input \textit{must} be \textbf{True} for the output to be \textbf{True}. Now consider the following unknown table: \begin{center} \begin{tabular}{cc|c} \(p\) & \(q\) & unknown \\ \hline F & F & F \\ F & T & T \\ T & F & F \\ T & T & F \end{tabular} \end{center} This table is similar to the conjunction table in that it has \textit{one row} that outputs \textbf{True}. In the conjunction case, the assignments to \(p\) and \(q\) were both \textbf{True}. What are the assignments to the previous truth table? Well, the \(p\) input is \textbf{False} and the \(q\) input is \textbf{True}, as per the table. If we apply the conjunction to this row we have, we get the following table: \begin{center} \begin{tabular}{c|c|c|c} \(p\) & \(\lnot p\) & \(q\) & \((\lnot p) \land q\) \\ \hline F & T & F & F \\ F & T & T & T \\ T & F & F & F \\ T & F & T & F \end{tabular} \end{center} In the previous table, if we look at the middle two columns, we recover the conjunction with the \(p\) variable negated. \end{example} This is only half the story though. \begin{example} First recall the truth table for the disjunction: \begin{center} \begin{tabular}{cc|c} \(p\) & \(q\) & \(p \lor q\) \\ \hline F & F & F \\ F & T & T \\ T & F & T \\ T & T & T \end{tabular} \end{center} For this example, we only care about the middle two rows where \textit{either} variable is \textbf{True}. 
\end{example} These bring to light a simple 3-step process to translate from truth tables to Boolean statements: \begin{enumerate} \item Collect the rows whose \textit{output} is \textbf{True} \item For each of those rows \begin{enumerate} \item Look at the truth value assigned to each \textit{input} \item Construct a Boolean statement where, for each input \begin{itemize} \item If the assignment is \textbf{True} then use the variable itself \item If the assignment is \textbf{False} then use the \textit{negation} of the variable \end{itemize} \item Chain each input with \textbf{And} (\(\land\)) connectives \end{enumerate} \item Chain each row's statement with \textbf{Or} (\(\lor\)) connectives \end{enumerate} \exsol{ Find the unknown Boolean formula corresponding to the following truth table: \begin{center} \begin{tabular}{cc|c} \(p\) & \(q\) & unknown \\ \hline F & F & T \\ F & T & F \\ T & F & T \\ T & T & F \end{tabular} \end{center} }{ We follow the algorithm as before. The first and third rows return true in the output. In the first row, we see neither input is true, so we AND the negation of each input: \((\lnot p) \land (\lnot q)\). In the third row, we see the first input is true, while the second input is false, thus we get: \(p \land (\lnot q)\). OR-ing each statement together thus yields our unknown formula: \[((\lnot p) \land (\lnot q)) \lor (p \land (\lnot q))\] } \begin{rem} In our Boolean formula algorithm, we only focus on the true rows. If we wanted to also include the false rows, then we would need the statement for each false row to return false. We know how to get a statement that returns true exactly on that row, so we can simply take that statement and negate it! Since each of these negated statements must hold, we attach them to our formula with \(\land\) rather than \(\lor\). What happens, though, if we include the false rows this way? \end{rem} \exsol{ Include the false rows in the Boolean formula from the previous example. }{ We negate the second and fourth rows returned by the algorithm: \(\lnot ((\lnot p) \land q)\) and \(\lnot (p \land q)\). AND-ing these onto our formula gives \[\big(((\lnot p) \land (\lnot q)) \lor (p \land (\lnot q))\big) \land \lnot ((\lnot p) \land q) \land \lnot (p \land q)\] Let's simplify this statement (rules omitted): \begin{align*} &\equiv \big(((\lnot p) \land (\lnot q)) \lor (p \land (\lnot q))\big) \land (p \lor (\lnot q)) \land ((\lnot p) \lor (\lnot q)) \\ &\equiv \big((\lnot q) \land ((\lnot p) \lor p)\big) \land \big((\lnot q) \lor (p \land (\lnot p))\big) \\ &\equiv ((\lnot q) \land t) \land ((\lnot q) \lor c) \\ &\equiv (\lnot q) \land (\lnot q) \\ &\equiv \lnot q \end{align*} The original statement \(((\lnot p) \land (\lnot q)) \lor (p \land (\lnot q))\) simplifies to \(\lnot q\) in exactly the same way, so the two are logically equivalent. Nice, we have recovered the original statement! } \begin{rem} Including each false row in the Boolean statement generated by the algorithm -- so long as its statement is negated and attached with \(\land\) -- does not logically change the Boolean statement \end{rem} We have a name for the type of statement generated from our algorithm above: \begin{defn}[Disjunctive Normal Form\index{Disjunctive Normal Form}] Describes a Boolean statement that is a disjunction of conjunctions (an OR of AND-terms) -- abbreviated DNF \end{defn} If we flip each gate (AND goes to OR, OR goes to AND), then we get another important type of statement: \begin{defn}[Conjunctive Normal Form\index{Conjunctive Normal Form}] Describes a Boolean statement that is a conjunction of disjunctions (an AND of OR-clauses) -- abbreviated CNF \end{defn} Indeed, applying De Morgan's to each negated false-row statement above turns it into an OR-clause, so AND-ing those clauses together is one way to build a statement in CNF. Interestingly, any Boolean statement can be translated to an equivalent statement in CNF.
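The 3-step procedure above is mechanical enough to automate directly. Here is a minimal Python sketch that builds the DNF string from the true rows of a truth table; the way we represent a table (as a function from an assignment to True/False) and the helper name \texttt{to\_dnf} are our own choices.
\begin{lstlisting}
from itertools import product

def to_dnf(variables, table):
    # variables: names such as ["p", "q"]
    # table: maps one assignment of those variables to True/False
    terms = []
    for row in product([False, True], repeat=len(variables)):
        if not table(*row):
            continue                                # step 1: keep only the true rows
        literals = [v if value else "(~" + v + ")"  # step 2: the variable or its negation
                    for v, value in zip(variables, row)]
        terms.append("(" + " & ".join(literals) + ")")
    return " | ".join(terms)                        # step 3: OR the row statements

# The unknown table from the example above (true exactly when q is false).
print(to_dnf(["p", "q"], lambda p, q: not q))
# ((~p) & (~q)) | (p & (~q))
\end{lstlisting}
Doing the mirror image, negating each false row's statement and AND-ing the results, is one way to produce an equivalent statement in CNF.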
Namely, statements in DNF, which are easy to generate, can be translated into CNF. This is important for computational complexity theory -- specifically NP-completeness. There exists a problem in computer science which entails finding a set of truth-value assignments for \(n\) different Boolean variables which makes a Boolean statement in CNF return true (or, become \textit{satisfiable}). \subsection{Reasoning/Deductions} \label{section:reasoning} Propositional logic also allows us to \textit{reason} about things. \begin{defn}[Knowledge Base] A group of information that you know is true \end{defn} \begin{defn}[Reasoning] The process of deriving new information from a given knowledge base \end{defn} See the introduction of this chapter for an example. \subsubsection{Classical Rules of Deduction} We may refer to \textit{deductions} as \textit{inferences}. They are the same. \begin{defn}[Deductions] Using previously-known knowledge in your knowledge base to obtain/create new knowledge \end{defn} We have a whole list of useful deductions that are provably valid. As with our Boolean algebra theorems, you do not need to memorize these theorems -- they will be given to you as a table. \begin{thm}[Modus Ponens] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p\) \\ & \(p \Rightarrow q\) \\ \cline{2-2} \(\therefore\) & \(q\) \end{tabular} \end{center} \end{thm} \begin{thm}[Modus Tollens] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(\lnot q\) \\ & \(p \Rightarrow q\) \\ \cline{2-2} \(\therefore\) & \(\lnot p\) \end{tabular} \end{center} \end{thm} \begin{thm}[Disjunctive Addition] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p\) \\ \cline{2-2} \(\therefore\) & \(p \lor q\) \end{tabular} \end{center} \end{thm} \begin{thm}[Conjunctive Addition] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p,q\) \\ \cline{2-2} \(\therefore\) & \(p \land q\) \end{tabular} \end{center} \end{thm} \begin{thm}[Conjunctive Simplification] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p \land q\) \\ \cline{2-2} \(\therefore\) & \(p,q\) \end{tabular} \end{center} \end{thm} \begin{thm}[Disjunctive Syllogism] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p \lor q\) \\ & \(\lnot p\) \\ \cline{2-2} \(\therefore\) & \(q\) \end{tabular} \end{center} \end{thm} \begin{thm}[Hypothetical Syllogism] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p \Rightarrow q\) \\ & \(q \Rightarrow r\) \\ \cline{2-2} \(\therefore\) & \(p \Rightarrow r\) \end{tabular} \end{center} \end{thm} \begin{thm}[Resolution] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p \lor q\) \\ & \((\lnot q) \lor r\) \\ \cline{2-2} \(\therefore\) & \(p \lor r\) \end{tabular} \end{center} \end{thm} \begin{thm}[Division Into Cases] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \(p \lor q\) \\ & \(p \Rightarrow r\) \\ & \(q \Rightarrow r\) \\ \cline{2-2} \(\therefore\) & \(r\) \end{tabular} \end{center} \end{thm} \begin{thm}[Law of Contradiction] \mbox{} \begin{center} \begin{tabular}{c@{\,}l@{}} & \((\lnot p) \Rightarrow \mathbf{c}\) \\ \cline{2-2} \(\therefore\) & \(p\) \end{tabular} \end{center} \end{thm} You may be wondering how to prove these deductions are \textit{valid}. 
We have two equivalent methods: \begin{enumerate} \item Tautological implication \item Critical-row identification \end{enumerate} Consider an arbitrary deduction: \begin{center} \begin{tabular}{c@{\,}l@{}} & \(P_1\) \\ & \(P_2\) \\ & \(\cdots \cdots \cdots\) \\ & \(P_n\) \\ \cline{2-2} \(\therefore\) & \(Q\) \end{tabular} \end{center} To prove it valid, \textbf{Tautological implication} \begin{enumerate} \item Construct a new proposition \(A \Rightarrow B\) where \(A\) is a \textbf{conjunction of the premises} and \(B\) is the conclusion. For our arbitrary deduction, we would have \[\big( P_1 \land P_2 \land \cdots \land P_n \big) \Rightarrow Q\] \item Inspect this proposition in a truth-table. If the proposition is a tautology, then the deduction is valid. Otherwise, the deduction is invalid. So, to be valid we must have \[(A \Rightarrow B) \equiv \big( P_1 \land P_2 \land \cdots \land P_n \big) \Rightarrow Q \equiv t\] \end{enumerate} \textbf{Critical-row identification} \begin{enumerate} \item Construct a truth-table with columns for each proposition and for the conclusion. \textit{You may need extra columns, that is fine}. For our arbitrary deduction, we would have \begin{center} \begin{tabular}{cccc|c} \(P_1\) & \(P_2\) & \(\cdots\) & \(P_n\) & \(Q\) \\ \midrule F/T & F/T & \(\cdots\) & F/T & F/T \\ && \(\vdots\) && \\ T/T & F/T & \(\cdots\) & F/T & F/T \\ \end{tabular} \end{center} \item Identify the rows in which each \textbf{premise} is true. We call these rows \textbf{critical-rows}. \item For each critical-row, inspect the conclusion. If the conclusion is true in \textbf{every} critical-row, then the deduction is valid. Otherwise, the deduction is invalid. \end{enumerate} Examples following the above steps. \exsol{ Show that Modus Ponens is a valid rule of inference. }{ \textit{method 1 -- tautological implication} Our proposition we care about is \((p \land (p \Rightarrow q)) \Rightarrow q\), so we build the following truth-table with extraneous rows: \begin{center} \begin{tabular}{cc|cc|cc|c} \(p\) & \(q\) & \(p\) & \(p \Rightarrow q\) & \(p \land (p \Rightarrow q)\) & \(q\) & \((p \land (p \Rightarrow q)) \Rightarrow q\) \\ \midrule F & F & F & T & F & F & T \\ F & T & F & T & F & T & T \\ T & F & T & F & F & F & T \\ T & T & T & T & T & T & T \\ \end{tabular} \end{center} Inspect the last column. \begin{center} \begin{tabular}{cc|cc|cc|c} \(p\) & \(q\) & \(p\) & \(p \Rightarrow q\) & \(p \land (p \Rightarrow q)\) & \(q\) & \((p \land (p \Rightarrow q)) \Rightarrow q\) \\ \midrule F & F & F & T & F & F & \cellcolor{Melon}\color{green} \textbf{T} \\ F & T & F & T & F & T & \cellcolor{Melon}\color{green} \textbf{T} \\ T & F & T & F & F & F & \cellcolor{Melon}\color{green} \textbf{T} \\ T & T & T & T & T & T & \cellcolor{Melon}\color{green} \textbf{T} \\ \end{tabular} \end{center} In the case of Modus Ponens, the final column is a tautology, hence the deduction is valid. } \exsol{ Show that Modus Ponens is a valid rule of inference. }{ \textit{method 2 -- critical-row identification} Construct a truth table with each premise and conclusion: \begin{center} \begin{tabular}{cc|cc|c} \(p\) & \(q\) & \(p\) & \(p \Rightarrow q\) & \(q\) \\ \midrule F & F & F & T & F \\ F & T & F & T & T \\ T & F & T & F & F \\ T & T & T & T & T \\ \end{tabular} \end{center} Identify the critical-rows. 
\begin{center} \begin{tabular}{cc|cc|c} \(p\) & \(q\) & \(p\) & \(p \Rightarrow q\) & \(q\) \\ \midrule F & F & F & T & F \\ F & T & F & T & T \\ T & F & T & F & F \\ \rowcolor{Melon} T & T & \textbf{T} & \textbf{T} & T \\ \end{tabular} \end{center} Inspect the conclusion in each critical-row. \begin{center} \begin{tabular}{cc|cc|c} \(p\) & \(q\) & \(p\) & \(p \Rightarrow q\) & \(q\) \\ \midrule F & F & F & T & F \\ F & T & F & T & T \\ T & F & T & F & F \\ \rowcolor{Melon} T & T & \textbf{T} & \textbf{T} & \textbf{\color{green} T} \\ \end{tabular} \end{center} In the case of Modus Ponens, the conclusion is true in each critical row, hence the deduction is valid. } We leave it to the reader to understand why the two methods are equivalent. \subsubsection{Deducing Things} In a later section, we will see that this order of logic is not powerful enough to prove mathematical statements. For now, we can still do interesting things with a given knowledge base. \exsol{ Given the following knowledge base, deduce as much new information as possible using the following rules of inference: Modus Ponens, Modus Tollens, Hypothetical Syllogism, and Disjunctive Syllogism. \begin{center} \hfill \(a \Rightarrow b\) \hfill \(b \Rightarrow (\lnot d)\) \hfill \(e\) \hfill \(d \lor (\lnot e)\) \hfill \mbox{} \end{center} }{ \\ By Hypothetical Syllogism \(a \Rightarrow b\), \(b \Rightarrow (\lnot d)\), \hfill \(\therefore a \Rightarrow (\lnot d)\) By Disjunctive Syllogism \(d \lor (\lnot e)\), \(e\), \hfill \(\therefore d\) By Modus Tollens \(d\), \(a \Rightarrow (\lnot d)\), \hfill \(\therefore \lnot a\) By Modus Tollens \(d\), \(b \Rightarrow (\lnot d)\), \hfill \(\therefore \lnot b\) } \begin{rem} In the above example, we restricted the rules you could use. We did this mainly because Disjunctive Addition allows you to generate any new knowledge you like -- so long as you have one statement that is true, you can add in a disjunction infinitely many times. \end{rem} \begin{rem} In the above example, we could have translated \(d \lor (\lnot e) \equiv e \Rightarrow d\) and concluded \(d\) by Modus Ponens. This somewhat tells you that Disjunctive Syllogism and Modus Ponens are equivalent. \end{rem} \begin{rem} From an inconsistent database, anything follows. This is due to the law of contradiction. An \textit{inconsistent database} is one that contains a contradiction. Recall the law of contradiction: \((\lnot p) \Rightarrow c \equiv (\lnot (\lnot p)) \lor c \equiv p \lor c \equiv p\), using the Implication to Disjunction, Double Negation, and Identity theorems. Using Disjunctive Addition, we have the contradiction \(c\), \(\therefore A \lor c\), and by Identity, \(\therefore A\). \(A\) can be \textit{Anything}. \end{rem} In Artificial Intelligence, there exists an algorithm called \textbf{The Resolution Algorithm}. Essentially, it says to take a given knowledge base, translate each statement into conjunctive normal form, and then apply the Resolution rule of inference to the resulting disjunctive clauses as many times as possible. \section{Predicate Logic} Sometimes basic propositions are not enough to do what you want. In programming we can have functions that return true or false. We can do the same thing with logic -- we call this \textit{first-order} logic, or \textit{predicate} logic. Predicate logic includes all of propositional logic; however, it adds predicates and quantifiers. \begin{defn}[Predicate\index{Predicate}] A property that the subject of a statement can have. In logic, we represent this much like a function.
A predicate takes, as input, some element, and returns whether the inputted element has the specific property. \end{defn} \begin{example} We could use the predicate \(EVEN(x)\) to mean \(x\) is an even number. In this case, the predicate is \(EVEN(\cdot)\) \end{example} \begin{example} We could use the predicate \(P(y)\) to mean \(y\) is an integer multiple of 3. In this case, the predicate is \(P(\cdot)\) \end{example} \begin{rem} Predicates take \textbf{elements}. They do \textbf{not} take in other predicates. This is because predicates say whether the input element \textit{has} the property specified by the predicate -- true and false cannot have properties. In terms of programming, you can think of a predicate as a program method. For example, the \(EVEN(x)\) predicate might be implemented as follows: \begin{lstlisting} func EVEN(Entity x) -> bool { if IS_INTEGER(x) { return x % 2 == 0 } return false } \end{lstlisting} In this case, entities are \textit{objects} and true/false are \textit{Boolean primitives} (or, propositional statements, which can only be true or false). In contrast to, say, Java, a compiler for this code would not allow \textit{true/false} to be an object. A better example, \begin{lstlisting} class Foo extends Entity { bool isInteger bool isOdd Foo(Integer i) { this.isInteger = true this.isOdd = i % 2 == 1 } } func ODD(Entity x) -> bool { if x has type Foo { return x.isOdd } if IS_INTEGER(x) { return x % 2 == 1 } return false } \end{lstlisting} \end{rem} \begin{defn}[Quantifier\index{Quantifier}] A way to select a specific range of elements that get inputted to a predicate. We have two quantifiers: \begin{itemize} \item The Universal quantifier \(\forall\) \item The Existential quantifier \(\exists\) \end{itemize} The universal quantifier says to select \textbf{all} elements, and the existential quantifier says to select \textbf{at least one} element. \end{defn} \begin{defn}[Quantified Statement] A logical statement involving predicates and quantifiers. Syntax: \[(\text{quantifier } var \in D)[\text{statement involving predicates}]\] \end{defn} And now we can define: \begin{defn}[Predicate Logic] Also called \textit{first-order logic}, is a logic made up of quantified statements. \end{defn} \exsol{ Translate the following statements to predicate logic: \begin{enumerate} \item All people are mortal \item Even integers exist \item If an integer is prime then it is not even \end{enumerate} }{ \begin{enumerate} \item Denote \(P\) as the domain of people, and the predicate \(M(x)\) to mean \(x\) is mortal. Then the statement translates to \[(\forall p \in P)[M(p)]\] \item Denote \(\Z\) as the domain of integers, and the predicate \(EVEN(x)\) to mean \(x\) is even. Then the statement translates to \[(\exists x \in \Z)[EVEN(x)]\] \item Denote the predicate \(PRIME(y)\) to mean \(y\) is prime. Then the statement translates to \[(\forall a \in \Z)[PRIME(a) \Rightarrow \lnot EVEN(a)]\] \end{enumerate} } \subsection{Negating Quantified Statements} One may find useful to negate a given quantified statement. We present here how to do this, first with an English example, followed by a quantified example, followed by an algorithm. 
\begin{example} The following statement \begin{center} \textit{There is no student who has taken calculus.} \end{center} is equivalent to \begin{center} \textit{All students have not taken calculus.} \end{center} \end{example} \begin{example} The following statement \begin{center} \textit{Not all students have taken calculus.} \end{center} is equivalent to \begin{center} \textit{There is a student who has not taken calculus.} \end{center} \end{example} \begin{example} The following statement \[\lnot(\exists x \in D)[C(x)]\] is equivalent to \[(\forall x \in D)[\lnot C(x)]\] \end{example} \begin{example} The following statement \[\lnot(\forall x \in D)[C(x)]\] is equivalent to \[(\exists x \in D)[\lnot C(x)]\] \end{example} The generic algorithm for pushing the negation into a quantified statement: \begin{enumerate} \item Flip each quantifier \(\forall \rightarrow \exists\) and \(\exists \rightarrow \forall\) \item Apply the negation to the propositional part of the quantified statement, and simplify \begin{enumerate} \item If the inside contains another quantified statement, then recursively apply this algorithm \end{enumerate} \end{enumerate} \begin{rem} The domain and variable attached to any quantifier are \textbf{not} changed. \end{rem} \exsol{ Push the negation in as far as possible: \[\lnot (\forall x,y \in \Z)[(x < y) \Rightarrow (\exists m \in \Q)[x < m < y]]\] }{ \begin{align*} & \lnot (\forall x,y \in \Z)[(x < y) \Rightarrow (\exists m \in \Q)[x < m < y]] \\ \equiv & (\exists x,y \in \Z) \lnot [(x < y) \Rightarrow (\exists m \in \Q)[x < m < y]] \\ \equiv & (\exists x,y \in \Z) \lnot [\lnot (x < y) \lor (\exists m \in \Q)[x < m < y]] \\ \equiv & (\exists x,y \in \Z) [\lnot \lnot (x < y) \land \lnot (\exists m \in \Q)[x < m < y]] \\ \equiv & (\exists x,y \in \Z) [(x < y) \land (\forall m \in \Q) \lnot [x < m < y]] \\ \equiv & (\exists x,y \in \Z) [(x < y) \land (\forall m \in \Q) \lnot [x < m \land m < y]] \\ \equiv & (\exists x,y \in \Z) [(x < y) \land (\forall m \in \Q) [x \geq m \lor m \geq y]] \end{align*} } \begin{rem} We typically expect the final statement to contain no \(\lnot\) operators. \end{rem} \subsection{Quantified Rules of Inference} Again, \textit{deductions} and \textit{inferences} are the same. We present a handful of important \textit{quantified} rules of inference. \begin{thm}[Universal Instantiation] For a predicate \(P(\cdot)\) and some domain \(D\) with \(c \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \((\forall x \in D)[P(x)]\) \\ \cline{2-2} \(\therefore\) & \(P(c)\) \end{tabular} \end{center} As an example, if our domain consists of all dogs and Fido is a dog, then the above rule can be read as \begin{center} ``All dogs are cuddly'' ``Therefore Fido is cuddly'' \end{center} \end{thm} \begin{thm}[Universal Generalization] For a predicate \(P(\cdot)\) and some domain \(D\) for an arbitrary \(c \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \(P(c)\) \\ \cline{2-2} \(\therefore\) & \((\forall x \in D)[P(x)]\) \end{tabular} \end{center} This is most-often used in mathematics. 
As an example, if our domain consists of all dogs, then the above rule can be read as \begin{center} ``An arbitrary dog is cuddly'' (which in-turn applies to all dogs) ``Therefore all dogs are cuddly'' \end{center} \end{thm} \begin{thm}[Existential Instantiation] For a predicate \(P(\cdot)\) and some domain \(D\) for some element \(c \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \((\exists x \in D)[P(x)]\) \\ \cline{2-2} \(\therefore\) & \(P(c)\) \end{tabular} \end{center} As an example, if our domain consists of all dogs, then the above rule can be read as \begin{center} ``There is a dog who is cuddly'' ``Let's call that dog \(c\), and so \(c\) is cuddly'' \end{center} \end{thm} \begin{thm}[Existential Generalization] For a predicate \(P(\cdot)\) and some domain \(D\) for some element \(c \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \(P(c)\) \\ \cline{2-2} \(\therefore\) & \((\exists x \in D)[P(x)]\) \end{tabular} \end{center} As an example, if our domain consists of all dogs and Fido is a dog, then the above rule can be read as \begin{center} ``Fido is cuddly'' ``Therefore there is a dog who is cuddly'' \end{center} \end{thm} \begin{thm}[Universal Modus Ponens] For two predicates \(P(\cdot)\) and \(Q(\cdot)\), and some domain \(D\) with \(a \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \(P(a)\) \\ & \((\forall x \in D)[P(x) \Rightarrow Q(x)]\) \\ \cline{2-2} \(\therefore\) & \(Q(a)\) \end{tabular} \end{center} \end{thm} \begin{thm}[Universal Modus Tollens] For two predicates \(P(\cdot)\) and \(Q(\cdot)\), and some domain \(D\) with \(a \in D\), \begin{center} \begin{tabular}{c@{\,}l@{}} & \(\lnot Q(a)\) \\ & \((\forall x \in D)[P(x) \Rightarrow Q(x)]\) \\ \cline{2-2} \(\therefore\) & \(\lnot P(a)\) \end{tabular} \end{center} \end{thm} \subsection{Proving Things} Our familiar rules of inference are not strong enough to prove abstract mathematical statements. Typically we want our proof to apply to a whole \textit{set} of things (numbers). Now that we know about \textit{predicate logic}, we can apply our more powerful \textit{quantified} rules of inference to prove real mathematical statements. \exsol{ Using Universal Modus Ponens, verify the validity of the following proof: \begin{proof} Let \(m,n \in \Z\), and let \(m\) be even. Then \(m = 2p\) for some integer \(p\).\textsuperscript{\color{blue} (1)} Now, \begin{align*} m \cdot n &= (2p)n & \text{by substitution} \\ &= 2(pn)^{\text{\color{blue} (2)}} & \text{by associativity} \end{align*} Now, \(pn \in \Z\),\textsuperscript{\color{blue} (3)} so by definition of even \(2(pn)\) is even.\textsuperscript{\color{blue} (4)} Thus \(mn\) is even. 
\end{proof} }{ \\ \makebox[4mm]{\color{blue} (1)} \makebox[6mm]{} If an integer is even, then it equals twice some integer.\\ \makebox[4mm]{} \makebox[6mm]{} \(m\) is a particular integer, and it is even.\\ \makebox[4mm]{} \makebox[6mm]{\(\therefore\)} \(m\) equals twice some integer \(p\).\\ \makebox[4mm]{\color{blue} (2)} \makebox[6mm]{} If a quantity is an integer, then it is a real number.\\ \makebox[4mm]{} \makebox[6mm]{} \(p\) and \(n\) are both particular integers.\\ \makebox[4mm]{} \makebox[6mm]{\(\therefore\)} \(p\) and \(n\) are both real numbers.\\ \makebox[4mm]{} \makebox[6mm]{} For all \(a,b,c\), if \(a,b,c \in \R\) then \((ab)c = a(bc)\).\\ \makebox[4mm]{} \makebox[6mm]{} \(2\), \(p\), and \(n\) are all particular real numbers.\\ \makebox[4mm]{} \makebox[6mm]{\(\therefore\)} \((2p)n = 2(pn)\).\\ \makebox[4mm]{\color{blue} (3)} \makebox[6mm]{} For all \(u,v\), if \(u,v \in \Z\) then \(uv \in \Z\).\\ \makebox[4mm]{} \makebox[6mm]{} \(p\) and \(n\) are both particular integers.\\ \makebox[4mm]{} \makebox[6mm]{\(\therefore\)} \(pn \in \Z\).\\ \makebox[4mm]{\color{blue} (4)} \makebox[6mm]{} If a number equals twice some integer, then that number is even.\\ \makebox[4mm]{} \makebox[6mm]{} \(2(pn)\) equals twice the integer \(pn\).\\ \makebox[4mm]{} \makebox[6mm]{\(\therefore\)} \(2(pn)\) is even.\\ } Of course, we would never do a mathematical proof like this. In reality, you do this in your head automatically. Seeing this form, however, allows you to easily verify the \textbf{validity} of the proof. \section{Summary} \begin{itemize} \item Propositional logic contains the entirety of Boolean algebra and logic connectives, with True and False as the only inputs/outputs \item Predicate logic contains the entirety of propositional logic and uses functions along with entities \item Deriving knowledge from familiar rules entails mathematical proof \end{itemize} \section{Practice} \begin{enumerate} \item Answer the two logic puzzles presented in the introduction of this chapter. \item Translate the following statement into propositional logic: \textit{turn right then turn left}. \item Translate the following statement into propositional logic: \textit{if it is raining then everyone has an umbrella}. \item How can you quickly construct a truth table with all row possibilities? Use your technique to construct a truth table with 4 variables. \item How many rows does a truth table with \(n\) variables have? \item Prove theorem \ref{bicond-to-imp}. \item Prove theorem \ref{imp-to-disj}. \item Draw the circuit representation of theorem \ref{bicond-to-imp}. \item Prove the following rule valid or invalid: \begin{center} \begin{tabular}{c@{\,}l@{}} & \((a \land d) \Rightarrow b\) \\ & \(e\) \\ & \(b \Rightarrow (\lnot e)\) \\ & \((\lnot a) \Rightarrow f\) \\ & \((\lnot d) \Rightarrow f\) \\ \cline{2-2} \(\therefore\) & \(f\) \end{tabular} \end{center} \item Prove the following rule valid or invalid: \begin{center} \begin{tabular}{c@{\,}l@{}} & \((a \land d) \Rightarrow b\) \\ & \(e\) \\ & \(b \Rightarrow (\lnot e)\) \\ & \((\lnot a) \Rightarrow f\) \\ & \((\lnot d) \Rightarrow f\) \\ \cline{2-2} \(\therefore\) & \(b \Rightarrow e\) \end{tabular} \end{center} \item Push the negation inside the following statement as far as possible: \[\lnot (\forall x \in \R)(\exists m \in \Z)[(0 \leq x - m < 1) \Leftrightarrow (m = \floor{x})]\] \end{enumerate} %\section{Solutions} % %\begin{enumerate} % \item (1) Sam cannot make any cows, because Sam only starts with 1 cow. 
(2) We know knights always tell the truth. If we assume the speaker is a knight, then he will have lied about whom he is. Therefore, the speaker is a knave. Since knaves always lie, then we know the speaker lied, so the two people cannot both be knaves. Therefore, the second person is a knight. % \item This is a trick question since the statement does not have any true or false value! If you answered something like \(r \Rightarrow l\), good thinking, however this is incorrect. % \item One possibility: let \(r\) be the proposition \textit{it is raining} and \(u\) be the proposition \textit{everyone has an umbrella}, then the statement becomes \(r \Rightarrow u\). % \item \(2^n\) rows. % \item Examine the rows from right to left. In the first column, alternate T/F by 1 step. In the second column, alternate T/F by 2 steps. In the third column, alternate T/F by 4 steps. In the \(n\)th column, alternate T/F by \(2^{n-1}\) steps. The following example uses 1/0, however you should be able to translate it to T/F. % % \begin{center} % \begin{tabular}{cccc} % \(p\) & \(q\) & \(r\) & \(s\) \\ % \hline % 0 & 0 & 0 & 0 \\ % 0 & 0 & 0 & 1 \\ % 0 & 0 & 1 & 0 \\ % 0 & 0 & 1 & 1 \\ % 0 & 1 & 0 & 0 \\ % 0 & 1 & 0 & 1 \\ % 0 & 1 & 1 & 0 \\ % 0 & 1 & 1 & 1 \\ % 1 & 0 & 0 & 0 \\ % 1 & 0 & 0 & 1 \\ % 1 & 0 & 1 & 0 \\ % 1 & 0 & 1 & 1 \\ % 1 & 1 & 0 & 0 \\ % 1 & 1 & 0 & 1 \\ % 1 & 1 & 1 & 0 \\ % 1 & 1 & 1 & 1 % \end{tabular} % \end{center} % \item % \item %\end{enumerate} \end{document}
{ "alphanum_fraction": 0.6478576805, "avg_line_length": 40.8106583072, "ext": "tex", "hexsha": "17d68398af61586f1a429dbc2fb1c19c90bcae7e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-19T22:24:49.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-19T22:24:49.000Z", "max_forks_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "jugoodma/250-textbook", "max_forks_repo_path": "ch-logic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "jugoodma/250-textbook", "max_issues_repo_path": "ch-logic.tex", "max_line_length": 586, "max_stars_count": 5, "max_stars_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "jugoodma/250-textbook", "max_stars_repo_path": "ch-logic.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-27T14:39:11.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-22T03:33:30.000Z", "num_tokens": 21318, "size": 65093 }
\documentclass[11pt,a4paper]{article} \usepackage{od} \usepackage[utf8]{inputenc} \usepackage[main=english,russian]{babel} \title{Inventive problem solving using the\\ OTSM-TRIZ “TONGS” model} \author{Nikolai Khomenko, John Cooke} \date{2007 (?)} \begin{document} \maketitle \begin{quote} Source: \url{https://otsm-triz.org/sites/default/files/ready/tongs_en.pdf} \end{quote} \begin{abstract} One of the simplest tools in OTSM-TRIZ is the “TONGS” model. Despite being simple, the “TONGS” model provides a versatile way to frame and study an inventive technological problem. For TRIZ novices, the “TONGS” model can be trained and applied very rapidly. For more advanced TRIZ users, the “TONGS” model can set discrete TRIZ tools in context or can direct each process step in ARIZ (Algorithm for Inventive Problem Solving) and PFN (OTSM based Problem Flow Networks approach). In this paper we describe the “TONGS” model and apply it to an inventive problem situation, highlighting the benefits of this model. Finally, we describe some key applications for the “TONGS” model in education, problem solving and in the development of new OTSM-TRIZ tools. \emph{Keywords:} OTSM-TRIZ, TONGS model, Problem Solving \end{abstract} \section{Introduction and short history of the\\ TONGS problem solving model:} The “TONGS” model is the simplest and most accessible model from Classical TRIZ and it has the advantage that it can be learned quickly and used by people who are entirely new to TRIZ, without any deep learning of the theory and its tools. At the same time this model was historically the first one that appeared in the course of Classical TRIZ evolution, starting from the very simple methodology that was used in the 1940-50s. We believe it can be useful to introduce this old model to students with modern remarks and comments and that is why we mention in the paper some of the theoretical background and tools of Classical TRIZ and OTSM, linking this oldest model with Classical TRIZ and OTSM and the more recent problem solving process models and tools based on TRIZ and OTSM. What is surprising is that this model appears perfectly ready for practical applications of all three postulates of Classical TRIZ that were formulated later to provide a part of the theoretical background of Classical TRIZ. The name of the model first appeared in the course of OTSM evolution; this was done for educational purposes to help our students communicate efficiently about the various problem solving process models which are used in Classical TRIZ and OTSM. After the “TONGS“ problem solving process model first appeared in TRIZ in the 1940-s [1, 2], it evolved through the introduction of additional rules and procedures but was not fundamentally changed until after the mid 1970s when new problem solving process model was implemented for the first time in ARIZ 77. We named it the “HILL” model [3]. This model has a very different structure from the “TONGS” model but included it as a component. The next problem solving process model appeared in the middle of 1980-s. We named it “Problem Flow Model”. It was implemented in ARIZ-85C. Both previous models became components of the new model. In the course of the transition from Classical TRIZ to OTSM, it appeared that the “PROBLEM FLOW” model included the three previous models as components. Finally the “PROBLEM FLOW” model appeared as a component of the more advanced and universal OTSM Fractal Model of a problem solving process [4, 5]. 
This new model is now being used to create the third generation of the OTSM toolbox, including all previous models as components. We should mention here just one more model, which we call the “FUNNEL” model. This problem solving process model of Classical TRIZ is used for the integration of all the others into a unified system. It illustrates the process of narrowing the area of research we should conduct in order to develop a satisfactory solution.

We have provided this short history of the evolution of the TRIZ and OTSM problem solving process models to show why we propose to start the TRIZ educational process with the “TONGS” model. First of all, it is still an active tool that is a component of the more powerful models and tools. It is simple to learn and can be developed further to prepare students for deep learning of Altshuller’s ARIZ. Step by step the students can learn all of the TRIZ and OTSM models and how to implement them in particular situations. The “TONGS” model is also helpful in developing the many skills that are necessary in order to understand and learn the modern tools of Classical TRIZ and OTSM. The “TONGS” model can also be viewed as a “frame” for learning every single tool, its components and steps more deeply. This model is not only a problem solving tool in itself but also a tool for learning many other problem solving concepts, starting right away from the deeply philosophical but still practical background of classical TRIZ.

\section{How to apply the TONGS model}

According to the OTSM Axiom of a root of problems, fundamentally any problem situation can be described as a conflict between human desire and objective factors or natural laws. The TONGS model can be used to spell out the conflicting elements of any problem situation, drill down into the objective factors preventing us from solving the problem, and steer us towards a strong solution. Although the TONGS model is simple to apply and use, there are some key steps which should be followed to get the most advantage from the approach. The sequence of application is not critical, but it is important to be consistent and logical when using the model and to work systematically towards uncovering the core of the problem. One last point about the TONGS model is that it is intended to support an iterative process, where we use partial solutions to help us to explore the barriers which prevent us from completing our problem solving journey from our “departure point” to our “final destination”. The more competence users have in Classical TRIZ, OTSM and their related problem solving toolboxes, the less iteration they will need.

\begin{center}
\includegraphics[width=.95\textwidth]{./1.jpg}\\
\textbf{Figure 1:} General schema of The Tongs model of a problem solving process.
\end{center}

The key steps in the TONGS model are:

\textbf{2.1 State the Initial Situation (IS):} This should be a description in simple language (i.e. without professional jargon and terminology) of the initial problem situation. The statement should highlight the main Negative Effect of the problem situation -- in other words, it should describe what it is that seems, at the beginning, the most unsatisfactory aspect of the current situation. As a trivial example, we could consider the subject of cleaning dishes after a meal, in which case the main Negative Effect we might state is “it takes a lot of time to clean the dishes after a family meal”.
\textbf{2.2 State the Most Desirable Result (MDR):} This should be a description of the best outcome of the problem situation or an ideal solution. The stronger and more provocative we can make the MDR statement, the more useful it will be in guiding the subsequent problem solving stages. To generate a really stretching statement of the Most Desirable Result, we can imagine that we have in our hand a magic wand which can be waved over the problem situation to achieve a result that would usually seem totally impossible to achieve. Another way to think of the MDR is to state the positive result that is needed and which also eliminates the negative effect. To steer the analysis away from more complex solutions, it also helps to write the MDR statement in such a way that the system with the problem stays the same (or possibly becomes simpler) while the problem is solved completely. It is a good idea to use here the rules of the DTC operator to exaggerate the situation. Do not forget – we have a magic wand! Everything is possible at this stage of the problem analysis (according to the OTSM Axiom of Impossibility)! For the example of cleaning dishes, after exaggeration a bold statement of the MDR might be “the dishes clean themselves” (or “the dirt disappears by itself” -- OTSM-TRIZ has rules for making a choice among a set of alternative MDRs. Those rules can be learned later or right away, depending on the duration of the course).

\textbf{2.3 a) State the Barrier:} During this stage, we describe what seems most impossible about our statement of the MDR within the context of the Initial Situation. The purpose of stating the Barrier in this way is to expose what it is that is stopping us from achieving the MDR. At this stage it can be useful to think about the OTSM axiom of impossibility or, in other words, to ask: if something “impossible” happens, how might it practically happen? If the duration of the training course allows, we can teach students how to answer this question using the “Gold Fish” method proposed by G. Altshuller or the “Sword Fish” method (to assume something that cannot be assumed) developed by V. Gerasimov [6].

In the example of cleaning dishes, we might say that the main Barrier is that we have no means to make the dishes clean themselves. One question we can now start to ask is: if we did have self-cleaning dishes, how might they clean themselves?

\textbf{b) Reframe the Barrier as a contradiction:} Now we drill down into the problem to discover the objective (natural) factor which is behind the problem we stated in the Initial Situation. This also helps us to re-state the problem as a new Initial Situation with a new Negative Effect. To do this we can ask ourselves the question “what is the new Negative Effect we now have or the new Initial Situation we have to improve?”

In the case of the example of cleaning dishes, the objective factor we need to address is that without any cleaning action, the food residue will stay stuck on the dishes. In other words, the Negative Effect we now need to deal with is the food residue sticking to the dish surface.

\textbf{2.4 a) Identify “Common Sense” Solutions:} Confronted with the new Negative Effect, we now need to ask ourselves “what common sense or professional solutions might solve or partially solve this re-stated problem”. At this stage we don’t need to identify a complete solution; we simply want to attempt to move closer to the MDR.
In the case of preventing food residue from sticking to the dish surface, we might suggest a low-friction coating on the surface of the dish. This new “common sense” solution may well have further drawbacks, when tested against the MDR, which can be used as the basis for a new Initial Situation description which allows us to iterate through another cycle of the TONGS model. \textbf{b) Identify OTSM-TRIZ based solutions} using OTSM-TRIZ principles of contradiction resolution or Classical TRIZ system of Standard Inventive Solution or any other OTSM-TRIZ based method that the user might know: If we have more knowledge of TRIZ, we can apply a number of TRIZ tools at this stage, for example we can answer the following questions: “What principles of technical contradiction resolution could be useful to resolve the technical contradictionin this problem?” “What principles of OTSM-TRIZ could be used to satisfy both opposite demands for the same parameter?” “What is the Substance-Field model for this problem situation and which of the 76 Standard Inventive solutions can be used?” Once again we can test solutions generated during this step against the MDR and if necessary, use the most appropriate solution for a further cycle through the TONGS model. \section{An example of the TONGS model applied to a real problem:} The problem being solved appeared as a Request for Proposal (RFP) document on the Nine Sigma website (\url{http://www.ninesigma.com}) in July 2008. \subsection*{The problem} Against the background of a need for more fuel efficient vehicles, auto manufacturers are urgently looking for ways to reduce vehicle weight. One area which is under active investigation is the use of aluminium body panels to replace steel. Indeed, some car manufacturers such as Audi and Jaguar are already using the technique on their more expensive models. In order to form a body panel, flat sheets of metal are fed from a stack of sheets into a press (figure~2).\vskip1em \begin{minipage}{.45\textwidth}\centering \includegraphics[width=.8\textwidth]{./2.jpg}\\[1em] \textbf{Fig. 2:} De-stack sheet feeder system. \end{minipage}\hfill \begin{minipage}{.45\textwidth}\centering \includegraphics[width=.8\textwidth]{./3.jpg}\\ \textbf{Fig. 3:} Aluminium sheet feeder layout. \end{minipage}\vskip1em Steel sheets can be fed very quickly by this method – as fast as one every two seconds, but aluminium sheets can only be fed at a rate of 8 per minute. There are tried and tested methods to separate steel sheets using magnets but these don’t work for aluminium because it is non-magnetic. Also, the aluminium sheets are coated with a sticky oil film, which is needed for a previous process step and cannot be easily removed. The Nine Sigma RFP requested solutions which would allow a doubling of the feed rate for aluminium sheets. Figure 3 shows the arrangement of the aluminium sheet feeding system. Application of the TONGS model to this problem: \textbf{3.1 State the Initial Situation (IS-0):} In this problem, the Initial Situation is one where the main Negative Effect is “\emph{if the sheets are fed too quickly, the second sheet sticks to the first sheet and stops the press feeder}”. \textbf{3.2 State the Most Desirable Result (MDR-0): } For the sheet feeding problem, an MDR we might wish for is that “\emph{as the vacuum suckers arrive above the top sheet, the top sheet immediately separates itself from the second sheet and moves towards the suckers}”. 
\textbf{3.3 a) State the Barrier:} In this problem, the key barrier seems to be that “\emph{the sheets can’t separate themselves because air doesn’t have enough time to get between the top sheet and second sheet and atmospheric pressure is holding the two sheets together}”. \textbf{b) Reframe the Barrier} as a contradiction between Human desire and Natural Laws or other objective factor (OTSM axiom of a root of problems): In this problem, the objective factor which is at the root of the problem is that “\emph{it takes a certain amount of time for the air to pass between the two sheets but we need this to happen faster}”. \textbf{3.4 a) Identify “Common Sense” Solutions:} What common sense or professional solutions might solve or partially solve this re-stated problem? A possible partial solution to get air to move more quickly between the two top sheets is to set up a pressurised air feed to blow air into the gap between the two sheets. If we benchmark this solution against the MDR, we can see that while it does provide a more positive separation of the two top sheets, it also complicates the system. To continue the analysis, we will now iterate through the TONGS model using this partial solution. \textbf{\emph{Iteration 1:}} \textbf{3.5 State the Initial Situation (IS-1):} In this problem, the Initial Situation is one where the main Negative Effect is “\emph{the air blast system complicates the sheet feeder}”. \textbf{3.6 State the Most Desirable Result (MDR-1):} For the sheet feeding problem, an MDR we might wish for is that “\emph{as the vacuum suckers arrive above the top sheet, the top sheet immediately separates itself from the second sheet and moves towards the suckers \underline{without} complicating the system}”. \textbf{3.7 a) State the Barrier:} In this problem, the key barrier seems to be that “\emph{I need something to force air past the oil and between the two top sheets}”. \textbf{b) Reframe the Barrier as a contradiction:} What is the objective (natural) factor which is behind the problem we stated in the Initial Situation? Re-state the problem as a new Initial Situation with a new Negative Effect. In this problem, the objective factor which is at the root of the problem is that “\emph{the oil is stopping the air from moving between the two top sheets}”. \textbf{3.8 a) Identify “Common Sense” Solutions:} A possible partial solution to prevent the oil stopping the air is to get rid of the oil completely. If we benchmark this solution against the MDR, we now have a much simpler solution but we have lost an important function, that is, to protect the aluminium sheets. We will now iterate another time through the TONGS model using this new partial solution. \textbf{\emph{Iteration 2:}} \textbf{3.9 State the Initial Situation (IS-2):} In this problem, the Initial Situation is one where the main \textbf{Negative Effect} is “\emph{without an oil coating, the aluminium sheets are not properly protected}”. \textbf{3.10 State the Most Desirable Result (MDR-2 ):} For the sheet feeding problem, an \textbf{MDR} we might wish for is that “\emph{as the vacuum suckers arrive above the top sheet, the top sheet immediately separates itself from the second sheet and moves towards the suckers \underline{without} complicating the system and the aluminium sheets are fully protected}”. As we can see each of the iterations gives us new knowledge on the context of the particular situation. 
So the Tongs model appears to be a tool for applying the Third postulate of classical TRIZ concerning the context of a specific situation [4, 7, 8].

\textbf{3.11 a) State the Barrier:} In this problem, the key barrier seems to be that “\emph{I need something to protect the aluminium sheets but I can’t use oil}”.

\textbf{b) Reframe the Barrier as a contradiction:} In this problem, the objective factor which is at the root of the problem is that “\emph{the oil should stop the air to protect the sheet surface but shouldn’t stop the air from moving between the two top sheets}”.

\textbf{3.12 a) Identify “Common Sense” Solutions:} A solution direction suggested here is that something needs to happen to the oil, but what could it be? We will now move on to stage 4 b) to complete the analysis:

\textbf{b) Identify OTSM-TRIZ solutions} using OTSM-TRIZ principles of technical or physical contradiction resolution or the Classical TRIZ system of Standard Inventive Solutions:

“What principles of technical contradiction resolution could be useful to resolve the technical contradiction in this problem?”

Possible conflicts we have are between \emph{Speed} and \emph{Loss of Substance}, \emph{Speed} and \emph{Harmful Effects Acting on the System}, and \emph{Productivity} and \emph{Loss of Substance}. A principle which seems to recur is number 35, parameter change.

“What principles of OTSM-TRIZ could be used to satisfy both opposite demands for the same parameter?”

To identify a physical contradiction for the oil, we can state the useful action as “protects aluminium sheet” and the harmful action as “stops air moving between first and second sheets”. In order to maintain the useful action, the oil must be able to flow $\to$ liquid. In order to prevent the harmful action, the oil must not be able to flow $\to$ solid. We can separate in time and use a “low melting point oil” which can flow over the sheets to protect them and then solidify before the sheets are put into a stack, so that air can move freely between the sheets. In other words, the sheets should be coated in wax. If we benchmark this solution against the MDR, we now have a relatively simple solution and we can still fully protect the aluminium sheets. For the purpose of this example, we can now decide to stop the analysis.

\section{Some other applications of the TONGS problem solving process model}

\subsection{Applications for OTSM-TRIZ Education}

There are at least two key points to mention about the application of the “TONGS” model in the OTSM-TRIZ educational process. First of all, as we discussed at the beginning of the paper, the “TONGS” model is one of the main sub-components of the other problem solving process models which were developed in the course of Classical TRIZ and OTSM evolution. This means that learning the “TONGS” model can be a key first step towards developing a deep understanding of many other notions, theory and practical tools which should be known by TRIZ and OTSM practitioners, professionals and developers. The model helps us to understand how the theoretical background can work for practice; how TRIZ based tools can reduce the amount of trial and error without losing out on the quality of the solution for a non-typical problematic situation. The “TONGS” model also serves to help us better understand how Altshuller’s three postulates work as practical tools, helping us to narrow our research to discover the deep root of a problematic situation and develop an image of a satisfactory solution.
Each iteration of the “TONGS” model adds at least one more detail to the image of the Most Desirable Result, as well as to the understanding and clarification of the Initial Situation. The more students learn about the practical application of the three postulates, the better they can apply many other tools of Classical TRIZ and OTSM. Depending on the aims of students and teachers, we can provide a deep understanding of how Classical TRIZ works as a theory for creating new tools for solving various kinds of non-typical problems.

The second point is about the structure of the model. Understanding of the “TONGS” model structure can be used later to study any other OTSM-TRIZ based tools as well as many other methods. For example, each tool or method has an Initial Situation to which it must be applied. Also, every single step of ARIZ or other similar methods should start with an Initial Situation. Similarly, the MDR is a statement of the best possible output that should be delivered from that step or method. Finally, the core of the step is the mechanism to overcome the barrier that prevents us from obtaining the result of the step or method from our Initial Situation. This allows students to understand better what kind of difficulties (barriers) they face and how they can overcome those barriers as soon as they try to apply a particular step of ARIZ or any other methodology. In other words: the “TONGS” model can be viewed not only as a problem solving tool but as a tool for education and self-education.

\subsection{Applications for OTSM-TRIZ Users}

The more deeply users study how to apply the different TRIZ and OTSM techniques and methods for effective application of the “TONGS” model, the better they can use it for many other OTSM-TRIZ based tools. For instance: the first part of Altshuller’s ARIZ (ARIZ-85C) is based on the “HILL” model and the third part is based on the “PROBLEM FLOW” model; steps 1.1 and 1.6 are a direct application of the “TONGS” model; steps 1.2–1.5 are dedicated to improving and verifying the “TONGS” model that was created in step 1.1. When students clearly understand the meaning and practical application of the “TONGS” model for studying various tools, they can better understand various applications of those tools and how the tools are integrated into a whole system. As a result, they can develop their own combination of the tools for certain particular needs. In turn this leads to a more flexible use of various tools (not only OTSM and TRIZ based tools) to operate them for complex interdisciplinary problematic situations according to the OTSM “FRACTAL MODEL” of a problem solving process. For instance, the “TONGS” model was used to develop an OTSM interpretation of Altshuller’s Law of Completeness of a Technical System. In turn this interpretation was used to create the OTSM Negative System technique, OTSM express analysis of an initial situation, the OTSM Network of Problems/Solutions, etc.

As we saw in the earlier example, the “TONGS” model can be used to clarify both the deep root of an Initial Situation and the more detailed image of a satisfactory solution as close to the MDR as possible. One of the most important applications of the “TONGS” model is the description of sub-problems, which can be used for creating an OTSM Network of Problems that can then be used to discover the bottleneck of a problematic situation and for the evaluation of obtained solutions [9,10].
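As a purely illustrative aside (this sketch is ours, not the paper’s, and every name in it is invented), the outcome of each pass through the “TONGS” model can be recorded as a small data structure, so that the accumulated sub-problems and partial solutions can later be fed into such a network:
\begin{verbatim}
# Illustrative Python sketch (not from the paper; all names invented).
# One record per pass through the TONGS model, mirroring the worked
# example of section 3 (IS-n, MDR-n, barrier, contradiction, solutions).

class TongsIteration:
    def __init__(self, initial_situation, mdr, barrier,
                 contradiction, partial_solutions):
        self.initial_situation = initial_situation
        self.mdr = mdr
        self.barrier = barrier
        self.contradiction = contradiction
        self.partial_solutions = partial_solutions

iterations = [
    TongsIteration(
        initial_situation="second sheet sticks and stops the feeder",
        mdr="top sheet separates itself and moves to the suckers",
        barrier="air has no time to get between the two sheets",
        contradiction="air needs time to pass between sheets, faster needed",
        partial_solutions=["blow pressurised air between the sheets"]),
    # iterations 1 and 2 would follow, each starting from the drawback
    # of the previous partial solution, as in the worked example above.
]
\end{verbatim}
Such records are only a book-keeping aid; the analytical work remains in the model itself.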
The “TONGS” model is also important as a tool to split initial problems into several sub-problems to be solved in order to obtain an appropriate satisfactory solution [11]. In this application the “TONGS” model can be used for clarification of the OTSM Network of Problems and Solutions, as well as an independent tool to clarify an initial situation during the problem solving process.

\subsection{Applications for OTSM-TRIZ developers and the creation of new tools}

When TRIZ and OTSM professionals start using TRIZ and OTSM as a theoretical basis on which to create new problem solving tools, they can apply the “TONGS” model to identify barriers that the particular new tools should be able to overcome in the course of a problem solving process in general. Then we can set out to answer the problem by creating new problem solving tools, methods or techniques, or just by clarifying a particular step of an existing method or tool.

\section{Conclusions}

More than 60 years of using the “TONGS” model in practice for TRIZ based problem solving makes this model very useful for study by beginners, professionals and developers of tools for problem solving. It is a versatile, domain-free tool that can be used right away in many areas of human activity. In turn this gives much more freedom to beginners than the study of other simple empiric tools of classical TRIZ, like the matrix and 40 principles, that appeared before TRIZ was formed as a mature theory. With the “TONGS” model, many simplified tools can be used more effectively because it helps us to pose the problem correctly before starting to solve it right away. As we know, most beginners try to solve problems right away as soon as they hear the initial problem description. With the “TONGS” model they learn the importance of the MDR as a guide to a satisfactory solution, which helps decrease the amount of useless trial and error. Each iteration with the “TONGS” model leads to a better description of the MDR, and we can pose the problem correctly and apply appropriate tools. Looked at in this way, it is difficult to overestimate the value of the “TONGS” model for the TRIZ and OTSM educational process.

Understanding the “TONGS” model allows professionals to use existing tools more effectively and to be more fluent in the application of various tools from the OTSM-TRIZ toolboxes.

For developers of new problem solving tools, the “TONGS” model provides a framework for specifying requirements for new tools, methods or new steps in existing tools. In turn this allows us to pose the problem about the importance of a new tool and/or process step in a clear form. When we obtain a proposal to improve existing tools or create new ones, we can use this form for preliminary evaluation of the proposals we have developed. Of course the “TONGS” model cannot replace real life evaluation, but preliminary evaluation of the tools can bring some more improvement before testing them on real problems. Preliminary evaluation is also helpful in developing several options of the tool, with final selection of the best one through practical application.

Last but not least, we should stress that the “TONGS” model is a powerful domain-free educational tool for OTSM-TRIZ teachers that allows them to reduce the overall time needed to educate students to a good professional level. The “TONGS” model builds the ability of students to solve problems based on Classical TRIZ ideology from the very first steps of their education.
Continued use of the model provides a framework for students to learn about more advanced models and tools for problem solving in terms of both the problem context and evaluation of very strong solutions. \section{Summary} This paper describes the “TONGS” model, one of the very first TRIZ tools, and discussed how the tool is still very relevant today, being useful for both TRIZ novices and established TRIZ users. The “TONGS” model provides an important “frame” for the problem solving process, or sub-steps within a more complex process, and gives a simple objective means to determine if the problem solving process is progressing in the right direction. \section*{References} \begin{itemize} \item[{[1]}] Altshuller G.S., Shapiro R.B. (1956). Psychology of inventive creativity. Voprosi Psihologii, 6, 37–49. \item[{[2]}] Altshuller G.S. (1986). The history of ARIZ evolution. Simferopol. Manuscript (In Russian). \item[{[3]}] Altshuller G.S. (1975). The Inventive Problem Solving Process: fundamental steps and mechanisms. Manuscript. (In Russian). [\foreignlanguage{russian}{Г.С. Альтшуллер. Процесс решения изобретательской задачи: основные этапы и механизмы. Рукопись. Баку 1975}] \item[{[4]}] Khomenko N. (1999). Education Materials for OTSM Development: State of Art 1980–1997, LG-Electronics Learning Center, Piangteck, South Korea (in English). \item[{[5]}] Khomenko N. (2004). Materials for OTSM modules of the course master in innovation design. Strasbourg: INSA. \item[{[6]}] Gerasimov V. To assume something that that cannot be assumed.\\ \url{http://www.trizminsk.org/e/212004.htm}. \item[{[7]}] Khomenko N., Ashtiany M. (2007). Classical TRIZ and OTSM as a scientific theoretical background for non-typical problem solving instruments. Proceedings of TRIZ-Future 2007, Frankfurt, Germany. \item[{[8]}] Altshuller G.S. (1979). The equations of thinking. (In Russian). [\foreignlanguage{russian}{Г.С. Альтшуллер. Формулы талантливого мышления. Журнал «Техника и Наука», 1979 No. 3, с. 29-30}.] \item[{[9]}] Khomenko N., Kaikov I., Shenk, E. (2006). OTSM-TRIZ Problem network technique: application to the history of German high-speed trains. Proceedings of the TRIZ-Future 2006, Kortrjik, Belgium. \item[{[10]}] Khomenko N., De Guio R., Lelait L., Kaikov I. (2007). A Framework for OTSM-TRIZ Based Computer Support to be used in Complex Problem Management. International Journal of Computer Application in Technology (IJCAT). Volume 30 issue 1/2, 2007. \item[{[11]}] Khomenko N., Kucheriavy D. (2002). OTSM-TRIZ problem solving process: solutions and their classification. Proceedings of the TRIZ-Future 2002, Strasbourg, France. \end{itemize} \end{document}
{ "alphanum_fraction": 0.7900372136, "avg_line_length": 52.908462867, "ext": "tex", "hexsha": "e1519d12b8fcb100abe0e5e32b5f7f8c68c7e5cf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "445b25b8a6f5d03e41a98c28a60c38003e9b84a4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "wumm-project/OpenDiscovery", "max_forks_repo_path": "Sources/Khomenko_NN/Tongs-2007-en.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "445b25b8a6f5d03e41a98c28a60c38003e9b84a4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "wumm-project/OpenDiscovery", "max_issues_repo_path": "Sources/Khomenko_NN/Tongs-2007-en.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "445b25b8a6f5d03e41a98c28a60c38003e9b84a4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "wumm-project/OpenDiscovery", "max_stars_repo_path": "Sources/Khomenko_NN/Tongs-2007-en.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-21T08:48:43.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-21T08:48:43.000Z", "num_tokens": 7374, "size": 30634 }
\documentclass[letterpaper,twocolumn]{article}
\usepackage[utf8]{inputenc}

\title{LaTeX Line and Page Breaking}
\date{}
\author{Andrés Hurtado López}

\begin{document}
\maketitle

\section{Introduction}
The first thing LaTeX does when processing ordinary text is to translate your input file into a string of glyphs and spaces. To produce a printed document, this string must be broken into lines, and these lines must be broken into pages. In some environments, you do the line breaking yourself with the \textbackslash\textbackslash~command, but LaTeX usually does it for you. The available commands are:
\begin{itemize}
\item \textbf{\textbackslash \textbackslash}~ start a new line.
\item \textbf{\textbackslash \textbackslash *} start a new line but not a new page.
\item \textbf{\textbackslash -} OK to hyphenate a word here.
\item \textbf{\textbackslash cleardoublepage} flush all material and start a new page, starting a new odd-numbered page.
\item \textbf{\textbackslash clearpage} flush all material and start a new page.
\item \textbf{\textbackslash hyphenation} enter a sequence of exceptional hyphenations.
\item \textbf{\textbackslash linebreak} allow a line break here.
\item \textbf{\textbackslash newline} request a new line.
\item \textbf{\textbackslash newpage} request a new page.
\item \textbf{\textbackslash nolinebreak} no line break should happen here.
\item \textbf{\textbackslash nopagebreak} no page break should happen here.
\item \textbf{\textbackslash pagebreak} encourage a page break.
\end{itemize}

\section{\textbackslash \textbackslash}
\begin{quote}
\textbackslash \textbackslash [ * ] [ extra-space ]
\end{quote}
The \textbackslash \textbackslash~ command tells LaTeX to start a new line. It has an optional argument, extra-space, that specifies how much extra vertical space is to be inserted before the next line. This can be a negative amount. The \textbackslash \textbackslash * command is the same as the ordinary \textbackslash \textbackslash~ command except that it tells LaTeX not to start a new page after the line.

\section{\textbackslash -}
The \textbackslash - command tells LaTeX that it may hyphenate the word at that point. LaTeX is very good at hyphenating, and it will usually find all correct hyphenation points. The \textbackslash - command is used for the exceptional cases, e.g. man\textbackslash-u\textbackslash-script.

\section{\textbackslash cleardoublepage}
The \textbackslash cleardoublepage command ends the current page and causes all figures and tables that have so far appeared in the input to be printed. In a two-sided printing style, it also makes the next page a right-hand (odd-numbered) page, producing a blank page if necessary.

\section{\textbackslash clearpage}
The \textbackslash clearpage command ends the current page and causes all figures and tables that have so far appeared in the input to be printed.

\section{\textbackslash hyphenation\{words\}}
The \textbackslash hyphenation command declares allowed hyphenation points, where words is a list of words, separated by spaces, in which each hyphenation point is indicated by a - character, e.g. \textbackslash hyphenation\{man-u-script man-u-scripts ap-pen-dix\}

\section{\textbackslash linebreak}
\begin{quote}
\textbackslash linebreak[number]
\end{quote}
The \textbackslash linebreak command tells LaTeX to break the current line at the point of the command. With the optional argument, number, you can convert the \textbackslash linebreak command from a demand to a request.
The number must be a number from 0 to 4. The higher the number, the more insistent the request is. The \textbackslash linebreak command causes LaTeX to stretch the line so it extends to the right margin.

\section{\textbackslash newline}
The \textbackslash newline command breaks the line right where it is. The \textbackslash newline command can be used only in paragraph mode.

\section{\textbackslash newpage}
The \textbackslash newpage command ends the current page.

\section{\textbackslash nolinebreak}
\begin{quote}
\textbackslash nolinebreak[number]
\end{quote}
The \textbackslash nolinebreak command prevents LaTeX from breaking the current line at the point of the command. With the optional argument, number, you can convert the \textbackslash nolinebreak command from a demand to a request. The number must be a number from 0 to 4. The higher the number, the more insistent the request is.

\section{\textbackslash nopagebreak}
\begin{quote}
\textbackslash nopagebreak[number]
\end{quote}
The \textbackslash nopagebreak command prevents LaTeX from breaking the current page at the point of the command. With the optional argument, number, you can convert the \textbackslash nopagebreak command from a demand to a request. The number must be a number from 0 to 4. The higher the number, the more insistent the request is.

\section{\textbackslash pagebreak}
\begin{quote}
\textbackslash pagebreak[number]
\end{quote}
The \textbackslash pagebreak command tells LaTeX to break the current page at the point of the command. With the optional argument, number, you can convert the \textbackslash pagebreak command from a demand to a request. The number must be a number from 0 to 4. The higher the number, the more insistent the request is.

\end{document}
{ "alphanum_fraction": 0.7986425339, "avg_line_length": 71.6756756757, "ext": "tex", "hexsha": "5accd0dd8d44c4881882427591b093d56a135b7c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "122077454dff2eba954c33cec9201e23cfc22ff8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andres-hurtado-lopez/LaTeX_template", "max_forks_repo_path": "help/texbreaksguide.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "122077454dff2eba954c33cec9201e23cfc22ff8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andres-hurtado-lopez/LaTeX_template", "max_issues_repo_path": "help/texbreaksguide.tex", "max_line_length": 403, "max_stars_count": null, "max_stars_repo_head_hexsha": "122077454dff2eba954c33cec9201e23cfc22ff8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andres-hurtado-lopez/LaTeX_template", "max_stars_repo_path": "help/texbreaksguide.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1306, "size": 5304 }
\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{listings}
\usepackage{color}

\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}

\lstset{frame=tb,
  language=Java,
  aboveskip=3mm,
  belowskip=3mm,
  showstringspaces=false,
  columns=flexible,
  basicstyle={\small\ttfamily},
  numbers=none,
  numberstyle=\tiny\color{gray},
  keywordstyle=\color{blue},
  commentstyle=\color{dkgreen},
  stringstyle=\color{mauve},
  breaklines=true,
  breakatwhitespace=true,
  tabsize=3
}

\graphicspath{ {image/} }

\title{CMPE 434 - Introduction to Robotics}
\subtitle{Lab 4: Odometry Calibration}
\date{Deadline: October 21, 2019}

\begin{document}
\maketitle

In this lab, we will be dealing with the odometry calibration of a differential-drive robot. We will use our existing robots, as they are typical differential-drive robots.

\section{Things to do:}
\subsection{Requirements}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Write a Python program that is able to make your robot follow a square trajectory of 2m $\times$ 2m, both clockwise and counterclockwise, for a specified number of times.
\item Download the "UMBmark Tutorial" from the Downloads section of the course web site. Also, you can download and use the "Odometry Calibration" paper there.
\item Perform \textit{UMBmark} 5 times and record the observed systematic errors.
\item Plot the recorded data similarly to Figure~\ref{fig:data_plot}.
\item Report the systematic error $E_{max}$ as shown in the tutorial.
\item Analyze the \textbf{Type A} and \textbf{Type B} errors of your robots. Try to explain the possible causes of these errors.
\item Apply the correction techniques for compensating systematic errors and report your calculations and results. Also report the differences from the initial version by performing the same tests and comparing the results in terms of $E_{max}$.
\end{enumerate}

\begin{figure}
\centering
\includegraphics[scale=1]{image/umbmark_plot.png}
\caption{Typical results from UMBmark experiments}
\label{fig:data_plot}
\end{figure}

\textbf{Note:} You are expected to shoot videos for both the initial version and the corrected version of your robot. Combining the two videos into a single video is encouraged. Also, you can include only one run of the experiments.

\end{document}
{ "alphanum_fraction": 0.7680623974, "avg_line_length": 32.0526315789, "ext": "tex", "hexsha": "3741d7b1b7646c2ea17936e6f738deed1806c3c2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "35a08231ff96c7ecd5b8c1005b82a0f2f1588d7c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "yildirimyigit/cmpe434-hw-descriptions", "max_forks_repo_path": "lab_4/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "35a08231ff96c7ecd5b8c1005b82a0f2f1588d7c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "yildirimyigit/cmpe434-hw-descriptions", "max_issues_repo_path": "lab_4/main.tex", "max_line_length": 171, "max_stars_count": null, "max_stars_repo_head_hexsha": "35a08231ff96c7ecd5b8c1005b82a0f2f1588d7c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "yildirimyigit/cmpe434-hw-descriptions", "max_stars_repo_path": "lab_4/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 671, "size": 2436 }
\chapter{ODHQL-Syntax}\label{app:odhql-syntax}

% in python console:
% import re
% grammar = """ <copy grammar from parser.py here """
% print '\n\n'.join([re.sub('\s?([a-zA-Z]+)\s', ' <\\1> ', l) for l in grammar.splitlines() if len(l) > 0])
% copy result into the grammar env below

\begin{grammar}
\small

<UnionQuery> ::= <Query> ( "union" <Query> )* ( <OrderByList> )?

<Query> ::= <FieldSelectionList> <DataSourceSelectionList> ( <FilterList> )?

<FieldSelectionList> ::= "select" <FieldSelection> ( "," <FieldSelection> )*

<FieldSelection> ::= <Field> | <Expression> "as" <Alias>

<CaseExpression> ::= "case" ( "when" <Condition> "then" <Expression> )+ ( "else" <Expression> )? "end"

<Expression> ::= <Function> | <LiteralExpression> | <Field> | <CaseExpression>

<Function> ::= <Identifier> "(" ( <FunctionArgumentList> )? ")"

<FunctionArgumentList> ::= <Expression> ( ( "," <Expression> )* )?

<Field> ::= <DataSourceNameOrAlias> "." <FieldName>

<DataSourceNameOrAlias> ::= <DataSourceName> | <Alias>

<DataSourceSelectionList> ::= "from" <DataSourceName> ( "as"? <Alias> )? ( <JoinDefinition> )*

<JoinDefinition> ::= ( "left" | "right" | "full" )? "join" <DataSourceName> ( "as"? <Alias> )? "on" <JoinCondition>

<JoinCondition> ::= <SingleJoinCondition> | "(" <SingleJoinCondition> ( "and" <SingleJoinCondition> )* ")"

<SingleJoinCondition> ::= <Expression> "=" <Expression>

<FilterList> ::= "where" <FilterAlternative>

<FilterAlternative> ::= <FilterCombination> ( "or" <FilterCombination> )*

<FilterCombination> ::= <Condition> ( "and" <Condition> )*

<Condition> ::= <BinaryCondition> | <InCondition> | <IsNullCondition> | <PredicateCondition> \\ | "(" <FilterAlternative> ")"

<BinaryCondition> ::= <Expression> <BinaryOperator> <Expression>

<BinaryOperator> ::= "=" | "!=" | "<=" | "<" | ">=" | ">" | ( "not" )? "like"

<InCondition> ::= <Expression> ( "not" )? "in" "(" <Expression> ( "," <Expression> )* ")"

<IsNullCondition> ::= <Field> "is" ( "not" )? <Null>

<PredicateCondition> ::= ( "not" )? <Function>

<OrderByList> ::= "order" "by" <OrderByField> ( "," <OrderByField> )*

<OrderByField> ::= ( <Field> | <Alias> | <Position> ) ( "asc" | "desc" )?

<Integer> ::= ( "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" )+

<LiteralExpression> ::= <SingleQuotedString> | <Number> | <Boolean> | <Null>

<Number> ::= <Integer> | <Float>

<Float> ::= <Integer> "." <Integer>

<Boolean> ::= "true" | "false"

<Null> ::= "null"

<SingleQuotedString> ::= "string in single quotes"

<DoubleQuotedString> ::= "string in double quotes"

<DataSourceName> ::= <Identifier>

<FieldName> ::= <Identifier>

<Alias> ::= <Identifier>

<Identifier> ::= ( "a..z" | "A..Z" | "_" ) ( "a..z" | "A..Z" | "_" | <Integer> )* | <DoubleQuotedString>
\end{grammar}
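To make the grammar easier to read, the following shows one query that can be derived from the rules above. It is purely illustrative: the data source names, field names and the function name are invented for this appendix and do not refer to any real dataset or to functions that necessarily exist in ODHQL.

\begin{verbatim}
select e.name,
       upper(d.label) as department
from employees as e
  join departments as d on e.dept_id = d.id
where e.salary >= 50000 and d.label is not null
order by department asc
\end{verbatim}

Here \texttt{e.name} is a Field, \texttt{upper(d.label)} is a Function used as an Expression with an Alias, the join uses a single JoinCondition, and the filter combines a BinaryCondition with an IsNullCondition.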
{ "alphanum_fraction": 0.569136224, "avg_line_length": 33.2840909091, "ext": "tex", "hexsha": "aa6ed3bab85e234c3116d9c6dbc0f57aa7461f41", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0cf4e61d5cfbee2ea4afa3ddd9b6abb601df8686", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hsr-ba-fs15-dat/ba-doc", "max_forks_repo_path": "content/appendix/odhql.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0cf4e61d5cfbee2ea4afa3ddd9b6abb601df8686", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hsr-ba-fs15-dat/ba-doc", "max_issues_repo_path": "content/appendix/odhql.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "0cf4e61d5cfbee2ea4afa3ddd9b6abb601df8686", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hsr-ba-fs15-dat/ba-doc", "max_stars_repo_path": "content/appendix/odhql.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 903, "size": 2929 }
% ---------------------------------------------------------
% Project: PhD KAPPA
% File: background.pca.tex
% Author: Andrea Discacciati
%
% Purpose: Section PCa (background)
% ---------------------------------------------------------

\section{Prostate cancer}

% Disease description
%\subsection{Disease description}

Prostate cancer is the development of cancer in the prostate, a gland in the male reproductive system that is located just below the bladder, surrounding the urethra. More than 90\% of all prostate cancers develop from the gland cells and are referred to as adenocarcinomas. Early prostate cancer is generally asymptomatic. When symptoms do occur, they include increased frequency of urination, painful urination (dysuria), blood in the urine (hematuria), and erectile dysfunction. This group of symptoms is known as lower urinary tract symptoms. If the cancer has metastasized to the bones, it can also cause bone pain, especially in the vertebrae, ribs, or pelvis.

Prostate cancer is a very heterogeneous disease, ranging from indolent and slow-growing tumors to aggressive and fast-developing tumors (figure \ref{fig:heterogeneitypca}). The majority of prostatic carcinomas are, however, slow-growing, and the time period between onset and clinical presentation of the disease can span several years. Men with this subtype of disease are likely to die from unrelated causes, such as cardiovascular diseases. At the other extreme there are aggressive cancers, which grow fast and may metastasize to the bone or lymph nodes, eventually causing premature death. Figure \ref{fig:naturalcourse} schematically exhibits the natural course of prostate cancer.

\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/heterogeneitypca.jpg}
\end{center}
\caption[Heterogeneity of prostate cancer progression]{Heterogeneity of prostate cancer progression. The arrow labeled ``fast'' represents a fast-growing cancer, one that quickly leads to symptoms and to death. The arrow labeled ``slow'' represents a slow-growing cancer, one that leads to symptoms and death but only after many years. The arrow labeled ``very slow'' represents a cancer that never causes problems because the patient will die of some other cause before the cancer is large enough to produce symptoms. The arrow labeled ``non-progressive'' represents cellular abnormalities that meet the pathological definition of cancer but never grow to cause symptoms. Reproduced with permission from \citet{welch_overdiagnosis_2010}.}
\label{fig:heterogeneitypca}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/naturalcourse.jpg}
\end{center}
\caption[Natural history of prostate cancer]{Natural history of prostate cancer. This figure illustrates the course of prostate cancer from initiation (A), to diagnosis by screening (B), to diagnosis by clinical symptoms (C), to clinically detectable metastatic disease (D), and finally to death from prostate cancer (E). Reproduced with permission from \citet{salinas_prostate_2014}.}
\label{fig:naturalcourse}
\end{figure}

% Descriptive epidemiology
\subsection{Descriptive epidemiology}
\label{section:descriptiveepidemiology}

\subsubsection{Incidence}

Prostate cancer was the second most common cancer in men worldwide and the most common one in more developed regions in 2012 \citep{ferlay_cancer_2015}.
The age-standardized Incidence Rates (IRs) showed large geographic variation, with the highest rates observed in Australia/New Zealand (111.6 cases per 100,000 men), Northern America (97.2 cases per 100,000 men), and Western Europe (94.9 cases per 100,000 men). In contrast, the lowest IRs were observed in Asia (9.4 cases per 100,000 men) (figure \ref{fig:worldincpca}). Geographical differences were also present within Europe, where, for example, the age-standardized IR in Sweden was estimated to be around 1.7 times that in Italy \citep{ferlay_cancer_2015}.

\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/incidencepca.png}
\end{center}
\caption[Age-standardized prostate cancer incidence rates per 100,000 men, worldwide, 2012]{Age-standardized prostate cancer incidence rates per 100,000 men, worldwide, 2012. Rates are age-standardized to the World population. Source: GLOBOCAN 2012 (IARC).}
\label{fig:worldincpca}
\end{figure}

In Sweden, the incidence of prostate cancer increased steadily during the period 1960--2004, with a steeper increase starting from the mid-1990s, and has been stable or even slightly decreasing since then (1.5\% average yearly decrease during 2004--2013) (figure \ref{fig:incmortsweden}). On average, around 10,000 cases were diagnosed every year during the period 2009--2013, and they amounted to 34\% of the total cancer diagnoses \citep{engholm_nordcan_2015}. Geographic variation is also present within Sweden, where an almost 2-fold difference in prostate cancer incidence was observed between counties according to NPCR data from 2000--2001 \citep{stattin_geographical_2005}.

\subsubsection{Mortality}

Prostate cancer was the fifth leading cause of cancer death in men worldwide in 2012, with an estimated total of 307,000 deaths (7\% of the overall male cancer mortality) \citep{ferlay_cancer_2015}. Geographical variation was less pronounced for mortality than for incidence (figure \ref{fig:worldmorpca}). Unlike incidence, the highest age-standardized Mortality Rates (MRs) were observed in populations of African descent. However, the lowest MRs were, similarly to incidence, observed in Asia (3.8 deaths per 100,000 men). Northern America showed slightly lower age-standardized MRs as compared with Europe (9.8 versus 11.3 deaths per 100,000 men) \citep{ferlay_cancer_2015}.

In Sweden, MRs have been relatively stable over time (figure \ref{fig:incmortsweden}), with a 2.7\% average yearly decrease during the period 2004--2013. Still, around 2,400 men died on average every year and accounted for 21\% of all cancer deaths (2009--2013) \citep{engholm_nordcan_2015}. The 5-year relative survival among men diagnosed with prostate cancer was around 90\%, while the 10-year survival was around 80\% (as of 2012), showing a steady increase over time \citep{socialstyrelsen_cancer_2013}.

\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/incmortsweden.pdf}
\end{center}
\caption[Age-standardized prostate cancer incidence and mortality rates per 100,000 men, Sweden, 1952--2013]{Age-standardized prostate cancer incidence and mortality rates per 100,000 men, Sweden, 1952--2013. Rates are age-standardized to the World population. The vertical axis is on the natural log scale.
Data source: NORDCAN.}
\label{fig:incmortsweden}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/mortalitypca.png}
\end{center}
\caption[Age-standardized prostate cancer mortality rates per 100,000 men, worldwide, 2012]{Age-standardized prostate cancer mortality rates per 100,000 men, worldwide, 2012. Rates are age-standardized to the World population. Source: GLOBOCAN 2012 (IARC).}
\label{fig:worldmorpca}
\end{figure}

% Classification of prostate cancer
\subsection{Classification}
\label{section:classification}

As a consequence of prostate cancer heterogeneity, its classification into risk categories at the time of diagnosis has the important objective of grouping patients with a similar prognosis. This serves primarily to guide treatment recommendations, but also to allow comparison of clinical, pathological, and epidemiologic data coming from different sources. Different classification criteria have been developed with the aim of improving risk stratification \citep{cooperberg_university_2005, boorjian_mayo_2008, heidenreich_eau_2014, mohler_prostate_2014}. These criteria are generally based on a combination of the Tumor Node Metastasis (TNM) staging system, the Gleason grading system, and the PSA serum level at diagnosis. In Sweden, the criterion used by the NPCR is based on an adapted version of the National Comprehensive Cancer Network classification scheme and has been slightly modified in recent years \citep{npcr_prostatacancer_2013} (table \ref{table:classifpca}).

In practice, epidemiologic studies employ different and sometimes inconsistent criteria to classify prostate cancer. Moreover, the same terms are often used to refer to subtypes of cancer defined in different ways, thus complicating the interpretation and comparison of the results. For example, the term `advanced' prostate cancer is variably defined as higher grade, later stage, presence of metastatic disease or death, stage C or D on the Whitmore/Jewett scale, or different combinations of these.

\begin{table}[]
\centering
\caption[Risk categories according to the NPCR classification criteria]{Risk categories according to the NPCR classification criteria \citep{npcr_prostatacancer_2013}.}
\label{table:classifpca}
\begin{tabularx}{\textwidth}{rlX}
\hline
 & {\bf Risk category} & \multicolumn{1}{c}{{\bf Criterion}} \\ \hline
1. & Low-risk & T1--2, Gleason Score 2--6, PSA \textless 10 \ngml \\
1a. & Very low-risk & T1c, PSA \textless 10 \ngml, Gleason Score 2--6, no more than 2 biopsy cores with cancer, total length of biopsies \textless 4 mm \\
1b. & Low-risk (other) & Low-risk not categorized as 1a \\
1c. & Low-risk (missing) & Missing information for low-risk categorization according to 1a and 1b \\
2. & Intermediate-risk & T1--2, Gleason Score 7, \textit{and}/\textit{or} 10 $\le$ PSA \textless 20 \ngml \\
3a. & Localized high-risk & T1--2, Gleason Score 8--10, \textit{and}/\textit{or} 20 $\le$ PSA \textless 50 \ngml \\
3b. & Locally advanced & T3, PSA \textless 50 \ngml \\
4. & Regionally metastatic & T4 \textit{and}/\textit{or} N1 \textit{and}/\textit{or} 50 $\le$ PSA \textless 100 \ngml, \textit{and} Mx--0 \\
5. & Distant metastases & M1 \textit{and}/\textit{or} PSA $\ge$ 100 \ngml \\
6. & Missing & Missing information for categorization \\ \hline
\end{tabularx}
\end{table}

% PSA
\subsection{Prostate-specific antigen testing}

PSA is an enzyme produced by the prostate's epithelial cells and its primary function is to liquefy the semen in the seminal coagulum.
Low PSA levels are present in the blood of healthy men and tend to increase naturally with age \citep{lilja_prostatespecific_2008}. However, abnormally high PSA levels may be a sign of prostate cancer or other prostatic diseases, such as benign prostatic hyperplasia or prostatitis --- that is, inflammation of the prostate. This reflects the fact that this enzyme is organ-specific but not prostate cancer--specific.

In the U.S., PSA testing was introduced in the late 1980s and approved by the Food and Drug Administration as a prostate cancer diagnostic marker in 1994 \citep{lilja_prostatespecific_2008}. The rationale behind this test is to detect prostate cancer early on, giving the possibility of intervening with curative treatments and, as a result, reducing the mortality from the disease. However, two problems related to the PSA test are its low sensitivity and the risk of overdiagnosis --- that is, the diagnosis of a cancer ``that would otherwise not go on to cause symptoms or death'' \citep{welch_overdiagnosis_2010}. Using data from the placebo arm of the Prostate Cancer Prevention Trial (PCPT), it was estimated that the test sensitivity is 24 and 35\% for cut-offs of 3 and 4 \ngml{}, respectively \citep{thompson_effect_2006}. More generally, PSA had a discrimination ability of 0.68, as measured by the area under the ROC curve \citep{thompson_effect_2006}. Overdiagnosis, which has been estimated to be in the range of 23--67\% for prostate cancer \citep{draisma_lead_2009, welch_overdiagnosis_2010}, can have a major impact on a man's life both in terms of psychological burden due to the cancer diagnosis and in terms of side effects following unnecessary treatment. Lastly, there is no conclusive evidence that PSA screening can in fact be useful to reduce prostate cancer mortality \citep{cuzick_prevention_2014}, and the two largest randomized trials on this matter --- the Prostate, Lung, Colorectal, and Ovarian Cancer (PLCO) screening trial, and the European Randomized Study of Screening for Prostate Cancer (ERSPC) trial --- showed conflicting results. Namely, the PLCO trial observed no evidence of a decrease in prostate cancer mortality comparing systematic screening versus opportunistic screening, whereas the ERSPC trial observed a 21\% reduction in screened versus unscreened men. A recent study using data from Swedish registers showed that more-intense PSA screening decreased prostate cancer--specific mortality as compared with opportunistic screening, which might reconcile the findings from the PLCO and ERSPC trials \citep{stattin_prostate_2014}. The value of using PSA as a screening tool in the general population remains, however, controversial \citep{cuzick_prevention_2014}.

Sweden does not have, to date, a national screening program for prostate cancer. Socialstyrelsen [the National Board for Health and Welfare (NBHW)] carried out an extensive literature review in 2013 and recommended against the introduction of a screening program \citep{socialstyrelsen_screening_2013}.\footnote{``Hälso- och sjukvården bör inte erbjuda screening för prostatacancer med test av prostataspecifikt antigen (PSA).''} Nevertheless, non-systematic, opportunistic PSA testing has increased over time since its introduction in the 1990s, which can explain the increase in prostate cancer incidence shown in figure \ref{fig:incmortsweden} \citep{jonsson_uptake_2011, nordstrom_prostatespecific_2013, socialstyrelsen_screening_2013}.
It has been estimated that around half of Swedish men aged 55--69 years have been PSA-tested, with large regional differences \citep{jonsson_uptake_2011}. In Stockholm County, the proportion of the 2011 male population that had been tested during the previous 9 years was estimated to be between 46 and 77\%, depending on the age group considered \citep{nordstrom_prostatespecific_2013}.

% Risk factors
\subsection{Risk factors}

The etiology of prostate cancer is poorly understood, with the only established risk factors being age, family history of the disease, and race/ethnicity. To date, prostate cancer is not clearly linked to any preventable risk factors \citep{cogliano_preventable_2011, discacciati_lifestyle_2014, wcrf_continuous_2014}. At the same time, WCRF and AICR recently updated the findings from the 2007 Second Expert Report in their 2014 Continuous Update Project. The conclusions from the 2014 report read ``there is strong evidence that being overweight or obese increases the risk of advanced prostate cancer (being overweight or obese is assessed by body mass index (BMI), waist circumference and waist-hip ratio)'' \citep{wcrf_continuous_2014}. The degree of evidence for body fatness being associated with advanced prostate cancer, however, still does not reach the highest possible level of `strong evidence --- convincing'.

\subsubsection{Non-modifiable risk factors}

Age is the strongest risk factor for prostate cancer. Diagnosis is very uncommon in men younger than 40 years and mortality is rare before the age of 50 years. It has been estimated that only 25\% of the incident cases in Europe in 2012 were diagnosed before the age of 65 years \citep{ferlay_cancer_2015}. Similarly, in Sweden, only 30\% of those men who received a prostate cancer diagnosis in 2013 were younger than 65 years of age \citep{socialstyrelsen_cancerincidens_2014}. Incidence of prostate cancer increases sharply after the age of 55 years, peaks around 70--74 years of age, and declines slightly thereafter \citep{ferlay_cancer_2015}. This steep trend in the age-incidence curve has been observed in multiple populations, including populations where PSA screening was completely absent \citep{armitage_age_1954}. Early-onset prostate cancer --- that is, prostate cancer diagnosed in men younger than 55 years of age --- has been suggested to be a distinct phenotype, both from an etiological and clinical point of view \citep{salinas_prostate_2014}.

The risk of developing prostate cancer among men who have a first-degree relative with prostate cancer is around 2.5 times the risk among men without a diagnosed first-degree relative \citep{zeegers_empiric_2003, kicinski_epidemiological_2011}. This risk increases with decreasing age of the proband, with increasing number of affected relatives, and if the affected relative is a brother rather than the father. Family history is also associated with prostate cancer mortality \citep{brandt_agespecific_2010}. Familial aggregation of prostate cancer is largely due to genetic factors, as suggested by twin studies, where heritability was estimated to be around 30--40\% \citep{ahlbom_cancer_1997, lichtenstein_environmental_2000, eeles_identification_2013}. In the last 10 years, more than 70 low-penetrance susceptibility loci have been identified through genome-wide association studies \citep{goh_germline_2014}.
Familial aggregation can, however, also be partly explained by increased screening propensity among men with a family history of prostate cancer \citep{bratt_effects_2010}.

Racial/ethnic variation in prostate cancer risk is very pronounced, too. In the U.S., during the period 2007--2011 (most recent available data), African-American men were observed to have around 60\% higher incidence and 140\% higher mortality as compared with Caucasian men. Conversely, Hispanic men had approximately 10\% lower incidence and mortality \citep{acs_cancer_2015}. These differences are partially due to a combination of genetic and lifestyle factors, but disparities in socioeconomic status, as well as in access to health care and prostate cancer screening, may also contribute to explaining the observed variation \citep{jones_explaining_2008}. Geographical variation is also substantial (figure \ref{fig:worldincpca}). Although this geographic variability can be explained by differences in screening programs and in genetic factors, results from migrant studies support the hypothesis that lifestyle factors might play a role in prostate cancer etiology \citep{wilson_lifestyle_2012}.

%TODO: add subsubsection on other modifiable risk factors?

\subsubsection{Body mass index}

Since body adiposity is related to both hormonal and metabolic pathways and since prostate cancer is a hormone-related cancer \citep{hsing_obesity_2007}, the investigation of a possible association between body fatness and prostate cancer risk has received considerable attention in epidemiologic research. The picture regarding this potential association has become clearer and more nuanced during the last 10 years or so.

BMI is probably the most common proxy for body adiposity in epidemiologic studies.\footnote{BMI is calculated as \kgmsq{} --- that is, weight in kilograms divided by the square of height in meters.} In fact, weight and height can be measured relatively simply and accurately even in large populations, unlike waist circumference or waist-to-hip ratio. BMI may be inadequate to measure body adiposity for a single individual, but it has been observed to correspond reasonably well with percentage body fat within sex and age groups \citep{flegal_comparisons_2009}.

By late 2011,\footnote{The beginning of my graduate studies.} the existing body of literature on BMI and total prostate cancer was quite extensive, but at the same time results were inconsistent. In particular, the largest meta-analysis available at that time, which included 27 prospective studies for a total of more than 70 thousand prostate cancer cases, observed no evidence of an association between BMI and total prostate cancer [Relative Risk (RR) for every 5-unit increment: 1.03 (95\% Confidence Interval (CI): 0.99--1.06)]\footnote{The term `relative risk' will be used in this thesis as a generic term for the risk ratio, hazard rate ratio, incidence rate ratio, or odds ratio.} and high between-study heterogeneity \citep{renehan_bodymass_2008}. Similarly, the 2007 Second Expert Report published by WCRF and AICR observed no evidence of an association, based on 24 prospective studies [RR for every 5-unit increment: 1.00 (95\% CI: 0.99--1.01)]. As a consequence, body fatness was listed among those factors for which no conclusions could be reached (strength of the evidence: `limited --- no conclusion') \citep[section~7.14]{wcrf_food_2007}.
The hypothesis that the association between body adiposity and prostate cancer risk could differ according to the aggressiveness of the disease --- therefore suggesting etiological heterogeneity of prostate cancer related to obesity --- repeatedly appeared in the literature during those years \citep{freedland_are_2006, freedland_obesity_2007, hsing_obesity_2007, hsing_androgen_2008}. The available epidemiologic evidence supported this intriguing hypothesis, but at the same time it was still limited. In fact, just a few studies had looked into the association between body adiposity and prostate cancer by subtype of the disease. As a result, the only available meta-analysis that carried out separate analyses by subtype of prostate cancer included 4 case-control and 6 prospective studies (two of which were very small), for a total of less than 2 thousand cases. Despite this, a positive association between BMI and the risk of advanced prostate cancer was observed [RR for every 5-unit increment: 1.12 (95\% CI: 1.01--1.23)] \citep{macinnis_body_2006}. During the years following the meta-analysis by \citet{macinnis_body_2006} and the Second Expert Report \citep{wcrf_food_2007}, a considerable amount of epidemiologic research on body adiposity and prostate cancer has been carried out, including \citetalias{discacciati_body_2011} of this thesis. Furthermore, epidemiologic studies started to systematically report results separately by specific subtypes of prostate cancer, although with the limitations described in section \ref{section:classification}, allowing a clearer picture to emerge. \citetalias{discacciati_body_2012} was the first meta-analysis after the one published by \citet{macinnis_body_2006} to summarize the available evidence on BMI and prostate cancer risk by subtype of the disease. In particular, \citetalias{discacciati_body_2012} was considerably larger, including 13 prospective studies and about 6 times the number of prostate cancer cases. Results showed an increased risk of advanced prostate cancer [RR for every 5-unit increment: 1.09 (95\% CI: 1.02--1.16)] and a decreased risk of localized prostate cancer [RR for every 5-unit increment: 0.94 (95\% CI: 0.91--0.97)]. Lastly, the 2014 Continuous Update Project report showed very similar results to those of \citetalias{discacciati_body_2012} for advanced prostate cancer [RR for every 5-unit increment: 1.08 (95\% CI: 1.04--1.12)], while a non-linear association was observed for localized prostate cancer \citep{wcrf_continuous_2014}. An overview of the results from meta-analyses on BMI and prostate cancer incidence --- including the updated dose--response meta-analysis based on \citetalias{discacciati_body_2012} and described in section \ref{section:results4updated} --- is reported in table \ref{table:summarybmi}. 
\begin{sidewaystable}[]
\centering
\begin{threeparttable}
\caption[Meta-analyses on BMI and incidence of prostate cancer]{Results from dose--response meta-analyses on BMI and incidence of prostate cancer.}
\label{table:summarybmi}
\begin{tabular}{lllccc}
\hline
{\bf Outcome} & \multicolumn{1}{c}{{\bf Year}} & {\bf Authors} & \multicolumn{1}{c}{{\bf Number of studies}} & {\bf RR}\tnote{a,b} & {\bf 95\% CI} \\ \hline
Total prostate cancer & \citeyear{wcrf_food_2007} & \citeauthor{wcrf_food_2007} & 24 cohort & 1.00 & 0.99--1.01 \\
 & \citeyear{renehan_bodymass_2008} & \citeauthor{renehan_bodymass_2008} & 27 cohort & 1.03 & 0.99--1.06 \\
 & \citeyear{wcrf_continuous_2014} & \citeauthor{wcrf_continuous_2014} & 39 cohort & 1.00 & 0.97--1.03 \\
 & & & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
Localized prostate cancer & \citeyear{macinnis_body_2006} & \citeauthor{macinnis_body_2006} & 6 cohort and 4 case-control & 0.96 & 0.89--1.03 \\
 & \citeyear{discacciati_body_2012} & \citeauthor{discacciati_body_2012} \citepalias{discacciati_body_2012} & 12 cohort & 0.94 & 0.91--0.97 \\
 & \citeyear{wcrf_continuous_2014} & \citeauthor{wcrf_continuous_2014} & 14 cohort & ---\tnote{c,d} & --- \\
 & 2015 & Discacciati \citepalias[updated]{discacciati_body_2012} & 14 cohort & ---\tnote{c,e} & --- \\
 & & & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
Advanced prostate cancer & \citeyear{macinnis_body_2006} & \citeauthor{macinnis_body_2006} & 6 cohort and 4 case-control & 1.12 & 1.01--1.23 \\
 & \citeyear{discacciati_body_2012} & \citeauthor{discacciati_body_2012} \citepalias{discacciati_body_2012} & 13 cohort & 1.09 & 1.02--1.16 \\
 & \citeyear{wcrf_continuous_2014} & \citeauthor{wcrf_continuous_2014} & 23 cohort & 1.08 & 1.04--1.12 \\
 & 2015 & Discacciati \citepalias[updated]{discacciati_body_2012} & 18 cohort & 1.07 & 1.03--1.12 \\ \hline
\end{tabular}
\begin{tablenotes}
\item [a] \footnotesize For every 5-unit increment in BMI.
\item [b] \footnotesize Results are from random-effect meta-analyses.
\item [c] \footnotesize No RR for every 5-unit increment in BMI was calculated, as there was evidence of a non-linear relationship.
\item [d] \footnotesize The RRs for 25, 31, and 37 \kgmsq{} versus 21 \kgmsq{} were 1.04 (95\% CI: 1.02--1.05), 0.94 (95\% CI: 0.92--0.96), and 0.79 (95\% CI: 0.75--0.83), respectively ($p_{\textrm{non-linearity}}<0.01$).
\item [e] \footnotesize The RRs for 25, 30, and 35 \kgmsq{} versus 22 \kgmsq{} were 1.01 (95\% CI: 0.99--1.04), 0.93 (95\% CI: 0.90--0.98), and 0.81 (95\% CI: 0.74--0.88), respectively ($p_{\textrm{non-linearity}}<0.001$).
\end{tablenotes}
\end{threeparttable}
\end{sidewaystable}

In conclusion, the official recommendations issued in 2014 by the WCRF and AICR read ``to reduce the risk of developing advanced prostate cancer, we recommend maintaining a healthy weight'' \citep{wcrf_continuous_2014}.

Given that prostate cancer usually has a long latency period, spanning even decades between tumor initiation and diagnosis, body adiposity earlier in life could in theory play an important role in tumor initiation and development. Moreover, the prostate may be more susceptible to carcinogenic exposures during the developmental stages and immediately thereafter \citep{sutcliffe_prostate_2013}. For these reasons, BMI during childhood, puberty, and early adulthood --- defined as ages between 18 and 30 years --- has been investigated by epidemiologic studies, including \citetalias{discacciati_body_2011} of this thesis.
The results, however, are inconsistent \citep{sutcliffe_prostate_2013}. \newpage %TODO: hard coding
{ "alphanum_fraction": 0.7669086466, "avg_line_length": 159.9195402299, "ext": "tex", "hexsha": "1d41c227305f687f754ceef72bb157209869810d", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2022-01-31T10:01:27.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-02T08:59:40.000Z", "max_forks_repo_head_hexsha": "860faf0686c16f7c97865d99d801050a10d2df7c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "anddis/phd-thesis", "max_forks_repo_path": "background.pca.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "860faf0686c16f7c97865d99d801050a10d2df7c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "anddis/phd-thesis", "max_issues_repo_path": "background.pca.tex", "max_line_length": 2309, "max_stars_count": null, "max_stars_repo_head_hexsha": "860faf0686c16f7c97865d99d801050a10d2df7c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "anddis/phd-thesis", "max_stars_repo_path": "background.pca.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6836, "size": 27826 }
% LaTeX rebuttal letter example. % % Copyright 2019 Friedemann Zenke, fzenke.net % % Based on examples by Dirk Eddelbuettel, Fran and others from % https://tex.stackexchange.com/questions/2317/latex-style-or-macro-for-detailed-response-to-referee-report % % Licensed under cc by-sa 3.0 with attribution required. % See https://creativecommons.org/licenses/by-sa/3.0/ % and https://stackoverflow.blog/2009/06/25/attribution-required/ \documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage{lipsum} % to generate some filler text \usepackage{fullpage} \usepackage{hyperref} \usepackage{xcolor} % import Eq and Section references from the main manuscript where needed % \usepackage{xr} % \externaldocument{manuscript} % package needed for optional arguments \usepackage{xifthen} % define counters for reviewers and their points \newcounter{reviewer} \setcounter{reviewer}{0} \newcounter{point}[reviewer] \setcounter{point}{0} % This refines the format of how the reviewer/point reference will appear. \renewcommand{\thepoint}{P\,\thereviewer.\arabic{point}} % command declarations for reviewer points and our responses \newcommand{\reviewersection}{\stepcounter{reviewer} \bigskip \hrule \section*{Reviewer \thereviewer}} \newenvironment{point} {\refstepcounter{point} \bigskip \noindent {\textbf{Reviewer~Point~\thepoint} } ---\ } {\par } \newcommand{\shortpoint}[1]{\refstepcounter{point} \bigskip \noindent {\textbf{Reviewer~Point~\thepoint} } ---~#1\par } \newenvironment{reply} {\medskip \noindent \begin{sf}\textbf{Response}:\ \color{blue} } {\medskip \end{sf}} \newcommand{\shortreply}[2][]{\medskip \noindent \begin{sf}\textbf{Reply}:\ #2 \ifthenelse{\equal{#1}{}}{}{ \hfill \footnotesize (#1)}% \medskip \end{sf}} \begin{document} \section*{Response to the reviewers (GIGA-D-18-00483)} % General intro text goes here This is the response to the reviews of our manuscript \url{https://doi.org/10.5281/zenodo.1966881}, submitted to GigaScience (GIGA-D-18-00483) on 2018-12-08 and addressed in the updated preprint \url{https://doi.org/10.5281/zenodo.3196309}, re-submitted 2019-05-23. We thank the reviewers for their critical assessment of our work. In the following we address their concerns point by point. % Let's start point-by-point with Reviewer 1 \reviewersection % Point one description \begin{point} This paper is well structured and well written but there is a point to be addressed in the evaluation. Table 5 says that the enactment of Alignment Workflow with `cwltool' with enabling provenance capture on MacOS could not be tested due to insufficient hardware resources. Does it mean that the step (I) in `Evaluation Activity' for Alignment Workflow could not be executed? If so, please clarify it. \label{pt:foo} \end{point} % \textbf{PENDING} \begin{reply} We agree with the reviewer on this point. We have included this information in the caption of Table 5. \end{reply} \begin{point} Sometimes `CWLProv' and its following word are accidentally concatenated. - e.g, p2. line 13 or 14 ``CWLProvoutcome'', p2. line 32 ``CWLProv0.6.0'' \label{pt:spelling1} \end{point} \begin{reply} Thanks for pointing this out, we have fixed these typos. \end{reply} \begin{point} Figure 1 uses the spelling `artifacts' in level 1 but this paper mainly uses `artefacts'. It is better to use consistent spelling. \label{pt:spelling2} \end{point} \begin{reply} We have now made sure the spellings are consistent by modifying the diagram. 
\end{reply}

\begin{point}
The left side of Figure 2 shows a GATK workflow but the caption says the right side is a workflow.
\label{pt:figurecaption}
\end{point}

\begin{reply}
We have edited the caption to fix this issue.
\end{reply}

\begin{point}
Table 5 says that the enactment of Somatic Variant Calling Workflow with `toil-cwl-runner' due to a known bug. However, the link in the table is for a issue of `cwltool', not `toil-cwl-runner'. I got confused because the enactment of the same workflow with `cwltool' works. If the linked issue has occurred in `toil-cwl-runner' for the variant calling workflow, I recommend making a link to the issue of `toil-cwl-runner' instead of `cwltool'. It is less confusing.
\label{pt:bar}
\end{point}

\begin{reply}
We faced a similar Docker mount-denied issue when testing this workflow using toil-cwl-runner on Mac. The previous link was intended to give an idea of the nature of the issue and the possible solutions proposed, which we tried but could not get to work. However, we agree that there should be a separate issue for toil-cwl-runner. We have created and linked to a GitHub issue (\url{https://github.com/DataBiosphere/toil/issues/2680}) to avoid confusion.
\end{reply}

\reviewersection

\begin{point}
My main concern regarding this work is that it is often stated that the re-usability of workflow resources (methods/input or output data) is facilitated but it is difficult to evaluate this claim based on CWLProv features and the proposed experiments. It is clear that re-execution of workflows is facilitated but it is unclear to what extent produced/analysed data can be considered for secondary use.
\end{point}

\begin{reply}
It is true that we have not explored all the possible ways data re-use could be facilitated (or hindered) by the CWLProv approach. Exploring this in detail would require developing multiple user scenarios and usability testing with independent domain experts who had not seen the archived workflow before. We believe this would be extensive future work and out of scope for this manuscript.

We have explored some CWLProv consumption scenarios in \textit{cwlprov-py}; we refer to its documentation at \url{https://pypi.org/project/cwlprov/} and \texttt{cwlprov --help}. In particular, we would like to point out that commands like \texttt{cwlprov inputs} and \texttt{cwlprov outputs} can use identifiers of individual steps (CWLProv level 1) and nested workflows (CWLProv level 2); these would be harder to represent in a pure file structure without significant storage duplication or creative use of symlinks. Options like \texttt{cwlprov runtimes} and \texttt{cwlprov derived} calculate secondary information on demand based on the PROV trace. Further work would be needed to build a more researcher-oriented interface based on this tool (e.g. hard-coded for a particular workflow). We have added an explanation of this to the end of section \textbf{Evaluation results}.
\end{reply}

\begin{point}
In addition, the "pragmatic" interoperability should refer to top-level provenance and thus domain-specific annotations referring to the scientific context of the computational experiment. The experiments don't clearly show how CWLprov goes into the direction of (still ambitious and challenging) domain-specific provenance.
\label{pt:pragmatic}
\end{point}

\begin{reply}
The provenance framework and CWLProv as a standard can, in principle, achieve all three states of interoperability.
If we achieve Level 3 by recording domain-specific information and contextual knowledge about the experiment, the data used, the output produced, and the methods employed in the process, we will be able to satisfy the requirements of pragmatic interoperability. However, the current implementation/prototype using cwltool described in this paper has achieved up to Level 2, and we are working (as described in section \textbf{Provenance Profile Augmented with Domain Knowledge}) to implement Level 3 in practice; see \url{https://github.com/common-workflow-language/cwlprov/issues/2} for details.

Throughout the manuscript we have now more clearly described the state of the practical implementation, future directions, the conceptual maturity of the provenance framework and CWLProv, and finally the state of pragmatic interoperability. Having addressed this comment, we believe that the remaining comments about pragmatic interoperability by the reviewer are hopefully resolved. We will refer to this comment below in response to other pragmatic interoperability comments.
\end{reply}

\begin{point}
I've also a technical concern regarding the FAIRness of the approach since some of the requirements could be addressed following the (5-star) Linked Data principles. This point should be addressed in the discussion.
\end{point}

\begin{reply}
We have added a discussion of the 5-star Linked Data principles in section \textbf{Levels of Provenance and Resource Sharing} (second-to-last paragraph).
\end{reply}

\begin{point}
Finally, I tried to browse the research objects provided as supporting material but unfortunately I could not access the resource. Logs are provided at the end of the review.
\end{point}

\begin{reply}
As indicated under \textbf{Availability of supporting data and materials}, we have mirrored the research objects on Zenodo as well; in addition, we contacted Mendeley Data to raise the accessibility issue.
\end{reply}

\subsection*{Introduction}

\begin{point}
In Key Points, 4th point, space is missing in ``CWLProvoutcome''
\end{point}

\begin{reply}
Fixed.
\end{reply}

\subsection*{Background and related work}

\begin{point}
The first paragraph of related works is too long.
\end{point}

\begin{reply}
We have shortened it by removing some details about the existing studies and mentioning them only briefly.
\end{reply}

\begin{point}
``co-installability'' -\textgreater what does it mean?
\end{point}

\begin{reply}
This term described how software package managers such as Conda and Debian help manage the installation of multiple versions of the same software, or the installation of a set of software required for a given analysis. However, while addressing the comment that the background information is too long, we have rewritten the section ``\textbf{Workflow Software Environment Capture}'' and as a result no longer use this term.
\end{reply}

\begin{point}
Some references could be added to works addressing the sharing of domain-specific annotated provenance, for instance, https://doi.org/10.1186/2041-1480-5-28, https://doi.org/10.1016/j.websem.2014.07.001, or "From Scientific Workflow Patterns to 5-star Linked Open Data" in TaPP'16.
\end{point}

\begin{reply}
We agree with the reviewer that these are related citations. We have added Clark et al. (2014) and Gaignard et al. (2016) in section \textbf{Provenance Capture \& Standardization} and Gaignard et al. (2014) in section \textbf{Level 3}.
\end{reply}

\subsection*{Levels of Provenance and resource sharing}

\begin{point}
``...
in Figure 1 that all WMs can benefit from and conform to without additional technical overhead'' -\textgreater difficult to believe that there is no technical overhead
\end{point}

\begin{reply}
We have changed the statement by replacing ``no technical overhead'' with ``minimum technical overhead''. If these levels are kept in mind from the beginning while designing a new workflow or a new system, it is possible to achieve them with very little technical overhead.
\end{reply}

\begin{point}
Table 1 -\textgreater the list of recommendations is quite long, some of the recommendations are overlapping (R9 and R19 could be merged, as well as R6 and R7). Grouping them, possibly through the proposed levels could ease the reading and understanding of these recommendations. In addition, R18 is too vague.
\end{point}

\begin{reply}
We agree with merging R6 and R7, as both deal with workflow annotation. However, we would still like to keep R9 and R19 separate, as one of them refers to the software environment and the other (R19) refers to the hardware resources. Taking the reviewer's feedback into consideration, we have loosely clustered the related recommendations into different classes. However, clustering based on the levels would amount to reverse engineering, as the levels were derived from these recommendations.
\end{reply}

\begin{point}
Figure1 -\textgreater in Level 0 ``Results interpretation is questionable"'' scientists will need some context (Level 3) to understand the produced results, he/she may be lost in all fine-grained provenance, and extracting important parameters would certainly be time-consuming and require technical expertise.
\end{point}

\begin{reply}
We agree with the reviewer that to make complete sense of the results, the end user must have some domain information provided with the results. However, given the expertise level of the end user, it is possible to inspect the results and hence partially interpret some aspects of them. Therefore, we have modified Figure 1 to use the term ``Partial interpretation of results'' instead of ``Results interpretation''.
\end{reply}

\begin{point}
R2, R13, R16-18 are not mentioned in the Levels 0-3 descriptions.
\end{point}

\begin{reply}
After merging the previous R6 \& R7, these numbers are now R2, R12, and R15--17. We have included R12, R16, and R17 in section \textbf{Level 1}. We have added R15 in section \textbf{Level 0}, as open licensing should be applied at the lowest level and hence is applicable to all the levels above.
\end{reply}

\begin{point}
Level 2 paragraph 2: Re-enactment -\textgreater this feature already exists in make-like systems, such as snakemake, actively developed and used in the bioinformatics community.
\end{point}

\begin{reply}
We have acknowledged the fact mentioned by the reviewer and discussed it in the same paragraph (section \textbf{Level 2}).
\end{reply}

\begin{point}
Level 2: "meaningful for a user" -\textgreater which kind of user?
\end{point}

\begin{reply}
We have clarified this statement.
\end{reply}

\subsection*{CWLProv 0.6.0}

\begin{point}
``we have reused the BDBag approach based on BagIt'' -\textgreater a short example of a Bag would have been useful.
\end{point}

\begin{reply}
We have added a box to explain BagIt and simplified the text.
\end{reply}

\begin{point}
``We utilise mainly two serialisations of PROV […]'' -\textgreater why not using PROV-O to ease the linking of provenance information to other datasets as well as its analytics through querying or logical reasoning. This would also enhance findability on the web.
This point should be part of the discussion.
\end{point}

\begin{reply}
We have expanded on how we generate several PROV-O serializations (Turtle, N-Triples, JSON-LD), and why we don't require all of these in other CWLProv implementations.
\end{reply}

\begin{point}
``workflow/'' -\textgreater the paragraph on ``executable workflow objects'' is hard to follow.
\end{point}

\begin{reply}
We have rewritten this paragraph to better explain the reasoning, using examples.
\end{reply}

\begin{point}
``metadata/'' -\textgreater the discussion on URI schemes is hard to follow, again an example would help.
\end{point}

\begin{reply}
We have rewritten this paragraph to better explain the reasoning, using examples.
\end{reply}

\begin{point}
``Retrospective provenance Profile'' -\textgreater is the production of wfdesc / wfprov RDF data automatic or manual?
\end{point}

\begin{reply}
The production of data about workflow provenance is automatic.
\end{reply}

\subsection*{Practical realisation of CWLProv}

\begin{point}
Figure 5 -\textgreater what does ``relativised job object'' mean?
\end{point}

\begin{reply}
We have replaced ``relativised job object'' with ``relativised file paths for inputs''. It refers to the input configuration file with input data paths relative to the RO they are part of (instead of hard-coded file paths).
\end{reply}

\begin{point}
Figure 5 -\textgreater which steps are the most costly (time/space)
\end{point}

\begin{reply}
With the current proof-of-concept implementation, copying input and output data in the ``Content addressable Input artefacts'' and ``Add content addressable outputs'' steps of Figure 5 will take the most time as well as space in the case of large data files. A production-quality implementation would not have these overheads.
\end{reply}

\subsection*{CWLProv evaluation}

\begin{point}
CWLProv supports syntactic, semantic, and pragmatic -\textgreater since pragmatic refers to scientific context/claims, etc., it is unclear how pragmatic interoperability is addressed.
\end{point}

\begin{reply}
We have addressed this comment in response to \ref{pt:pragmatic}.
\end{reply}

\begin{point}
Why choosing these 3 bioinformatics workflows, do they cover different aspects of the evaluation? Maybe a single in-depth description would be enough.
\end{point}

\begin{reply}
We have added a few lines describing the choice of these three workflows in section \textbf{CWLProv Evaluation with Bioinformatics Workflows}.
\end{reply}

\begin{point}
``In addition, the resource requirement'' -\textgreater this is a good example for R19, a link to R19 would be useful here.
\end{point}

\begin{reply}
Thanks for pointing this out; we have added a reference to R19 where the reviewer suggested, as follows: \textit{``In addition, the resource requirements (identified in \textit{R19-resource-use} and [...]) should also be satisfied by choosing a system with enough compute and storage resources for successful enactment.''}
\end{reply}

\begin{point}
The re-enactment scenario is clear as well as the provenance queries scenarios but the interoperability evaluation is less clear towards the ``pragmatic assumption'' and domain annotations.
\end{point}

\begin{reply}
We have addressed this comment in response to \ref{pt:pragmatic}.
\end{reply}

\begin{point}
Temporal and spatial overhead -\textgreater For the RNAseq and Alignment workflows, the Prov overhead appears as quite noticeable. Which part of the process (Fig. 5) would explain this difference?
\end{point}

\begin{reply}
In the current proof-of-concept implementation with cwltool, we are keeping a copy of input and output data in the research object. Copying the data files (at the ``Content addressable Input artefacts'' and ``Add content addressable outputs'' stages in Fig. 5), which are larger in the case of these two workflows (as compared with the somatic workflow, which uses the small test data provided with the workflow), contributes to the time difference between the executions with and without provenance. This fact is mentioned in paragraph 3 of section \textbf{Temporal and Spatial Overhead with Provenance}, and possible solutions leading to potential future directions are described in section \textbf{Big -omics Data}. With these future directions implemented, we think there will not be any overhead with respect to this aspect of the process.
\end{reply}

\subsection*{Discussion and Future Directions}

\begin{point}
``Selected jobs provenance'': this paragraph is a bit confusing since the lack of completeness of provenance was identified as the main issue, it highlights that this complete capture approach may raise human-tractability issues.
\end{point}

\begin{reply}
We have rewritten the paragraph (\textbf{Improving \textit{CWLProv} efficiency with selective provenance capture}) to indicate that the main concern is the storage inefficiency of keeping shim step outputs; we also added the reviewer's point that collapsing ``boring'' parts can improve human-tractability.
\end{reply}

\begin{point}
In addition, users can add domain-specific annotations to data -\textgreater How? how difficult/easy it is?
\end{point}

\begin{reply}
We have clarified this statement as ``In addition, users can add standardised domain-specific annotations to data and workflows incorporating the constructs defined by external ontologies (e.g. EDAM) to enhance understanding of the shared specification and the resources it refers to.'' This is another point that would be best addressed by best-practice guides like \url{https://view.commonwl.org/about#format} to indicate where to add which annotations.
\end{reply}

\end{document}
{ "alphanum_fraction": 0.7768872414, "avg_line_length": 53.3770053476, "ext": "tex", "hexsha": "6a17e8ae81f48d6b6c6ac62cb59d10441b33dcf4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "stain/cwlprov-paper-gigascience", "max_forks_repo_path": "rebuttal1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "stain/cwlprov-paper-gigascience", "max_issues_repo_path": "rebuttal1.tex", "max_line_length": 827, "max_stars_count": null, "max_stars_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "stain/cwlprov-paper-gigascience", "max_stars_repo_path": "rebuttal1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4833, "size": 19963 }
%!TEX root = ../COSCFair.tex
\section{The COSCFair Framework}
\label{sec:framework}
In this section, we explain how the COSCFair framework is constructed and introduce the types of components used to implement each step. Our framework consists of four main steps and components. It starts with a pre-processing step for the dataset, where we identify the \textbf{subgroup IDs} of each sample. Then, we apply a clustering algorithm to the training set to discover the natural groups (clusters) it contains, where the samples within each group are closer to each other than to samples in other groups. The next step is dividing the training set into \textbf{cluster sets}, where the training samples are grouped according to their cluster IDs. Then, we oversample each of these cluster sets based on the subgroup IDs of the samples to achieve an equal number of samples for each subgroup that exists in the cluster. The final step is the classification step, where classifier training and class label prediction for the test set take place. Here, we have studied three possible strategies for classifier training and label prediction, which are discussed further in Section \ref{ssec:str_classf}. The pseudocode of COSCFair is given in Algorithm \ref{alg1}.

\subsection{Data Preparation}\label{ssec:dataprep}
The data preparation step consists of several sub-steps: identifying the subgroup IDs, adding this information as a new variable to the dataset, and splitting the dataset into training and test sets. Identifying the subgroup labels of each sample is one of the most important components of the framework, which we discuss here in more detail. The subgroups in a dataset are derived from the combinations of the binary sensitive attributes and the binary decision label, which corresponds to 2\textsuperscript{n} subgroups, where \textit{n} is the total number of these binary variables. In our experiments, we have two binary sensitive attributes and one binary class label in all the datasets, which corresponds to eight subgroups per dataset. Without considering the class label (2\textsuperscript{n-1} combinations), these subgroups are later identified as the privileged and unprivileged subgroups. For example, there are four main subgroups in each dataset that we use in our experiments. There are two base groups that are always privileged or always unprivileged. If a subgroup has unfavorable values in both sensitive attributes, that subgroup becomes the most unprivileged subgroup in the dataset. If a subgroup has favorable values for both sensitive attributes, then that subgroup becomes the most privileged subgroup. The other subgroups, which have different combinations of favorable and unfavorable values for the sensitive attributes, should be interpreted as both potentially privileged and unprivileged subgroups. Thus, while investigating their position in a dataset, they should be tested as both privileged and unprivileged groups (see Table \ref{Table5}). After the subgroup ID variable is added, the sensitive attributes are removed from the dataset since the new subgroup IDs contain the information regarding these sensitive attributes. Finally, if a dataset contains a set of numerical variables, these variables should be standardized in the training and test sets separately so that they do not dominate the other variables in the clustering and classification steps.
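As a minimal illustration of this step, the sketch below builds the subgroup IDs from the binary sensitive attributes and the binary class label, drops the sensitive attributes, and standardizes the numerical features. The column and file names are hypothetical placeholders, the sensitive attributes and the label are assumed to be already encoded as 0/1, and the exact standardization protocol may differ from our implementation.

\begin{verbatim}
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def add_subgroup_ids(df, sensitive_cols, label_col):
    # Encode every combination of the binary sensitive attributes and the
    # class label as one of 2^n subgroup IDs (n = len(sensitive_cols) + 1).
    bits = df[sensitive_cols + [label_col]].astype(int).astype(str)
    df = df.copy()
    df["subgroup_id"] = bits.agg("".join, axis=1).apply(lambda b: int(b, 2))
    return df

df = pd.read_csv("dataset.csv")                      # hypothetical file name
df = add_subgroup_ids(df, ["sens_a", "sens_b"], "label")

# Sensitive attributes are dropped; the subgroup ID now carries their information.
X = df.drop(columns=["sens_a", "sens_b", "label"])
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Standardize numerical features in the training and test sets separately,
# leaving the subgroup ID untouched (it is not used as a model feature).
num_cols = X_train.select_dtypes("number").columns.drop("subgroup_id")
X_train[num_cols] = StandardScaler().fit_transform(X_train[num_cols])
X_test[num_cols] = StandardScaler().fit_transform(X_test[num_cols])
\end{verbatim}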
\input{Sections/pseudocode.tex}\label{alg1}
\subsection{Clustering}\label{ssec:clust}
The clustering step in the COSCFair framework is implemented with \textbf{fuzzy c-means clustering} \cite{fuzzyc}, a soft clustering algorithm that allows each sample in a dataset to be assigned to more than one cluster. In fuzzy clustering, each sample belongs to each cluster with a certain membership probability, and these probabilities add up to 1. The core idea of the algorithm is to assign samples to clusters such that the samples in the same cluster are as similar as possible, while the samples in different clusters are as dissimilar as possible. Clusters are formed based on a distance measure (such as Euclidean distance), which is used to calculate (and minimize the sum of) the distances between the samples and the assigned cluster centroids. Thus, it is important to apply standardization to the numerical features of the datasets in the data preparation step to prevent the unjustified domination of these features. Fuzzy c-means requires the number of clusters to be given as an input. Thus, we run fuzzy c-means multiple times using a predefined list of values for the number of clusters. In each run, we compute the \textbf{fuzzy partition coefficient} (FPC) and the \textbf{silhouette score}. We choose the number that yields the best combination of these two values as the optimal number of clusters. Before using fuzzy c-means, it is recommended to use a dimensionality reduction technique such as principal component analysis (PCA) if the whole dataset consists of numerical variables, or Factor Analysis of Mixed Data (FAMD) if the dataset consists of both categorical and numerical variables.

\subsection{Oversampling}\label{ssec:oversamp}
After splitting the training set into cluster sets using the cluster memberships of the training samples, we oversample each cluster set, where the oversampling criterion is the subgroup IDs of the samples (2\textsuperscript{n}). We use the subgroup IDs to oversample so that we can obtain an equal representation for each subgroup in each cluster, where all subgroups have precisely the same number of samples with both positive and negative outcomes. We use the Synthetic Minority Oversampling Technique (SMOTE) \cite{smote} to oversample our cluster sets, although different oversampling algorithms can also be used in this step. SMOTE creates new synthetic samples by drawing a line between two samples of the class that needs to be oversampled that are close to each other in feature space, and then producing the synthetic samples along these lines. Since the synthetic samples are created based on the line between two existing samples, these two samples must be close enough to each other to ensure a good quality of synthetic sample production. Therefore, we cluster the training set into smaller cluster sets in which the samples used for oversampling are closer to each other, which decreases the distances between the samples that belong to the same subgroup and improves the quality of the oversampling procedure. The main reason why oversampling is used to mitigate bias is that most of the datasets containing bias are actually imbalanced, where different subgroups are not represented equally in terms of the number of positive and negative samples per subgroup. This problem can easily be spotted in Table \ref{Table2} for all of the datasets.
The most privileged subgroups have the largest number of samples in the German and Adult datasets, and these subgroups also contain more positively labeled samples than the other subgroups. The situation is different in the COMPAS dataset, where the most privileged group has the fewest samples while the most unprivileged group has the most. However, it is important to note that while all other subgroups have more positively labeled samples, the most unprivileged group has more negatively labeled samples, which is still an imbalance problem that requires oversampling for an equal representation of each subgroup with both positive and negative outcomes.
\input{Tables/Table2}
\subsection{Classification}\label{ssec:str_classf}
In this step, a classification algorithm of choice or multiple classification algorithms of the same type (e.g., logistic regression) are trained, depending on the strategy that is followed. After the classifiers are trained, the class labels of the test set are predicted. However, every strategy has its own unique prediction procedure, which is described in detail below. We should note that during classifier training and test set prediction, the sensitive attributes and the subgroup IDs are not used, which ensures \textit{Fairness Through Unawareness}.

\stitle{Strategy 1:} This strategy is the most similar to mainstream classifier training and prediction. After the cluster sets are oversampled, they are concatenated back together to form a single large training set. Then, only one classifier is trained with this training set and the class labels are predicted based on only this classifier.

\stitle{Strategy 2:} This strategy requires training multiple classifiers, which means that one classifier is trained based on each oversampled cluster set. Before the class labels of the test set are predicted, each sample's cluster membership is predicted using the fuzzy c-means clustering object created in the second step. The clustering object retrieves the ID of the cluster in which the sample has the highest membership probability. At the beginning of this process, the classifiers that are trained based on cluster sets which do not contain samples from the same subgroup as the test sample are discarded. After that, the remaining classifiers are considered for the rest of the process. Next, the classifier that is trained based on the cluster set which has the same ID as the predicted cluster ID for the given test sample is used to predict the class label of that sample.

\stitle{Strategy 3:} Similar to the second strategy, our final strategy also requires training multiple classifiers using the oversampled cluster sets. However, instead of choosing one classifier this time, all the trained classifiers are taken into consideration while predicting the class label of a test sample. First, the fuzzy c-means clustering object is used to retrieve the probabilities of the test sample belonging to each cluster. After that, some of the cluster IDs are discarded if their cluster sets did not contain samples from the same subgroup as the test sample. Next, only the classifiers trained with the remaining cluster sets are considered. Finally, in the prediction step, the cluster membership probabilities of the sample that are retrieved for the remaining clusters are used as weights for the predicted class labels from the corresponding classifiers.
The weighting is applied to the predicted outcomes by dividing the membership probability of each eligible cluster \textit{c} by the sum of the probabilities of all eligible clusters, and multiplying the result by the predicted outcome of the classifier that is trained on the cluster set having the same ID as cluster \textit{c}. Then, all of the weighted prediction values are summed into a single value. If this value is greater than or equal to 0.5, the weighted prediction label becomes 1, otherwise 0. Our experimental results show that the best strategy for the COSCFair framework is Strategy 3, and thus it is used in the final comparison with the other baseline methods.
% This weighing process can be formulated as:
% \[\sum_{i=0}^{i_{clusts}}[(Prob_{cluster_{i}}/\sum_{n=0}^{n_{clusts}}Prob_{cluster_{n}})*PredictedLabel_{classifier_{i}}]\]
% Where i\textsubscript{clusts} indicates the number of eligible clusters left after some of the clusters are discarded and it is equal to n\textsubscript{clusts}.
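To make the weighting procedure described above concrete, the following minimal Python sketch shows one way the weighted vote could be computed for a single test sample. It is not taken from the COSCFair implementation; the inputs (a vector of fuzzy c-means membership probabilities, the 0/1 predictions of the per-cluster classifiers, and the indices of the eligible clusters) are hypothetical names introduced only for this example.

\begin{verbatim}
import numpy as np

def strategy3_predict(memberships, predictions, eligible):
    # memberships: fuzzy c-means membership probabilities of one test
    #              sample, one entry per cluster.
    # predictions: 0/1 class labels predicted by the classifier trained
    #              on each oversampled cluster set.
    # eligible:    indices of clusters whose training sets contained
    #              samples from the test sample's subgroup.
    memberships = np.asarray(memberships, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    eligible = np.asarray(eligible, dtype=int)

    # Re-normalize the membership probabilities over the eligible
    # clusters only, so the weights sum to 1.
    weights = memberships[eligible] / memberships[eligible].sum()

    # Weighted sum of the per-cluster predictions, thresholded at 0.5.
    score = float(np.sum(weights * predictions[eligible]))
    return 1 if score >= 0.5 else 0

# Example: three clusters; cluster 1 lacks the test sample's subgroup.
print(strategy3_predict([0.5, 0.2, 0.3], [1, 0, 0], [0, 2]))  # -> 1
\end{verbatim}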
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-8-4-0}Version 8.4.0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent Release Notes:

\begin{itemize}

\item HTCondor version 8.4.X not yet released.
%\item HTCondor version 8.4.X released on Month Date, 2016.

\end{itemize}

\noindent New Features:

\begin{itemize}

\item None.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item None.

\end{itemize}
% -------------------------------------------------------- %
% Regular Boolean MRSW Register Lock
% by: Isai Barajas Cicourel
% -------------------------------------------------------- %

% Document Start
\section{\textbf{Regular Boolean MRSW Register}}

% -------------------------------------------------------- %
% Particular Case

\subsection{Particular Case}

\par
A register is an object that encapsulates a value that can be observed by a \textit{read()} method and modified by a \textit{write()} method.

\par
For Boolean registers, the only difference between safe and regular arises when the newly written value $x$ is the same as the old. A regular register can only return $x$, while a safe register may return either Boolean value.
\par

% -------------------------------------------------------- %
% Solution Information

\subsection{Solution}

\par
A register that implements the \textit{Register<Boolean>} interface is called a Boolean register (we sometimes use 1 and 0 as synonyms for true and false). This register uses the \textit{ThreadLocal} class from the Java \textit{java.lang} package, which provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one (via its get or set method) has its own, independently initialized copy of the variable. \textit{ThreadLocal} instances are typically private static fields in classes that wish to associate state with a thread.
\par

\begin{lstlisting}[frame=single,breaklines=true]
public class RegBooleanMRSWRegister implements Register<Boolean> {

  ThreadLocal<Boolean> last;
  private boolean s_value;

  RegBooleanMRSWRegister(int capacity) {
    this.last = new ThreadLocal<Boolean>() {
      protected Boolean initialValue() { return false; };
    };
  }

  public void write(Boolean x) {
    if (x != last.get()) {  // if new value different ...
      last.set(x);          // remember new value
      s_value = x;          // update register
    }
  }

  public Boolean read() {
    return s_value;
  }
}
\end{lstlisting}

% -------------------------------------------------------- %
% Experiment

\subsection{Experiment Description}

\par
The test creates $8$ threads that read the value of a register. All threads have to be able to read the current value of the register even when the previous value is the opposite of the current one. The expected result must be a $1$ (true) value.
\par

% -------------------------------------------------------- %
% Results

\subsection{Observations and Interpretations}

\par
The tests executed as expected and no errors were found.
\batchmode
%This Latex file is machine-generated by the BNF-converter

\documentclass[a4paper,11pt]{article}
\author{Philipp R\"ummer\\
Department of Information Technology, Uppsala University, Sweden}
\title{The Princess Input Language\\ (ApInput)}
\setlength{\parindent}{0mm}
\setlength{\parskip}{1mm}
\begin{document}
\maketitle

\newcommand{\emptyP}{\mbox{$\epsilon$}}
\newcommand{\terminal}[1]{\mbox{{\texttt {#1}}}}
\newcommand{\nonterminal}[1]{\mbox{$\langle \mbox{{\sl #1 }} \! \rangle$}}
\newcommand{\arrow}{\mbox{::=}}
\newcommand{\delimit}{\mbox{$|$}}
\newcommand{\reserved}[1]{\mbox{{\texttt {#1}}}}
\newcommand{\literal}[1]{\mbox{{\texttt {#1}}}}
\newcommand{\symb}[1]{\mbox{{\texttt {#1}}}}

This document was automatically generated by the {\em BNF-Converter}, with some manual modifications. It was generated together with the lexer, the parser, and the abstract syntax module, which guarantees that the document matches with the implementation of the language.

\section*{The lexical structure of ApInput}

\subsection*{Identifiers}

Identifiers \nonterminal{Ident} are unquoted strings beginning with a letter, followed by any combination of letters, digits, and the characters {\tt \_ '}, reserved words excluded.

\subsection*{Literals}

DecIntLit literals are recognized by the regular expression\\ \({\nonterminal{digit}}+\)

HexIntLit literals are recognized by the regular expression\\ \((\{\mbox{``0x''}\} \mid \{\mbox{``0X''}\}) [\mbox{``0123456789ABCDEFabcdef''}]+\)

\subsection*{Reserved words and symbols}

The set of reserved words is the set of terminals appearing in the grammar. Those reserved words that consist of non-letter characters are called symbols, and they are treated in a different way from those that are similar to identifiers. The lexer follows rules familiar from languages like Haskell, C, and Java, including longest match and spacing conventions.
The reserved words used in ApInput are the following: \\ \begin{tabular}{lll} {\reserved{$\backslash$abs}} &{\reserved{$\backslash$as}} &{\reserved{$\backslash$distinct}} \\ {\reserved{$\backslash$else}} &{\reserved{$\backslash$eps}} &{\reserved{$\backslash$existentialConstants}} \\ {\reserved{$\backslash$exists}} &{\reserved{$\backslash$forall}} &{\reserved{$\backslash$functions}} \\ {\reserved{$\backslash$if}} &{\reserved{$\backslash$interpolant}} &{\reserved{$\backslash$max}} \\ {\reserved{$\backslash$metaVariables}} &{\reserved{$\backslash$min}} &{\reserved{$\backslash$negMatch}} \\ {\reserved{$\backslash$noMatch}} &{\reserved{$\backslash$part}} &{\reserved{$\backslash$partial}} \\ {\reserved{$\backslash$predicates}} &{\reserved{$\backslash$problem}} &{\reserved{$\backslash$relational}} \\ {\reserved{$\backslash$size}} &{\reserved{$\backslash$sorts}} &{\reserved{$\backslash$then}} \\ {\reserved{$\backslash$universalConstants}} &{\reserved{$\backslash$variables}} &{\reserved{bool}} \\ {\reserved{bv}} &{\reserved{false}} &{\reserved{inf}} \\ {\reserved{int}} &{\reserved{mod}} &{\reserved{nat}} \\ {\reserved{signed}} &{\reserved{true}} & \\ \end{tabular}\\ The symbols used in ApInput are the following: \\ \begin{tabular}{lll} {\symb{\{}} &{\symb{\}}} &{\symb{;}} \\ {\symb{{$<$}{$-$}{$>$}}} &{\symb{{$-$}{$>$}}} &{\symb{{$<$}{$-$}}} \\ {\symb{{$|$}}} &{\symb{{$|$}{$|$}}} &{\symb{\&}} \\ {\symb{\&\&}} &{\symb{!}} &{\symb{[}} \\ {\symb{]}} &{\symb{{$<$}{$<$}}} &{\symb{{$>$}{$>$}}} \\ {\symb{{$+$}}} &{\symb{{$-$}}} &{\symb{*}} \\ {\symb{/}} &{\symb{\%}} &{\symb{{$+$}{$+$}}} \\ {\symb{\~{}}} &{\symb{\^}} &{\symb{(}} \\ {\symb{)}} &{\symb{.}} &{\symb{:}} \\ {\symb{{$=$}}} &{\symb{!{$=$}}} &{\symb{{$<$}{$=$}}} \\ {\symb{{$>$}{$=$}}} &{\symb{{$<$}}} &{\symb{{$>$}}} \\ {\symb{,}} & & \\ \end{tabular}\\ \subsection*{Comments} Single-line comments begin with {\symb{//}}. \\Multiple-line comments are enclosed with {\symb{/*}} and {\symb{*/}}. \section*{The syntactic structure of ApInput} Non-terminals are enclosed between $\langle$ and $\rangle$. The symbols {\arrow} (production), {\delimit} (union) and {\emptyP} (empty rule) belong to the BNF notation. 
All other symbols are terminals.\\ \begin{tabular}{lll} {\nonterminal{Entry}} & {\arrow} &{\nonterminal{API}} \\ & {\delimit} &{\nonterminal{Expression}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{API}} & {\arrow} &{\nonterminal{ListBlock}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListBlock}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{Block}} {\nonterminal{ListBlock}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Block}} & {\arrow} &{\terminal{$\backslash$problem}} {\terminal{\{}} {\nonterminal{Expression}} {\terminal{\}}} \\ & {\delimit} &{\terminal{$\backslash$sorts}} {\terminal{\{}} {\nonterminal{ListDeclSortC}} {\terminal{\}}} \\ & {\delimit} &{\terminal{$\backslash$functions}} {\terminal{\{}} {\nonterminal{ListDeclFunC}} {\terminal{\}}} \\ & {\delimit} &{\nonterminal{ExConstantsSec}} {\terminal{\{}} {\nonterminal{ListDeclConstantC}} {\terminal{\}}} \\ & {\delimit} &{\terminal{$\backslash$universalConstants}} {\terminal{\{}} {\nonterminal{ListDeclConstantC}} {\terminal{\}}} \\ & {\delimit} &{\terminal{$\backslash$predicates}} {\terminal{\{}} {\nonterminal{ListDeclPredC}} {\terminal{\}}} \\ & {\delimit} &{\terminal{$\backslash$interpolant}} {\terminal{\{}} {\nonterminal{ListInterpBlockC}} {\terminal{\}}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ExConstantsSec}} & {\arrow} &{\terminal{$\backslash$existentialConstants}} \\ & {\delimit} &{\terminal{$\backslash$metaVariables}} \\ & {\delimit} &{\terminal{$\backslash$variables}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{InterpBlockC}} & {\arrow} &{\nonterminal{ListIdent}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListInterpBlockC}} & {\arrow} &{\nonterminal{InterpBlockC}} \\ & {\delimit} &{\nonterminal{InterpBlockC}} {\terminal{;}} {\nonterminal{ListInterpBlockC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression}} & {\arrow} &{\nonterminal{Expression}} {\terminal{{$<$}{$-$}{$>$}}} {\nonterminal{Expression1}} \\ & {\delimit} &{\nonterminal{Expression1}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression1}} & {\arrow} &{\nonterminal{Expression2}} {\terminal{{$-$}{$>$}}} {\nonterminal{Expression1}} \\ & {\delimit} &{\nonterminal{Expression1}} {\terminal{{$<$}{$-$}}} {\nonterminal{Expression2}} \\ & {\delimit} &{\nonterminal{Expression2}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression2}} & {\arrow} &{\nonterminal{Expression2}} {\terminal{{$|$}}} {\nonterminal{Expression3}} \\ & {\delimit} &{\nonterminal{Expression2}} {\terminal{{$|$}{$|$}}} {\nonterminal{Expression3}} \\ & {\delimit} &{\nonterminal{Expression3}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression3}} & {\arrow} &{\nonterminal{Expression3}} {\terminal{\&}} {\nonterminal{Expression4}} \\ & {\delimit} &{\nonterminal{Expression3}} {\terminal{\&\&}} {\nonterminal{Expression4}} \\ & {\delimit} &{\nonterminal{Expression4}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression4}} & {\arrow} &{\terminal{!}} {\nonterminal{Expression4}} \\ & {\delimit} &{\nonterminal{Quant}} {\nonterminal{DeclBinder}} {\nonterminal{Expression4}} \\ & {\delimit} &{\terminal{$\backslash$eps}} {\nonterminal{DeclSingleVarC}} {\terminal{;}} {\nonterminal{Expression4}} \\ & {\delimit} &{\terminal{\{}} {\nonterminal{ListArgC}} {\terminal{\}}} {\nonterminal{Expression4}} \\ & {\delimit} &{\terminal{$\backslash$part}} {\terminal{[}} {\nonterminal{Ident}} {\terminal{]}} {\nonterminal{Expression4}} \\ & {\delimit} &{\nonterminal{Expression5}} \\ \end{tabular}\\ 
\begin{tabular}{lll} {\nonterminal{Expression5}} & {\arrow} &{\nonterminal{Expression6}} {\nonterminal{RelSym}} {\nonterminal{Expression6}} \\ & {\delimit} &{\nonterminal{Expression6}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression6}} & {\arrow} &{\nonterminal{Expression6}} {\terminal{{$<$}{$<$}}} {\nonterminal{Expression7}} \\ & {\delimit} &{\nonterminal{Expression6}} {\terminal{{$>$}{$>$}}} {\nonterminal{Expression7}} \\ & {\delimit} &{\nonterminal{Expression7}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression7}} & {\arrow} &{\nonterminal{Expression7}} {\terminal{{$+$}}} {\nonterminal{Expression8}} \\ & {\delimit} &{\nonterminal{Expression7}} {\terminal{{$-$}}} {\nonterminal{Expression8}} \\ & {\delimit} &{\nonterminal{Expression8}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression8}} & {\arrow} &{\nonterminal{Expression8}} {\terminal{*}} {\nonterminal{Expression9}} \\ & {\delimit} &{\nonterminal{Expression8}} {\terminal{/}} {\nonterminal{Expression9}} \\ & {\delimit} &{\nonterminal{Expression8}} {\terminal{\%}} {\nonterminal{Expression9}} \\ & {\delimit} &{\nonterminal{Expression8}} {\terminal{{$+$}{$+$}}} {\nonterminal{Expression9}} \\ & {\delimit} &{\nonterminal{Expression9}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression9}} & {\arrow} &{\terminal{$\backslash$as}} {\terminal{[}} {\nonterminal{Type}} {\terminal{]}} {\nonterminal{Expression9}} \\ & {\delimit} &{\terminal{{$+$}}} {\nonterminal{Expression10}} \\ & {\delimit} &{\terminal{{$-$}}} {\nonterminal{Expression10}} \\ & {\delimit} &{\terminal{\~{}}} {\nonterminal{Expression10}} \\ & {\delimit} &{\nonterminal{Expression10}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression10}} & {\arrow} &{\nonterminal{Expression10}} {\terminal{\^}} {\nonterminal{Expression11}} \\ & {\delimit} &{\nonterminal{Expression11}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Expression11}} & {\arrow} &{\terminal{$\backslash$if}} {\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} {\terminal{$\backslash$then}} {\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} {\terminal{$\backslash$else}} {\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} \\ & {\delimit} &{\terminal{$\backslash$abs}} {\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} \\ & {\delimit} &{\terminal{$\backslash$max}} {\nonterminal{OptArgs}} \\ & {\delimit} &{\terminal{$\backslash$min}} {\nonterminal{OptArgs}} \\ & {\delimit} &{\terminal{$\backslash$distinct}} {\nonterminal{OptArgs}} \\ & {\delimit} &{\terminal{$\backslash$size}} {\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} \\ & {\delimit} &{\nonterminal{Ident}} {\nonterminal{OptArgs}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{.}} {\nonterminal{Ident}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{.}} {\terminal{$\backslash$as}} {\terminal{[}} {\nonterminal{Type}} {\terminal{]}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{.}} {\terminal{$\backslash$size}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{.}} {\terminal{$\backslash$abs}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{[}} {\nonterminal{Expression}} {\terminal{]}} \\ & {\delimit} &{\nonterminal{Expression11}} {\terminal{[}} {\nonterminal{IntLit}} {\terminal{:}} {\nonterminal{IntLit}} {\terminal{]}} \\ & {\delimit} &{\terminal{true}} \\ & {\delimit} &{\terminal{false}} \\ & {\delimit} &{\nonterminal{IntLit}} \\ & {\delimit} &{\terminal{(}} {\nonterminal{Expression}} {\terminal{)}} \\ \end{tabular}\\ 
\begin{tabular}{lll} {\nonterminal{Quant}} & {\arrow} &{\terminal{$\backslash$forall}} \\ & {\delimit} &{\terminal{$\backslash$exists}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{RelSym}} & {\arrow} &{\terminal{{$=$}}} \\ & {\delimit} &{\terminal{!{$=$}}} \\ & {\delimit} &{\terminal{{$<$}{$=$}}} \\ & {\delimit} &{\terminal{{$>$}{$=$}}} \\ & {\delimit} &{\terminal{{$<$}}} \\ & {\delimit} &{\terminal{{$>$}}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{OptArgs}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\terminal{(}} {\nonterminal{ListArgC}} {\terminal{)}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ArgC}} & {\arrow} &{\nonterminal{Expression}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListArgC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{ArgC}} \\ & {\delimit} &{\nonterminal{ArgC}} {\terminal{,}} {\nonterminal{ListArgC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{IntLit}} & {\arrow} &{\nonterminal{DecIntLit}} \\ & {\delimit} &{\nonterminal{HexIntLit}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclConstC}} & {\arrow} &{\nonterminal{Type}} {\nonterminal{ListIdent}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListIdent}} & {\arrow} &{\nonterminal{Ident}} \\ & {\delimit} &{\nonterminal{Ident}} {\terminal{,}} {\nonterminal{ListIdent}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclSingleVarC}} & {\arrow} &{\nonterminal{Type}} {\nonterminal{Ident}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclVarC}} & {\arrow} &{\nonterminal{Type}} {\nonterminal{ListIdent}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclBinder}} & {\arrow} &{\nonterminal{DeclVarC}} {\terminal{;}} \\ & {\delimit} &{\terminal{(}} {\nonterminal{ListDeclVarC}} {\terminal{)}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclVarC}} & {\arrow} &{\nonterminal{DeclVarC}} \\ & {\delimit} &{\nonterminal{DeclVarC}} {\terminal{;}} {\nonterminal{ListDeclVarC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclFunC}} & {\arrow} &{\nonterminal{ListFunOption}} {\nonterminal{DeclConstC}} \\ & {\delimit} &{\nonterminal{ListFunOption}} {\nonterminal{Type}} {\nonterminal{Ident}} {\nonterminal{FormalArgsC}} {\nonterminal{OptBody}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclFunC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{DeclFunC}} {\terminal{;}} {\nonterminal{ListDeclFunC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{FunOption}} & {\arrow} &{\terminal{$\backslash$partial}} \\ & {\delimit} &{\terminal{$\backslash$relational}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListFunOption}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{FunOption}} {\nonterminal{ListFunOption}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclSortC}} & {\arrow} &{\nonterminal{Ident}} {\terminal{\{}} {\nonterminal{ListDeclCtorC}} {\terminal{\}}} \\ & {\delimit} &{\nonterminal{Ident}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclSortC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{DeclSortC}} {\terminal{;}} {\nonterminal{ListDeclSortC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclCtorC}} & {\arrow} &{\nonterminal{Ident}} {\nonterminal{OptFormalArgs}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclCtorC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{DeclCtorC}} {\terminal{;}} {\nonterminal{ListDeclCtorC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclConstantC}} & {\arrow} &{\nonterminal{DeclConstC}} \\ 
\end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclConstantC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{DeclConstantC}} {\terminal{;}} {\nonterminal{ListDeclConstantC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{DeclPredC}} & {\arrow} &{\nonterminal{ListPredOption}} {\nonterminal{Ident}} {\nonterminal{OptFormalArgs}} {\nonterminal{OptBody}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListDeclPredC}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{DeclPredC}} {\terminal{;}} {\nonterminal{ListDeclPredC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{OptFormalArgs}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{FormalArgsC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{FormalArgsC}} & {\arrow} &{\terminal{(}} {\nonterminal{ListArgTypeC}} {\terminal{)}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ArgTypeC}} & {\arrow} &{\nonterminal{Type}} \\ & {\delimit} &{\nonterminal{Type}} {\nonterminal{Ident}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListArgTypeC}} & {\arrow} &{\nonterminal{ArgTypeC}} \\ & {\delimit} &{\nonterminal{ArgTypeC}} {\terminal{,}} {\nonterminal{ListArgTypeC}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{PredOption}} & {\arrow} &{\terminal{$\backslash$negMatch}} \\ & {\delimit} &{\terminal{$\backslash$noMatch}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{ListPredOption}} & {\arrow} &{\emptyP} \\ & {\delimit} &{\nonterminal{PredOption}} {\nonterminal{ListPredOption}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{OptBody}} & {\arrow} &{\terminal{\{}} {\nonterminal{Expression}} {\terminal{\}}} \\ & {\delimit} &{\emptyP} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{Type}} & {\arrow} &{\terminal{int}} \\ & {\delimit} &{\terminal{nat}} \\ & {\delimit} &{\terminal{int}} {\terminal{[}} {\nonterminal{IntervalLower}} {\terminal{,}} {\nonterminal{IntervalUpper}} {\terminal{]}} \\ & {\delimit} &{\terminal{bool}} \\ & {\delimit} &{\terminal{mod}} {\terminal{[}} {\nonterminal{IntervalLower}} {\terminal{,}} {\nonterminal{IntervalUpper}} {\terminal{]}} \\ & {\delimit} &{\terminal{bv}} {\terminal{[}} {\nonterminal{IntLit}} {\terminal{]}} \\ & {\delimit} &{\terminal{signed}} {\terminal{bv}} {\terminal{[}} {\nonterminal{IntLit}} {\terminal{]}} \\ & {\delimit} &{\nonterminal{Ident}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{IntervalLower}} & {\arrow} &{\terminal{{$-$}}} {\terminal{inf}} \\ & {\delimit} &{\nonterminal{IntLit}} \\ & {\delimit} &{\terminal{{$-$}}} {\nonterminal{IntLit}} \\ \end{tabular}\\ \begin{tabular}{lll} {\nonterminal{IntervalUpper}} & {\arrow} &{\terminal{inf}} \\ & {\delimit} &{\nonterminal{IntLit}} \\ & {\delimit} &{\terminal{{$-$}}} {\nonterminal{IntLit}} \\ \end{tabular}\\ \end{document}
\section{Monetary policy}
\section{Evaluation}
\label{sec:evaluation}

In this section, we first use a case study (\S\ref{subsec:eva-casestudy}), based on the program in Figure \ref{fig:data-race}, to show the complete workflow of \TheName. Next, we focus on evaluating \TheName{} against the four practical requirements discussed in \S\ref{sec:introduction}. We deploy \TheName on an Armv8 Juno r2 board equipped with 6 cores (2 Cortex-A72 cores and 4 Cortex-A53 cores) and 8 GB of RAM, based on the Linaro deliverables Linux 5.4.50. We equip the Juno board with an SSD and allocate 256 MB circular buffers for ETM tracing. We use this as the default setting for our experiments but allow developers to adjust it as required.

\subsection{Case Study}
\label{subsec:eva-casestudy}

% Please add the following required packages to your document preamble:
% \usepackage{booktabs}
\begin{table*}[]
\caption{Syscall Capturing Result for Figure~\ref{fig:data-race}}
\label{study_case_flow}
\centering
\scalebox{0.62}{
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Line} & \textbf{Thread 1} & \textbf{Thread 2} & \textbf{Data Values} \\ \midrule
10 & & bl 400910 \textless{}strlen@plt\textgreater{} & total=0, len=0, buf=? \\
4 & bl 400960 \textless{}read@plt\textgreater{} & & read, fd=3, size=64, res=25, data="1234567890123456789012345" \\
12 & & bl 400970 \textless{}strcpy@plt\textgreater{} & total=0, len=875770417, buf="123456789012345" \\
4 & bl 400960 \textless{}read@plt\textgreater{} & & read, fd=4, size=64, res=6, data="123456" \\
10 & & bl 400910 \textless{}strlen@plt\textgreater{} & \\
12 & & bl 400970 \textless{}strcpy@plt\textgreater{} & total=875770423, len=6, buf="123456" \\
14 & & bl 400940 \textless{}\_\_assert\_fail@plt\textgreater{} & \\ \bottomrule
\end{tabular}
}
\end{table*}

To evaluate the functionality of \TheName, we test it with the program shown in Figure \ref{fig:data-race}. As Table \ref{study_case_flow} illustrates, Thread 2 checks the length of \texttt{big\_buf} (Line 10) before the first \texttt{read} (Line 4) in Thread 1. Since we capture the data of \texttt{read}, we know that \texttt{big\_buf} contains a string of length 25. Therefore, the following \texttt{strcpy} (Line 12) incurs a buffer overflow, and the variable \texttt{len} (Line 7) is unexpectedly changed to \texttt{875770417}, which finally leads to the failure of the \texttt{assert} (Line 14). Notably, the program does not crash immediately after the buffer overflow; instead, it executes normally for the second cycle. In addition, this second, normal execution overwrites the values (\texttt{buf} and \texttt{len}) involved in the buffer overflow, which may confuse developers without the assistance of \TheName.

\subsection{Completeness}
\label{subsec:eva-flowaccuracy}

\begin{table}
\caption{Buffer usage when files are dumped}
\centering
\begin{tabular}{@{}lllll@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Program}}} & \multirow{2}{*}{\textbf{\# file dumps}} & \multicolumn{3}{l}{\textbf{buffer usage (bytes)}} \\
\multicolumn{1}{c}{} & & Min. & Max. & Avg. \\ \midrule
nginx (5,000,000 requests) & 96 & 25 & 360 & 183 \\
large file read (40 GB) & 79 & 39 & 7,550 & 4,042 \\ \bottomrule
\end{tabular}
\label{table:Completeness}
\end{table}

% We evaluates the completeness of \TheName by following items:
Because of the secondary-buffer design, a record could become incomplete if it is not transferred to the file in time.
We evaluate \TheName with two extreme scenarios: nginx with high concurrency and file reading with massive I/O operations. As Table \ref{table:Completeness} shows, even in the more demanding scenario (large file read), where the average buffer usage reaches 4,042 bytes, far less than 16 MB, before the secondary buffer is dumped to file, \TheName ensures the completeness of the record.

\subsection{Effectiveness}
\label{subsec:eva-Effectiveness}

\begin{table}
\caption{Syscalls issued from bugs recorded by \TheName{}. N/A=bugid is not available, E=reconstructed bug, R=real-world bug, OV=order violation, SAV=single-variable atomicity violation, MAV=multi-variable atomicity violation, DL=deadlock, SEQ=sequential bug (non-concurrency bug), LOC=lines of code}
\scalebox{1.0}{
\begin{tabular}[]{@{}llll@{}}
\toprule
\textbf{Program-BugID-GroupType} & \textbf{bug type} & \textbf{LOC} & \textbf{Symptom} \\ \midrule
shared\_counter-N/A-E & SAV & 45 & assertion failure \\
log\_proc\_sweep-N/A-E & SAV & 93 & segmentation fault \\
bank\_account-N/A-E & SAV & 95 & race condition fault \\
jdk1.4\_StringBuffer-N/A-E & SAV & 180 & assertion failure \\
circular\_list-N/A-E & MAV & 155 & race condition fault \\
mysql-169-E & MAV & 120 & assertion failure \\
mutex\_lock-N/A-E & DL & 51 & deadlock \\
SQLite-1672-R & DL & 80K & deadlock \\
memcached-127-R & SAV & 18K & race condition fault \\
Python-35185-R & SAV & 1256K & race condition fault \\
Python-31530-R & MAV & 1256K & segmentation fault \\
aget-N/A-R & MAV & 2.5K & assertion failure \\
pbzip2-N/A-R & OV & 2K & use-after-free \\
curl-965-R & SEQ & 160K & unhandled input pattern \\
cppcheck-2782-R & SEQ & 120K & unhandled input pattern \\
cppcheck-3238-R & SEQ & 138K & NULL pointer dereference \\ \bottomrule
\end{tabular}}
\label{table:bug benchmarks}
\end{table}

We show how effective \TheName is at diagnosing the root cause of bugs. As listed in Table \ref{table:bug benchmarks}, we use 16 commonly used buggy C/C++ programs \cite{cui2018rept,kasikci_lazy_2017,yu2009case,yu2012maple,kasikci2015failure, liang2020ript} to evaluate \TheName.
% We focus on picking open-source software bugs for
% reproducibility in the ARM platform
We divide these bugs into two groups, i.e., Group E and Group R. Group E contains 7 bugs reconstructed from applications \cite{yu2009case,yu2012maple}, and Group R includes 9 bugs in real-world applications \cite{cui2018rept,kasikci_lazy_2017, kasikci2015failure, liang2020ript}. There are 13 concurrency bugs, of which 6 are single-variable atomicity violations (SAV), 4 are multi-variable atomicity violations (MAV), 2 are deadlocks (DL), and 1 is an order violation (OV). There are also 3 non-concurrency bugs. These bugs are collected from a diverse set of real-world systems (e.g., Python, Memcached, SQLite, and Aget) and cover a wide range of symptoms (e.g., NULL pointer dereference, use-after-free, and race).
% The main limitation that restricts us to evaluate \TheName on more
% bugs is that we need to base on open-source software to reproduce and estimate
% the accuracy of bugs analysis, but most of the software reported bugs are
% difficult to install on ARM.
% In this section, we evaluate the effectiveness of \TheName by comparing our root
% cause results from Root Cause Detector with the bug fix report of the
% benchmarks we evaluated.
% Note that, since our Root Cause Detector focus on
% working for automatic diagnosis of concurrency bugs, we obtain automatic
% analysis results about the root cause on 13 concurrent bugs in total.
We execute these programs separately in our system until the bug occurs, use \TheName to record the execution, and then analyze the root cause of the bugs.
% We receive the identified root
% cause from \TheName and confirm that it is effective for root cause analysis
% of all the 16 bugs.
Specifically, we manually analyze each bug using the record and compare it against the related patches of these bugs. The result indicates that the failure reports generated by \TheName are directly related to the root cause. Out of those 16 bugs, we select one representative example to further demonstrate the effectiveness of \TheName.

% \subsubsection{Pragmatistic} \label{subsec:eva-Effectiveness-reversedebugging}
% In this section, we show that \TheName can help developers debug non-concurrency
% bugs through multiple debug techniques based on reconstructed execution history.

\begin{table}[]
\caption{\TheName output of Cppcheck-2782}
\label{table: The root cause of cppcheck}
\centering
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{PID} & \textbf{\Syscall{}} & \textbf{Parameters} & \textbf{Additional information} \\ \midrule
22571 & getcwd & - & path=/home/root/cppcheck \\
22571 & newfstatat & res=0 & - \\
22571 & fstat & res=0 & fd=1 \\
22571 & write & res=29 & - \\
22571 & openat & res=3 & dir=-100, path=./fail.cpp \\
22571 & read & fd=3 & data="int main() \{ return0; \} \\
 & & & \begin{tabular}[c]{@{}l@{}}\#asm \\ !while (val) mov bx \\ \#endasm"\end{tabular} \\
22571 & close & fd=3 & res=0 \\ \bottomrule
\end{tabular}
\end{table}

\textbf{Cppcheck-2782.} In this case, we use \TheName to record a non-concurrency bug. We run the application with common C++ source code as its input until it crashes. Table \ref{table: The root cause of cppcheck} shows the \syscall{}s recorded by \TheName. The C++ source code analyzed by Cppcheck that triggered the bug is loaded with the \syscall{} \texttt{read}. We then find that this source code is special in that it contains embedded assembly code. According to the public bug report, Cppcheck-2782 is a bug caused by an unhandled input pattern, namely the inability to handle embedded assembly code. \TheName can help developers correctly locate the bug and easily analyze the root cause.

\subsection{Efficiency}
\label{subsec:eva-Efficiency}

We show how efficiently \TheName can be used for bug analysis by first running Unixbench 5.1.2 \cite{unixbench} to measure the performance impact on kernel operations such as \syscall{}s. We then run ApacheBench \cite{ApacheBench} with Nginx 1.20 \cite{nginx_1.20.0}, a popular server program, to simulate a high-load scenario. We finally evaluate the runtime performance overhead of \TheName by running four real-world programs.

\subsubsection{Unixbench}
\label{subsec:eva-Performance-Unixbench}

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/unixbenchoverheadbar.pdf}
\caption{Performance overhead of UnixBench}
\label{fig:Performance overhead of running UnixBench}
\end{figure}

We run UnixBench on Linux and show the performance results in Figure \ref{fig:Performance overhead of running UnixBench}. With tracing enabled, the performance overhead is 3.88\% on average, and the highest performance overhead, 9.3\%, is observed for System Call. Specifically, three types of benchmarks (File Copy, Pipe Throughput, and System Call) have higher overhead. We believe this is because these benchmarks invoke \syscall{}s and I/O operations more frequently, which incurs larger overhead than the others.
\subsubsection{Nginx}
\label{subsec:eva-Performance-Nginx}

We use \texttt{nginx} \cite{nginx_1.20.0} as a web server program to test the performance of \TheName in a high-concurrency environment. We use \texttt{ab} (the Apache HTTP server benchmarking tool) \cite{ApacheBench} to simulate user access behavior. With \texttt{nginx} in its default configuration, we perform performance testing with a concurrency level of 5,000 and 500,000 requests. The average time cost for the baseline (i.e., without \TheName) is 88.94s, while with \TheName it is 90.09s, a 1.30\% overhead. This shows that \TheName performs well even in high-pressure environments.

\subsubsection{Performance Overhead on Real-world Programs}
\label{subsec:eva-Performance-Normal}

We use four real-world programs, \texttt{Pbzip2}, \texttt{Aget}, \texttt{SQLite}, and \texttt{Memcached}, to test the performance overhead of \TheName on normal executions. We run different fine-grained tests on each of the programs to simulate three different load scenarios. We run \texttt{Pbzip2} to compress $10$MB, $500$MB, and $2$GB files, respectively. We use \texttt{Aget} to download $50$MB, $500$MB, and $2$GB files in the same network to avoid network speed interference. \texttt{SQLite} is evaluated with \texttt{sqlite-bench} \cite{sqlitebench}, writing $100,000$, $500,000$, and $2,000,000$ values in sequential key order in sync mode, respectively. The benchmark tool \texttt{Twemperf} \cite{twemperf} was used to test \texttt{Memcached}, creating $20,000$, $300,000$, and $1,000,000$ connections to a Memcached server running on localhost.

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/normaloverheadbar.pdf}
\caption{Performance overhead of 4 real-world programs}
\label{fig:Performance overhead of Normal Execution}
\end{figure}

We show the performance overhead results in Figure \ref{fig:Performance overhead of Normal Execution}.
% For ETM tracing only, control flow tracing incurs a runtime
% performance overhead of 0.0071\% on average with no test exceeding 0.1\%
% overhead across all programs.
Overall, the average performance overhead of all tests is 2.3\%, and the highest overhead is 5.3\% for \texttt{SQLite} when writing $2,000,000$ values. The performance overhead increases slightly as the test stress increases in all four programs. We believe this increase is caused by \TheName, because these tests issue many I/O operations and a large number of \syscall{}s.
%cause more overhead
%than others.

%compare to REPT LAZY
We compare the runtime performance overhead on real-world programs with the state-of-the-art systems \cite{cui2018rept, kasikci_lazy_2017}. The results show that our overhead is slightly higher, mainly caused by \TheName. Nevertheless, \TheName still incurs a low runtime performance overhead (2.3\% on average). We conclude that this overhead is acceptable and that \TheName is still suitable for practical deployment.

\subsubsection{Trade-offs Between Performance and Accuracy}
\label{space-consumption}

% Since we collect a lot of data at runtime, we need to consider its consumption
% on space illustrate our trade-offs for \Recordingstage.
We design two versions of syscall capturing to record \textit{RC-Type} \syscall{}s: saving the whole content or truncating it to the first 256 bytes. We test the time and space consumption of both versions with a program that continuously reads a 2 GB text file to simulate a highly concurrent environment on the server.
The experimental results shown in Table \ref{space_consumption} demonstrate that saving the entire content imposes a significant overhead (12.5\%). In addition, an estimate for 24 hours of continuous recording yields nearly 1 TB of files, which indicates that saving the entire content is also impractical for a server with larger throughput. Therefore, we choose to truncate the records of \textit{RC-Type} \syscall{}s to the first 256 bytes.

\begin{table}[]
% \small
\caption{Space Consumption for Saving All Content or Truncation}
\label{space_consumption}
\centering
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Type} & \textbf{Real Time} & \textbf{File Size} & \textbf{\begin{tabular}[c]{@{}l@{}}Estimated \\ 24-hour File Size\end{tabular}} \\ \midrule
\textbf{Baseline} & 2 min 50.3 s & - & - \\
\textbf{All Content} & 3 min 11.552 s (+12.5\%) & 2.0 GB & 902.1 GB \\
\textbf{Truncation} & 2 min 57.675 s (+4.33\%) & 120 MB & 52 GB \\ \bottomrule
\end{tabular}
\end{table}

\subsection{Universality}
\label{subsec:eva-Generality}

In this section, we show that \TheName is a universal tool for almost all Linux devices. \TheName is essentially a kernel module that does not need any external dependency. All functions of \TheName are implemented in the Linux kernel, which indicates that \TheName is architecture-independent. We have also verified \TheName on different platforms beyond the ARM Juno board. The specifications of these platforms are as follows.

\begin{itemize}
\item \textbf{ARM}: Raspberry Pi 3B, Debian GNU/Linux 10 with Linux 5.10.11-v8+.
\item \textbf{x86}: Intel Core i5-10500, Ubuntu 18.04.5 LTS with 4.15.0-142-generic.
\item \textbf{RISC-V}: Qemu 5.2.0 RISC-V virt, Linux 5.11.0.
\end{itemize}

Although \TheName currently does not support all features (e.g., recording return addresses) on devices of other architectures, the core part of \TheName, i.e., collecting information and saving it to file, works fine. Besides, there is recent work that builds extensions on \TheName for RISC-V \cite{gdjs2_gdjs2mysisdig_2021}, which also indicates the universality of \TheName.
\section{Changes in Microbial Ecology after Fecal Microbiota Transplantation for Recurrent \textit{C. difficile} Infection Affected by Underlying Inflammatory Bowel Disease}\label{section_fmt}

\subsubsection{Background}
Gut microbiota play a key role in maintaining homeostasis in the human gut. Alterations in the gut microbial ecosystem predispose to \gls{cdi} and to gut inflammatory disorders such as \gls{ibd}. \Gls{fmt} from a healthy donor can restore gut microbial diversity and pathogen colonization resistance; consequently, it is now being investigated for its ability to improve inflammatory gut conditions such as \gls{ibd}. In this study, we investigated changes in gut microbiota following \gls{fmt} in 38 patients with \gls{cdi} with or without underlying \gls{ibd}.

\subsubsection{Results}
There was a significant change in gut microbial composition towards the donor microbiota and an overall increase in microbial diversity after \gls{fmt}, consistent with previous studies. \gls{fmt} was successful in treating \gls{cdi} using a diverse set of donors and with varying degrees of donor stool engraftment, suggesting that donor type and degree of engraftment are not drivers of a successful \gls{fmt} treatment of \gls{cdi}. However, patients with underlying \gls{ibd} experienced an increased number of \gls{cdi} relapses (during a 24-month follow-up) and a decreased growth of new taxa, as compared to the subjects without \gls{ibd} (note that the test used has limitations with respect to sample size and statistical assumptions; see Methods). Moreover, the need for \gls{ibd} therapy did not change following \gls{fmt}. These results underscore the importance of the existing gut microbial landscape as a decisive factor in successfully treating \gls{cdi}, and potentially for improvement of the underlying pathophysiology in \gls{ibd}.

\subsubsection{Conclusions}
\Gls{fmt} leads to a significant change in microbial diversity in patients with recurrent \gls{cdi} and complete resolution of symptoms. Stool donor type (related or unrelated) and degree of engraftment are not key for successful treatment of \gls{cdi} by \gls{fmt}. However, \gls{cdi} patients with \gls{ibd} retain a higher proportion of their original community after \gls{fmt}, show no improvement of their \gls{ibd} symptoms, and have increased episodes of \gls{cdi} on long-term follow-up.

\subsection{Background}

Gut microbiota play a key role in maintaining homeostatic host functions, and deleterious shifts in the gut microbial ecosystem, often referred to as dysbiosis, are associated with \gls{cdi}, \gls{ibd} and other systemic inflammatory conditions \cite{RN1477}. A diverse gut microbial community confers colonization resistance against pathogens such as \textit{C. difficile}, and disruption of a diverse community structure by antibiotics, comorbidities, altered gastrointestinal transit or other risk factors can lead to pathogen colonization and infection \cite{RN1480}. The increasing incidence of community- and hospital-acquired \gls{cdi}, high rates of recurrent \gls{cdi} (estimated 20-30\% after a first and 50-60\% after a third infection), high mortality ($\sim$29,000 deaths annually in the United States), and an urgent need for newer non-antibiotic therapies have led to the emergence of microbiome-based therapies \cite{RN1478}. \Gls{fmt} in \gls{cdi} patients restores phylogenetic diversity to levels more typical of a healthy person, with response rates $>$85\% by enema, oral capsule or endoscopic delivery modes \cite{RN1484, RN1479, RN1481}.
A recent study suggests a significantly lower response of \gls{cdi} to \gls{fmt} in patients with underlying \gls{ibd} \cite{RN1497}. We have also previously described a higher rate of recurrence of \gls{cdi} following \gls{fmt} in patients with \gls{cdi} and underlying \gls{ibd} \cite{RN1498}. It remains unclear if changes in gut microbial ecology play a role in the long-term success of \gls{fmt} in these patients. \Gls{fmt} has not shown consistent success in treating other diseases associated with microbial dysbiosis such as \gls{ibd}. Three clinical trials to treat \gls{uc} with \gls{fmt} have shown conflicting results, and one highlighted the potential role of specific gut microbial members in donor stool in determining success after \gls{fmt} in \glspl{uc} \cite{RN3982, RN1019, RN1483}. The underlying host or donor factors that may be important for success of \gls{fmt} in treatment of \gls{ibd} remain unclear. In this study, we assessed the effect of donor type (standard donor versus related donor) and of changes in gut microbial ecology on the response to \gls{fmt} in \gls{rcdi} with and without underlying \gls{ibd}, as well as the clinical response to \gls{fmt}.

\subsection{Methods}

Patient recruitment, sample collection and clinical analysis can be found in Appendix~\ref{appendix_fmt}. Alpha diversity values were calculated using Faith's phylogenetic diversity \cite{RN4007}. To assess differential abundance between the groups, we used ANCOM \cite{RN1513}, as implemented in scikit-bio 0.5.1\footnote{\url{http://scikit-bio.org/docs/0.5.1/}}. This is tested by looking at the individual \glspl{otu} across the patient types (with and without underlying \gls{ibd}); \glspl{otu} of the same genus are grouped for display purposes. We note that ANCOM makes the statistical assumption that fewer than 25\% of taxa change, which is not met in all these comparisons (pre-\gls{fmt} and post-\gls{fmt} communities are expected to be very different \cite{RN1471}). The donor plane is created using all the donor samples, and serves as a proxy for where their microbiomes are in the ordination space and for how this proximity changes over time. This procedure was originally presented by Halfvarson et al. \cite{RN1515}. Beta diversity matrices were created using unweighted UniFrac \cite{RN83}, and plotted using Emperor \cite{RN79} (all other plotting was done using the Seaborn visualization package). Processed tables and sample information can be found in Qiita\footnote{\url{https://qiita.ucsd.edu}} under study id 10057; alternatively, the data can be found under accession number ERP021216 at the European Bioinformatics Institute.

\subsubsection{SourceTracker Analysis}

To assess the proportion of pre-transplant communities that were retained in the patients' microbiota, we used SourceTracker \cite{RN3995}. The pre-transplant samples and the donor samples were described as \textit{sources}; all the other samples were used as \textit{sinks}. For all samples at day seven and twenty-eight, SourceTracker estimated the proportion of communities attributed to one of three environments: (1) the donor, (2) the patient pre-transplant, and (3) an unknown community. Using these proportions, we grouped the samples according to their \gls{ibd} status and compared their distributions using the Mann-Whitney test (as implemented in SciPy 0.15.1 \cite{RN165}).
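As a concrete illustration of this comparison, the short Python sketch below applies \texttt{scipy.stats.mannwhitneyu} to SourceTracker-style source proportions grouped by \gls{ibd} status. The table and its column names are fabricated for illustration only (they are not the study data), and the \texttt{alternative} keyword follows the current SciPy interface rather than the specific version cited above.

\begin{verbatim}
import pandas as pd
from scipy.stats import mannwhitneyu

# Illustrative SourceTracker-style output: one row per post-FMT sample,
# with the proportion attributed to each source environment and the
# subject's IBD status (all values are made up).
props = pd.DataFrame({
    "ibd":            [True, True, True, False, False, False],
    "pre_transplant": [0.45, 0.38, 0.52, 0.12, 0.09, 0.20],
    "donor":          [0.40, 0.50, 0.30, 0.70, 0.80, 0.65],
    "unknown":        [0.15, 0.12, 0.18, 0.18, 0.11, 0.15],
})

# For each source environment, compare the distributions of proportions
# between subjects with and without IBD.
for source in ["pre_transplant", "donor", "unknown"]:
    with_ibd = props.loc[props["ibd"], source]
    without_ibd = props.loc[~props["ibd"], source]
    stat, p = mannwhitneyu(with_ibd, without_ibd, alternative="two-sided")
    print(f"{source}: U = {stat:.1f}, p = {p:.3f}")
\end{verbatim}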
\subsection{Results}

\subsubsection{FMT leads to resolution of CDI}

In order to assess gut microbiota changes following \gls{fmt}, 38 patients with recurrent \gls{cdi} were enrolled in the study and a fecal sample was obtained prior to transplant, as well as 7 and 28 days post-transplant. Sample handling, donor and recipient sample collection, sample processing and data analyses are detailed in the supplementary methods. \gls{fmt} was accomplished by colonoscopy using fresh donor stools from related (n=12) or unrelated (n=26) donors. None of the \gls{ibd} patients received stool from a related donor. The demographic, disease and treatment characteristics are outlined in Table~\ref{fmt-tab1}. Detailed characteristics of \gls{ibd} patients are shown in Supplementary Table 1. Twelve patients (31.6\%) had \gls{ibd} (6 with \glspl{uc} and 6 with Crohn's disease), with median age 27.6 years (range, 23.3-74.9), and median \gls{ibd} duration 5 years (range, 2-33). 58.3\% of patients were on 5-ASA (amino salicylic acid) agents, 50\% on biologics, 33.3\% on immunomodulators and 58.3\% on steroids. Among patients with \gls{ibd}, at the time of colonoscopy, 2 had a normal colonoscopy, 1 had pseudopolyps, 5 had severe pancolitis, 1 had moderate colitis, 1 had mild colitis, 1 had mild procto-sigmoiditis and 1 had moderate ileo-colitis (Supplementary Table 1).

\begin{sidewaystable}[hbtp]
\centering
\renewcommand{\arraystretch}{0.65}% Tighter
\caption{Clinical Characteristics}
\begin{tabular}{P{9cm}P{2cm}P{2cm}P{2cm}P{2cm}}
\toprule
 & & Overall (n=38) & IBD (n=12) & No IBD (n=26) \\ \midrule
\multirow{2}{9cm}{Age} & median & 53.1 & 27.6 & 58.3\\
 & (range) & (21.9-82.7) & (23.3-74.9) & (21.9-82.7)\\ \midrule
\multicolumn{2}{l}{Sex distribution (\% female)} & 81.6& 66.7& 88.5\\ \midrule
\multirow{2}{9cm}{BMI, kg/m2} & median & 24.8 & 25.6 & 23.8\\
 &(range) & (14.9-39.9) & (18.5-30.3) & (14.9-39.9)\\ \midrule
\multirow{2}{9cm}{Number prior CDI episodes} & median & 5 & 4.5 & 5\\
 & (range) & (3-13) & (3-7) & (3-13)\\ \midrule
\multirow{2}{9cm}{Number prior metronidazole courses} & median & 1 & 1 & 1\\
 &(range) & (0-8) & (0-2) & (0-8)\\ \midrule
\multirow{2}{9cm}{Number prior vancomycin 10-14 day courses} & median & 2 & 2 & 2\\
 & (range) & (0-4) & (0-4) & (0-3)\\ \midrule
\multirow{2}{9cm}{Number prior vancomycin tapers} & median & 1&1&1\\
 & (range) & (0-5) &(0-1)&(0-5)\\ \midrule
\multirow{2}{9cm}{Number prior fidaxomicin courses} & median & 0 & 0 & 0\\
 &(range) & (0-4) & (0-2) & (0-4)\\ \midrule
\multicolumn{2}{l}{Recurrent CDI after FMT (\%)}& 13.2& 25& 8.4\\ \bottomrule
\end{tabular}
\label{fmt-tab1}
\end{sidewaystable}
\renewcommand{\arraystretch}{1}% Restore to default

All patients responded to \gls{fmt} with regard to clinical or microbiologic remission of \gls{cdi} (negative \textit{C. difficile} testing): 92.1\% (n=35) of patients returned to their baseline bowel pattern (as before \gls{cdi}) with resolution of \gls{cdi}, 5.3\% (n=2, both with \gls{ibd}) had worsening diarrhea (\textit{C. difficile} negative), and 2.6\% (n=1) had new-onset constipation after \gls{fmt}. Upon long-term follow-up of 24 months, 13.2\% (n=5/38; of these, n=1 within 56 days, n=1 from 56 days to 1 year and n=3 beyond 1 year, Supplementary Table 2) had another episode of \gls{cdi} and 10.5\% (n=4/38) required a second \gls{fmt} due to multiple episodes of \gls{rcdi}. One patient with \gls{rcdi} was treated with vancomycin.
The risk of another episode of \gls{cdi} after \gls{fmt} in \gls{ibd} patients was 25\% (n=3/12) compared to 7.7\% (n=2/26) in non-\gls{ibd} patients (p=0.16, chi-square test). Seven of the 12 patients with \gls{ibd} were on systemic immunosuppression. None of the patients with \gls{ibd} had improvement in their \gls{ibd} course after \gls{fmt}, and none were able to withhold, de-escalate or stop \gls{ibd} treatment. This is not an unexpected finding, as a one-time \gls{fmt} would not be expected to alter the disease course in \gls{ibd} patients.

\subsubsection{FMT decreases microbial dysbiosis}

\gls{fmt} led to a significant increase in alpha diversity based on Faith's phylogenetic diversity, Shannon's diversity index and observed species, both at day 7 and day 28 (Mann-Whitney p$<$0.05; Supplementary Figure~1, comparing pre- and post-\gls{fmt} in patients with \gls{cdi} with or without underlying \gls{ibd}). Also, patients' stool closely resembled donor stool, as evidenced by a rapid and sustained change in unweighted and weighted UniFrac-based beta diversity following \gls{fmt} at day 7 and 28 post-transplant (Figure~\ref{fmt-fig1}A; PERMANOVA p$<$0.05) \cite{RN83}.

\begin{sidewaysfigure}[htbp]
\centering
\includegraphics[width=0.8\textheight]{fmt-figures/figure-1}
\caption[Dysbiosis index and beta-diversity summaries pre- and post-fecal microbiota transplantation]{(A) Principal Coordinates Analysis of the unweighted UniFrac distances, showing the change in phylogenetic diversity between patients with CDI, 7 and 28 days after fecal microbiota transplant. (B) Change in dysbiosis index following fecal microbiota transplant in patients with CDI with or without IBD. (C) Spearman correlation to donor stool 7 and 28 days following fecal microbiota transplantation.}
\label{fmt-fig1}
\end{sidewaysfigure}

To characterize the changes in community composition, we use the \gls{md} index as a reference to describe the dominance of individual taxa (Supplementary Table 3). The \glspl{md} index is composed of 18 taxonomic groups, as defined by Gevers et al., with a higher value correlated with greater disease severity in \gls{ibd}, and lower values associated with healthier states \cite{RN154}. As \gls{cdi} is also associated with dysbiosis and inflammation, we wanted to determine the effect of \gls{fmt} on dysbiosis. The \glspl{md} index values were significantly higher in patients with \gls{cdi} compared to donors (Mann-Whitney's U, p $<$ 0.05, Figure~\ref{fmt-fig1}B). However, on day 7 and 28 after the transplantation, the \glspl{md} index values were similar to those of donors (Mann-Whitney's U p $>$ 0.05, Figure~\ref{fmt-fig1}B), and this change was independent of whether recipients had \gls{ibd} or not.

In order to determine if the changes seen in our subjects following \gls{fmt} were similar to other published studies, we compared our samples with recently published data from Weingarden et al. 2015 (Supplementary Figure 2A), wherein 4 patients with \gls{rcdi} (but not \gls{ibd}) received \gls{fmt} from a single donor \cite{RN1471}. Similar to our findings, there was a rapid and sustained change in beta diversity (Supplementary Figure 2A) following \gls{fmt}, and the regression to the donor plane (change in microbial composition to resemble healthy donors) following \gls{fmt} was remarkably similar in the two studies (Supplementary Figure 2B).
In this context, we refer to the donor plane as a proxy for the region in the \gls{pcoa} (a dimensionality reduction method to visualize beta-diversity distance matrices) space where the donors are located; we do this by fitting a three-dimensional plane (using the least squares method) to the samples from the donors. As the communities change post-\gls{fmt}, the distance to this plane is reduced.

\subsubsection{Clinical response of CDI to FMT is independent of engraftment or donor type but underlying IBD influences changes in gut microbial ecology after FMT}
In order to determine if the response of \gls{cdi} to \gls{fmt} was dependent on donor stool engraftment, we determined Spearman's correlation coefficient between fecal microbial communities prior to and 7 and 28 days post-transplant. The fecal microbial communities from patients with \gls{cdi} were distinct from donor communities prior to transplant (Spearman's r$<$0.2 for all subjects, Figure~\ref{fmt-fig1}C). Following transplant, the communities showed an increase in correlation to donor stool at day 7 (Spearman's r$>$0.4 for 85\% of the subjects, Figure~\ref{fmt-fig1}C) and a spread for all subjects at day 28 ranging from below 0.2 up to 0.6 (Figure~\ref{fmt-fig1}C). Using SourceTracker \cite{RN3995}, we found that after \gls{fmt}, subjects with \gls{ibd} retained a higher proportion of their original communities (Mann-Whitney p $<$ 0.05 at day 7, and p = 0.06 at day 28; Figure~\ref{fmt-fig2}A and \ref{fmt-fig2}B) and a significantly lower proportion of new communities (Mann-Whitney p $<$ 0.05 at day 7 and 28), as compared to the patients without \gls{ibd}. The expansion of new taxa following \gls{fmt} represents a beneficial ecological change, as seen in patients without \gls{ibd}, while those with \gls{ibd} are more prone to revert to the original community structure. Consequently, in patients with \gls{ibd} we observed a smaller group of taxa that changed significantly seven days after \gls{fmt}. In both groups, \textit{Bacteroides} and \textit{Faecalibacterium} showed a significant increase in relative abundance, with \textit{Blautia} only being increased for patients without \gls{ibd}. Additionally, these patients showed a decrease in relative abundance of \textit{Lactobacillus}, \textit{Veillonella}, \textit{Enterobacter}, \textit{Klebsiella}, \textit{Erwinia}, \textit{Proteus}, \textit{Salmonella}, and \textit{Trabulsiella} (Figure~\ref{fmt-fig2}C and~\ref{fmt-fig2}D, ANCOM p $<$ 0.05, corrected for multiple comparisons using Bonferroni-Holm's method \cite{RN1513}).
\begin{figure}[htbp]
\includegraphics[width=0.95\columnwidth]{fmt-figures/figure-2}
\caption[SourceTracker and differential abundance comparison between IBD and non-IBD affected subjects.]{(A) and (B) Subjects with IBD retain a higher proportion of their original communities (Mann-Whitney p $<$ 0.05 at day 7, and p = 0.06 at day 28) and a significantly lower proportion of new communities (Mann-Whitney p $<$ 0.05 at day 7 and 28), as compared to the patients without IBD, using SourceTracker. (C) Bacterial taxa that change significantly in patients with IBD after FMT (ANCOM p $<$ 0.05, corrected for multiple comparisons using Bonferroni-Holm's method). (D) Bacterial taxa that change significantly in patients without IBD after FMT (ANCOM p $<$ 0.05, corrected for multiple comparisons using Bonferroni-Holm's method).
(E) Change in phylogenetic diversity-based alpha diversity 7 and 28 days following fecal microbiota transplant in patients with CDI with and without IBD (Mann-Whitney's U p $<$ 0.001). }
\label{fmt-fig2}
\end{figure}
All patients had either clinical or microbiological remission, confirming that the initial response of \gls{cdi} to \gls{fmt} is not dependent on the degree of donor stool engraftment. In this small cohort of patients, those with underlying \gls{ibd} had a higher number of late relapses of \gls{cdi}. We found no significant differences in gut microbiota composition following \gls{fmt} from standard donors or related donors (Mann-Whitney p $>$ 0.05 at day 7 and 28), suggesting that engraftment of donor stool was independent of donor type. Furthermore, as all patients had ongoing clinical remission with microbiological response (if measured), donor type does not appear to affect the \gls{cdi}-related clinical response.

\subsubsection{Change in bacterial diversity after FMT is dependent on underlying IBD}
\gls{ibd} disease course, as measured by the need for specific \gls{ibd} therapies, did not change after \gls{fmt}, and patients with \gls{cdi} and underlying \gls{ibd} retained a higher proportion of the pre-transplant communities and a lower proportion of new communities following \gls{fmt}. Thus, underlying \gls{ibd} appears to affect the change in gut microbial ecology, resulting in a less significant increase in overall diversity. In subjects without \gls{ibd}, Faith's phylogenetic diversity (which measures the total branch length of a phylogenetic tree that a given sample covers \cite{RN1490}) reached a level comparable to healthy donors (Mann-Whitney's U p $<$ 0.001, Figure~\ref{fmt-fig2}E). The differences in phylogenetic diversity following \gls{fmt} between subjects with and without \gls{ibd} became evident on day 7 and persisted on day 28 (Mann-Whitney, day -1 p = 0.163, day 7 p = 0.0058, and day 27 p = 0.008, Figure~\ref{fmt-fig2}E). A linear regression of phylogenetic diversity vs.\ the \gls{md} index (Supplementary Figure 3) shows a significantly weaker negative correlation between phylogenetic diversity and the \gls{md} index in patients with \gls{ibd} (Pearson's correlation coefficient, \gls{ibd} R=-0.68, No \gls{ibd} R=-0.83; p $<$ 0.0001; Supplementary Figure 3), suggesting a lack of recovery of phylogenetic diversity in patients with \gls{ibd} as the \gls{md} index improves.

\subsection{Discussion}
In this study, we found that gut microbiota diversity changes rapidly following \gls{fmt} for treatment of \gls{cdi} and resembles donor microbiota diversity, similar to previous studies. A successful response of \gls{cdi} to \gls{fmt} was seen with a diverse group of donors and at levels of engraftment (as measured by correlation to donor stool) varying from 50-94\% (at day 7) and 34-93\% (at day 28), based on the proportion of communities attributed to the donor following \gls{fmt} per SourceTracker, suggesting these are not critical factors in determining response. Similarly, a recent study that evaluated pre- and post-\gls{fmt} (for recurrent \gls{cdi}) gut microbiome samples from a subset of patients enrolled in a randomized controlled trial comparing donor \gls{fmt} to autologous \gls{fmt} \cite{RN1527} suggested that complete engraftment of donor bacteria may not be necessary if functionally critical taxa are present in subjects following initial antibiotic therapy for \gls{cdi} \cite{RN1524}.
This study excluded patients with \gls{ibd} but, unlike our study, was able to compare autologous to donor \gls{fmt}. There was a higher number of \gls{rcdi} episodes following \gls{fmt} in patients with \gls{cdi} and \gls{ibd}, but this was not statistically significant, likely given the small sample size. However, we have previously reported similar findings in a larger cohort of patients with \gls{cdi} and \gls{ibd} \cite{RN1498}, where gut microbiota changes were not monitored. Interestingly, in this cohort all patients had an initial clinical or microbiological remission of \gls{cdi} following \gls{fmt}, and we did not see the difference in initial response reported in a recent study \cite{RN1497}, which is also likely due to the smaller sample size of our study and differences in underlying disease characteristics. We also did not see changes in the need for \gls{ibd} therapy in the subset of patients with \gls{ibd} underlying \gls{cdi}. While dynamic variations can be seen in patients following \gls{fmt} \cite{RN1471}, patients with underlying \gls{ibd} in our study show a higher proportion of the original pre-transplant microbial community and lower recovery of phylogenetic diversity following \gls{fmt} compared to those without \gls{ibd}. This lack of beneficial change in microbial ecology may be relevant for the long-term response of \gls{cdi} in patients with \gls{ibd} and the lack of clinical response of \gls{ibd} to \gls{fmt} seen in our and previous studies \cite{RN1497}. Future studies designed to examine the effect of compositional and functional changes in gut microbiota on clinical outcomes following \gls{fmt} in patients with \gls{ibd} will be needed to definitively address the potential importance of changes in microbial ecology, donor selection \cite{RN3982}, underlying disease characteristics and multiple-dose \glspl{fmt} in correcting the underlying pathophysiology of \gls{ibd}.

\subsection{Conclusions}
There is a significant increase in microbial diversity in patients with recurrent \gls{cdi} after \gls{fmt}. Neither the degree of microbial engraftment nor donor type (related or unrelated) is key for successful treatment of \gls{rcdi} by \gls{fmt}. Compared to \gls{cdi} patients without \gls{ibd}, \gls{cdi} patients with \gls{ibd} have a higher proportion of the original microbial communities after \gls{fmt} and increased episodes of future \gls{cdi} on long-term follow-up.

\subsection{Acknowledgments}
This section, in full, is a reprint of the material as it appears in ``Changes in microbial ecology after fecal microbiota transplantation for recurrent C. difficile infection affected by underlying inflammatory bowel disease''. S. Khanna, Y. V\'azquez-Baeza, A. Gonz\'alez, S. Weiss, B. Schmidt, D. A. Muñiz-Pedrogo, J. F. Rainey 3rd, P. Kammer, H. Nelson, M. Sadowsky, A. Khoruts, S. L. Farrugia, R. Knight, D. S. Pardi, P. C. Kashyap. \emph{Microbiome}. 5, 2017. The dissertation author was the co-primary investigator and author of this paper.
{ "alphanum_fraction": 0.7707110515, "avg_line_length": 166.7972027972, "ext": "tex", "hexsha": "2ed5b7ef559ab4a6afe4576249b2cc19b29be078", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-10-19T00:52:37.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-19T00:52:37.000Z", "max_forks_repo_head_hexsha": "e6fc60eecad0f57070379d7dcc56521d3b588434", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "ElDeveloper/phd-thesis", "max_forks_repo_path": "chapter_fmt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e6fc60eecad0f57070379d7dcc56521d3b588434", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "ElDeveloper/phd-thesis", "max_issues_repo_path": "chapter_fmt.tex", "max_line_length": 2030, "max_stars_count": 3, "max_stars_repo_head_hexsha": "e6fc60eecad0f57070379d7dcc56521d3b588434", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "ElDeveloper/phd-thesis", "max_stars_repo_path": "chapter_fmt.tex", "max_stars_repo_stars_event_max_datetime": "2018-08-14T17:37:38.000Z", "max_stars_repo_stars_event_min_datetime": "2017-09-14T16:12:48.000Z", "num_tokens": 6660, "size": 23852 }
%-------------------------
% Resume in Latex
% Author : Vishal Panwar
% License : MIT
%------------------------

\documentclass[letterpaper,11pt]{article}

\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}

\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}

% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}

\urlstyle{same}

\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}

% Sections formatting
\titleformat{\section}{
  \vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]

%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
  \item\small{
    \textbf{#1}{: #2 \vspace{-2pt}}
  }
}

\newcommand{\resumeSubheading}[4]{
  \vspace{-1pt}\item
    \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
      \textbf{#1} & #2 \\
      \textit{\small#3} & \textit{\small #4} \\
    \end{tabular*}\vspace{-5pt}
}

% Bold wrapper used by the links below; it was referenced but never defined.
\newcommand{\custombold}[1]{\textbf{#1}}

\iffalse
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=black,
    filecolor=magenta,
    textbf=true,
}
\fi

\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}

\renewcommand{\labelitemii}{$\circ$}

\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}

%-------------------------------------------
%%%%%%  CV STARTS HERE  %%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
  \textbf{\Large Vishal Panwar} & Email : \href{mailto:[email protected]}{[email protected]}\\
  \href{https://www.linkedin.com/in/vishalpanwar/}{https://www.linkedin.com/in/vishalpanwar/} & Mobile : +91-9560729089 \\
\end{tabular*}

%-----------EXPERIENCE-----------------
\section{Experience}
  \resumeSubHeadingListStart

    \resumeSubheading
      {Microsoft}{Hyderabad, India}
      {Software Engineer}{July 2017 - Present}
      \resumeItemListStart
        \resumeItem{Azure Application Gateway}
          {Azure Application Gateway works as a layer 7 load balancer in the Microsoft Azure public cloud. Involved in the development of end-to-end features on top of Nginx in C\# and C++ and its interaction with NRP and Gateway Manager in a large-scale distributed-systems architecture. \\
          Link: \href{https://docs.microsoft.com/en-us/azure/application-gateway/overview}{\custombold{//Azure Application Gateway}}}
        \resumeItem{Virtual Machine Scale Set Migration \& Autoscaling}
          {Involved in extensive POCs around the performance of Application Gateway on Linux-based VMSS instead of the Windows-based cloud service on Azure, and later in the migration of the service to the Linux-based VMSS platform as a V2 version with a custom Autoscale workflow. \\
          Link: \href{https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview}{\custombold{//Application Gateway AutoScaling v2}}}
        \resumeItem{Network Performance Monitoring}
          {Developed a working solution from a Microsoft Research paper, which includes the implementation of an algorithm to discover all possible network paths \& measure the health of those paths in near real time.
\\
          Link: \href{https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor}{\custombold{//Network Performance Monitor}}}
        \resumeItem{Express Route Monitoring}
          {Extended the Network Performance Monitoring solution to monitor network traffic across Express Route in Azure, in both public and private peering. \\
          Link: \href{https://docs.microsoft.com/en-us/azure/expressroute/how-to-npm}{\custombold{//Network Performance Monitoring for Express Route}}}
      \resumeItemListEnd

    \resumeSubheading
      {Coding Blocks}{New Delhi, India}
      {C++ Algo Instructor \& Competitive Coding Mentor}{September 2016 - June 2017}
      \resumeItemListStart
        \resumeItem{Advanced Data Structures \& Algorithms}
          {Taught an advanced data structures and algorithms course with a class size of 50 students at the Coding Blocks Dwarka Centre.}
        \resumeItem{Problem Setter}
          {Designed more than 100 problems for the Coding Blocks online platform.}
        \resumeItem{Competitive Coding Bootcamp}
          {Conducted 3 competitive coding bootcamps teaching introductory and advanced topics to around 500 students and compiled a booklet on a variety of topics. Co-authored a book on the basics of competitive programming, available on Amazon. \\
          Link: \href{https://www.amazon.in/Mastering-Competitive-Programming-Coding-Blocks/dp/8193754301}{\custombold{//Mastering Competitive Programming Book}}}
      \resumeItemListEnd

    \resumeSubheading
      {Microsoft}{Hyderabad, India}
      {Software Engineering Intern}{June 2016 - July 2016}
      \resumeItemListStart
        \resumeItem{System Center Operations Manager}
          {Enabled Management Server and Agent communication across untrusted boundaries using Gateway Servers. Simulated and tested the monitoring of IaaS Virtual Machines in the Enterprise Domain by creating virtual networks in Azure using existing SCOM components. Used Azure PowerShell for automation, currently used by the SCOM team to deploy and test changes. \\
          Awarded a full-time Pre-Placement Offer for the contributions made during the internship. \\
          Link: \href{https://social.technet.microsoft.com/wiki/contents/articles/51554.scom-2016-integration-with-operations-management-suite-oms.aspx}{\custombold{//SCOM Integration with Operations Management Suite (OMS)}}}
      \resumeItemListEnd

  \resumeSubHeadingListEnd

%-----------EDUCATION-----------------
\section{Education}
  \resumeSubHeadingListStart
    \resumeSubheading
      {Delhi Technological University}{New Delhi, India}
      {Bachelor of Technology in Mathematics and Computer Science; GPA: 4.0 (8.75/10.0)}{Aug. 2013 -- June 2017}
  \resumeSubHeadingListEnd

%-----------PROJECTS-----------------
\section{Projects}
  \resumeSubHeadingListStart
    \resumeSubItem{Dancing Link Algorithm-X}
      {Algorithm X via Dancing Links to solve Sudoku by transforming it into an Exact Cover problem.}
    \resumeSubItem{Flow Chart Automation}
      {Transform rough figures into a properly shaped canvas flow chart with equal dimensions.}
    \resumeSubItem{Web crawler}
      {Create a website link map by crawling a specific domain.}
    \resumeSubItem{Webcam Sudoku Puzzle Solver}
      {Scan and solve Sudoku puzzles from an image/webcam using OpenCV and NumPy in Python.
Digits were recognized via an OCR engine implemented using KNN.}
  \resumeSubHeadingListEnd

%-----------SKILLS-----------------
\section{Skills}
  \resumeSubHeadingListStart
    \resumeSubItem{Programming}
      {C, C++ (STL \& Boost), C\#, Python, Go, JavaScript, Lua, PowerShell}
    \resumeSubItem{Technologies}
      {Azure, AWS, Windows, Linux, Hyper-V, Visual Studio, Wireshark, Bash, Nginx, WinDbg, Apache}
    \resumeSubItem{Competitive Coding}
      {5 Star Coder @ CodeChef \href{https://codechef.com/users/code_zilla}{\custombold{(//code\_zilla)}}, Expert @ Codeforces \href{https://codechef.com/users/code_zilla}{\custombold{(//al\_chemist)}}}
  \resumeSubHeadingListEnd

%
%--------PROGRAMMING SKILLS------------
%\section{Programming Skills}
%  \resumeSubHeadingListStart
%    \item{
%      \textbf{Languages}{: Scala, Python, Javascript, C++, SQL, Java}
%      \hfill
%      \textbf{Technologies}{: AWS, Play, React, Kafka, GCE}
%    }
%  \resumeSubHeadingListEnd

%-------------------------------------------
\end{document}
{ "alphanum_fraction": 0.7006903353, "avg_line_length": 42.9206349206, "ext": "tex", "hexsha": "348fa3c6d70559604721955e5efcd5a973b9a1a6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6d4dc8db94c6b71af045e163e17cefc3b7d7cfde", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vishalpanwar/Latex-Resume", "max_forks_repo_path": "vishal_resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6d4dc8db94c6b71af045e163e17cefc3b7d7cfde", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vishalpanwar/Latex-Resume", "max_issues_repo_path": "vishal_resume.tex", "max_line_length": 360, "max_stars_count": null, "max_stars_repo_head_hexsha": "6d4dc8db94c6b71af045e163e17cefc3b7d7cfde", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vishalpanwar/Latex-Resume", "max_stars_repo_path": "vishal_resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2068, "size": 8112 }
\documentclass[fleqn, colorlinks]{goose-article} \title{% Elasto-plastic continuum model based on a manifold of quadratic potentials } \author{Tom W.J.\ de Geus} \hypersetup{pdfauthor={T.W.J. de Geus}} \newcommand\leftstar[1]{\hspace*{-.3em}~^\star\!#1} \newcommand\T[1]{\underline{\bm{{#1}}}} \newcommand\TT[1]{\underline{\mathbb{{#1}}}} \begin{document} \maketitle \begin{abstract} \noindent A microscopic continuum model of plasticity in amorphous solids is proposed. This model uses a strain energy with multiple minima to capture the effect of plasticity. This model was used for the first time in \citet{DeGeus2019} and was partly inspired on the work of \citet{Jagla2017}. \\ \noindent \emph{If you use this model, you are kindly requested to cite \cite{DeGeus2019}.} \\ \keywords{elasto-plasticity; linear elasticity} \end{abstract} \setcounter{tocdepth}{3} \tableofcontents \vfill\newpage \section{General model} The model is constructed such that it behaves linear elastically in the volumetric stress response. The same holds for the deviatoric stress response, whereby plasticity is modelled such that the material starts flowing once a critical strain is reached. After a period of flow, the deviatoric stress response is again linear elastic. Below underlined bold symbols $\T{A}$ are tensors, while normal symbols $A$ are scalars. Strains are denoted by $\varepsilon$ while stresses are denoted by $\sigma$. Subscripts $(.)_\mathrm{m}$ and $(.)_\mathrm{d}$ are used to indicate the volumetric and deviatoric part of the strains and stress. Furthermore, the elastic moduli and the equivalent stress and strain are defined such (i) that the model is equivalent regardless of the number of dimensions, $d$; (ii) for simple shear the equivalent deviatoric strain is equal to $\varepsilon_\mathrm{xy}$; (iii) $\sigma_\mathrm{xy} = G \varepsilon_\mathrm{xy}$, with $G$ the shear modulus. To retrieve another common definition for linear elasticity in $d = 3$, one simply has to rescale the parameters. See the appendices for the full nomenclature, including the parameter transformation. The model is based on a strain energy $W$ that is composed of two parts, a hydrostatic (or volumetric) part $U$ related to the hydrostatic strain $\varepsilon_\mathrm{m}$, and a deviatoric (or shear) part $V$ related to the equivalent shear strain $\varepsilon_\mathrm{d}$, i.e. \begin{equation} W(\T{\varepsilon}) = U(\varepsilon_\mathrm{m}) + V(\varepsilon_\mathrm{d}) \end{equation} The stress response $\T{\sigma}$ is the derivative of this energy with respect to the strain tensor $\T{\varepsilon}$. Before specialising $U$ and $V$ we can already say that \begin{equation} \label{eq:dU-dV:elas} \T{\sigma} = \frac{\partial W}{\partial \T{\varepsilon}} = \frac{\partial U}{\partial \varepsilon_\mathrm{m}} \; \frac{\partial \varepsilon_\mathrm{m}}{\partial \T{\varepsilon}} + \frac{\partial V}{\partial \varepsilon_\mathrm{d}} \; \frac{\partial \varepsilon_\mathrm{d}}{\partial \T{\varepsilon}} = \frac{\partial U}{\partial \varepsilon_\mathrm{m}} \; \frac{1}{d} \T{I} + \frac{\partial V}{\partial \varepsilon_\mathrm{d}} \; \frac{1}{2} \frac{\T{\varepsilon}_\mathrm{d}}{\varepsilon_\mathrm{d}} = \frac{1}{d} \; \frac{\partial U}{\partial \varepsilon_\mathrm{m}} \; \T{I} + \frac{1}{2} \frac{\partial V}{\partial \varepsilon_\mathrm{d}} \; \T{N}_\mathrm{d} \end{equation} Below, both $U(\varepsilon_\mathrm{m})$ and $V(\varepsilon_\mathrm{d})$ will be defined by slowly increasing complexity, departing from linear elasticity. 
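As a brief illustration of the conventions above, consider simple shear in $d = 2$, $\T{\varepsilon} = \gamma \, ( \vec{e}_x \vec{e}_y + \vec{e}_y \vec{e}_x )$, i.e.\ $\varepsilon_{xy} = \gamma$. Since $\mathrm{tr} ( \T{\varepsilon} ) = 0$, one has $\varepsilon_\mathrm{m} = 0$ and $\T{\varepsilon}_\mathrm{d} = \T{\varepsilon}$, such that
\begin{equation}
  \varepsilon_\mathrm{d}
  = \sqrt{ \tfrac{1}{2} \, \T{\varepsilon}_\mathrm{d} : \T{\varepsilon}_\mathrm{d} }
  = \sqrt{ \tfrac{1}{2} \, ( \gamma^2 + \gamma^2 ) }
  = | \gamma |
  = | \varepsilon_{xy} |
\end{equation}
With the linear elastic choice $\partial V / \partial \varepsilon_\mathrm{d} = 2 \, G \, \varepsilon_\mathrm{d}$ introduced in the next section, \cref{eq:dU-dV:elas} then yields $\T{\sigma} = G \, \T{\varepsilon}_\mathrm{d}$ and thus $\sigma_{xy} = G \, \varepsilon_{xy}$, consistent with the properties listed above.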
\subsection{Linear elasticity}

We start simple by considering linear elasticity. In this case the volumetric strain energy $U$ and the shear strain energy $V$ read
\begin{align}
  \label{eq:W:elas}
  U (\varepsilon_\mathrm{m}) &= \frac{d}{2} \, K \, \varepsilon_\mathrm{m}^2
  \\
  \label{eq:V:elas}
  V (\varepsilon_\mathrm{d}) &= G \, \varepsilon_\mathrm{d}^2
\end{align}
The two potentials are plotted in \cref{fig:U-V:elas} (only $\varepsilon_\mathrm{d} \geq 0$ is shown, as it is by definition non-negative). It is trivial to obtain that
\begin{align}
  \frac{\partial U}{\partial \varepsilon_\mathrm{m}} &= d \, K \, \varepsilon_\mathrm{m}
  \\
  \frac{\partial V}{\partial \varepsilon_\mathrm{d}} &= 2 \, G \, \varepsilon_\mathrm{d}
\end{align}
(plotted in \cref{fig:dU-dV:elas}), from which we obtain the following expression for the stress
\begin{equation}
  \label{eq:sig-elas}
  \T{\sigma} ( \T{\varepsilon} )
  = K \, \varepsilon_\mathrm{m} \, \T{I} + G \, \varepsilon_\mathrm{d} \, \T{N}_\mathrm{d}
  = K \, \varepsilon_\mathrm{m} \, \T{I} + G \, \T{\varepsilon}_\mathrm{d}
\end{equation}
where the direction of shear is contained in
\begin{equation}
  \T{N}_\mathrm{d} \equiv \frac{\T{\varepsilon}_\mathrm{d}}{\varepsilon_\mathrm{d}}
\end{equation}
Note that
\begin{equation}
  \sqrt{\tfrac{1}{2} \T{N}_\mathrm{d} : \T{N}_\mathrm{d}} = 1,
  \qquad
  \frac{ \partial \varepsilon_\mathrm{d} }{ \partial \T{\varepsilon}_\mathrm{d} } = \tfrac{1}{2} \T{N}_\mathrm{d}
\end{equation}
(see \cref{sec:nomenclature:derivatives}).

\begin{figure}[htp]
  \centering
  \includegraphics[width=1.\textwidth]{figures/potential_U-V_elas}
  \caption{Strain energy $W(\T{\varepsilon}) = U(\varepsilon_\mathrm{m}) + V(\varepsilon_\mathrm{d})$ for linear elasticity.}
  \label{fig:U-V:elas}
\end{figure}

\begin{figure}[htp]
  \centering
  \includegraphics[width=1.\textwidth]{figures/potential_dU-dV_elas}
  \caption{Derivative of the hydrostatic strain energy $U$ and the deviatoric strain energy $V$ w.r.t.\ respectively the hydrostatic strain $\varepsilon_\mathrm{m}$ and the equivalent shear strain $\varepsilon_\mathrm{d}$.}
  \label{fig:dU-dV:elas}
\end{figure}

Note that a tangent or stiffness tensor can also be defined:
\begin{equation}
  \delta \bm{\sigma} \equiv \mathbb{C} : \delta \bm{\varepsilon}
\end{equation}
which for the linear model is valid independent of the size of the variation. It is trivial to show that
\begin{equation}
  \mathbb{C} = \frac{K}{d} \bm{I} \bm{I} + G \, \mathbb{I}_d
\end{equation}

\subsection{Plastic potential -- Parabolic potential with multiple minima}

The model is now extended to account for plasticity. The model is defined such that the volumetric response of the material is purely elastic, while in shear the model is governed by multiple minima. These minima have the effect that when the material reaches a certain yield stress, it jumps to the next minimum. Around this minimum the elasticity is always the same. When loading is continued, the material again jumps to a new minimum when the next yield stress is reached. The magnitude of the jumps and of the yield stress are thereby related. As described, the volumetric behaviour is simply elastic, whereby the potential is given by \cref{eq:W:elas} and is plotted in \cref{fig:U-V:elas}(a). To attain the desired behaviour in shear, the equivalent shear strain space is divided into intervals by a finite number of yield strains $\varepsilon_\mathrm{y}^{(0)}, \varepsilon_\mathrm{y}^{(1)}, \varepsilon_\mathrm{y}^{(2)}, \ldots$.
A parabolic potential is then defined between each pair ($[ \varepsilon_\mathrm{y}^{(0)}, \varepsilon_\mathrm{y}^{(1)} )$, $[ \varepsilon_\mathrm{y}^{(1)}, \varepsilon_\mathrm{y}^{(2)} )$, \ldots). The shear strain energy is then composed of a manifold of quadratic contributions
\begin{equation}
  \label{eq:V-plas}
  V \big( \varepsilon_\mathrm{y}^{(i)} \leq \varepsilon_\mathrm{d} < \varepsilon_\mathrm{y}^{(i+1)} \big)
  = V^{(i)}
  = G \, \bigg[\, \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big]^2 - \Big[\, \Delta \varepsilon_\mathrm{y}^{(i)} \,\Big]^2 \,\bigg]
\end{equation}
where the mean of $\varepsilon_\mathrm{y}^{(i)}$ and $\varepsilon_\mathrm{y}^{(i+1)}$ is
\begin{equation}
  \varepsilon_\mathrm{min}^{(i)} = \tfrac{1}{2} \Big[\, \varepsilon_\mathrm{y}^{(i+1)} + \varepsilon_\mathrm{y}^{(i)} \,\Big]
\end{equation}
which is also the equivalent shear strain at which the shear strain energy reaches its minimum. From this minimum, the distance to $\varepsilon_\mathrm{y}^{(i)}$ and $\varepsilon_\mathrm{y}^{(i+1)}$ is
\begin{equation}
  \Delta \varepsilon_\mathrm{y}^{(i)} = \tfrac{1}{2} \Big[\, \varepsilon_\mathrm{y}^{(i+1)} - \varepsilon_\mathrm{y}^{(i)} \,\Big]
\end{equation}
The resulting shear strain energy is plotted in \cref{fig:V:plas}(a). The stress response is obtained from
\begin{equation}
  \label{eq:dV-plas}
  \frac{\partial V^{(i)}}{\partial \varepsilon_\mathrm{d}} = 2 \, G \, \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big]
\end{equation}
(see \cref{fig:dV:plas}(a)), from which it can be observed that in the elastic regime the behaviour is identical to the above (cf.~\cref{eq:dU-dV:elas}). For the case that $\varepsilon_\mathrm{y}^{(0)} = - \varepsilon_\mathrm{y}^{(1)}$ the responses are even identical until the initial yield stress is reached. For completeness, the stress reads
\begin{equation}
  \T{\sigma} ( \T{\varepsilon} )
  = K \, \varepsilon_\mathrm{m} \, \T{I} + G \, \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big] \; \T{N}_\mathrm{d}
  \qquad \mathrm{for} \; \varepsilon_\mathrm{y}^{(i)} \leq \varepsilon_\mathrm{d} < \varepsilon_\mathrm{y}^{(i+1)}
\end{equation}
whereby one has to assume that when $\varepsilon_\mathrm{d} = 0$ also $\T{\sigma}_\mathrm{d} = \T{0}$, in order to avoid division by zero. The response is plotted in \cref{fig:dV:plas}(a), from which it is observed that it exhibits stress jumps between the different parabolas in the potential, because of the discontinuity in the second derivative of the elastic potential. This can be remedied, such as in the model presented below. Note that within (and only within) the elastic regime the elastic tangent or stiffness tensor holds.

\subsection{Plastic potential -- Smooth parabolic potential with multiple minima}

To remedy the discontinuity in the second derivative of the potential, it is smoothed as follows:
\begin{equation}
  \label{eq:V-plas-smooth}
  V \big( \varepsilon_\mathrm{y}^{(i)} \leq \varepsilon_\mathrm{d} < \varepsilon_\mathrm{y}^{(i+1)} \big)
  = V^{(i)}
  = - 2 \, G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right]^2 \left[ 1 + \cos \left( \frac{ \pi }{ \Delta \varepsilon_\mathrm{y}^{(i)} } \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big] \right) \right]
\end{equation}
which is plotted in \cref{fig:V:plas}(b).
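For concreteness, \cref{eq:V-plas-smooth} can be verified at a few characteristic points of the interval. At the yield strains the cosine equals $\cos ( \pm \pi ) = -1$, such that $V^{(i)} = 0$: the energy is continuous between neighbouring intervals. At the minimum, $\varepsilon_\mathrm{d} = \varepsilon_\mathrm{min}^{(i)}$, the cosine equals one, giving the well depth
\begin{equation}
  V^{(i)} = - 4 \, G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right]^2
\end{equation}
Expanding the cosine around this minimum, $1 + \cos ( x ) \approx 2 - \tfrac{1}{2} x^2$, furthermore gives
\begin{equation}
  V^{(i)} \approx G \, \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big]^2 - 4 \, G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right]^2
\end{equation}
i.e.\ up to a constant offset the same quadratic behaviour as \cref{eq:V-plas}.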
In this case the stress is obtained from
\begin{equation}
  \label{eq:dV-plas-smooth}
  \frac{\partial V^{(i)}}{\partial \varepsilon_\mathrm{d}}
  = 2 \, G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right] \sin \left( \frac{ \pi }{ \Delta \varepsilon_\mathrm{y}^{(i)} } \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big] \right)
\end{equation}
(see \cref{fig:dV:plas}(b)), which is to first order equal to linear elasticity around its minimum $\varepsilon_\mathrm{min}^{(i)}$. Indeed, the first order Taylor series of \cref{eq:dV-plas-smooth} around $\varepsilon_\mathrm{d} = \varepsilon_\mathrm{min}^{(i)}$,
\begin{equation}
  \frac{\partial V^{(i)}}{\partial \varepsilon_\mathrm{d}} \approx 2 \, G \, \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big]
\end{equation}
is identical to \cref{eq:dV-plas}. For completeness, also in this case the expression for the entire stress tensor reads
\begin{equation}
  \T{\sigma} ( \T{\varepsilon} )
  = K \, \varepsilon_\mathrm{m} \, \T{I}
  + G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right] \sin \left( \frac{ \pi }{ \Delta \varepsilon_\mathrm{y}^{(i)} } \Big[\, \varepsilon_\mathrm{d} - \varepsilon_\mathrm{min}^{(i)} \,\Big] \right) \T{N}_\mathrm{d}
  \qquad \mathrm{for} \; \varepsilon_\mathrm{y}^{(i)} \leq \varepsilon_\mathrm{d} < \varepsilon_\mathrm{y}^{(i+1)}
\end{equation}
whereby, again, one has to assume that when $\varepsilon_\mathrm{d} = 0$ also $\T{\sigma}_\mathrm{d} = \T{0}$, in order to avoid division by zero. Note that within (and only within) the elastic regime the elastic tangent or stiffness tensor holds.

\begin{figure}[htp]
  \centering
  \includegraphics[width=1.\textwidth]{figures/potential_V-plas}
  \caption{The multi-minima shear strain energy, $V ( \varepsilon_\mathrm{d} )$, that models the effect of plasticity. The multi-parabolic shear strain energy is shown in (a), while its smoothed equivalent is shown in (b).}
  \label{fig:V:plas}
\end{figure}

\begin{figure}[htp]
  \centering
  \includegraphics[width=1.\textwidth]{figures/potential_dV-plas}
  \caption{Derivative of the shear strain energy $V$.}
  \label{fig:dV:plas}
\end{figure}

\section{Specialized model -- planar shear}

\subsection{Motivation}

The model as proposed above treats all shear modes equally (because $V$ is a function of $\varepsilon_\mathrm{d}$). Consequently, it is isotropic. There are, however, cases for which an anisotropic model is more realistic. Below, a model is proposed in which the plasticity can only occur along a specific plane. Before that, the claim of isotropy is further motivated in two dimensions. In that case the strain tensor has three independent modes, illustrated in \cref{fig:strain-modes:2d}.

\begin{figure}[htp]
  \centering
  \includegraphics[width=.7\textwidth]{figures/strain-modes_2d}
  \caption{The three independent modes described by a 2-d strain tensor $\T{\varepsilon}$.}
  \label{fig:strain-modes:2d}
\end{figure}

The first shear mode, in \cref{fig:strain-modes:2d}(b), corresponds to a strain tensor of the following structure
\begin{equation}
  \label{eq:strain-modes:basic}
  \underline{\underline{\varepsilon}} =
  \begin{bmatrix}
    0 & \gamma \\
    \gamma & 0
  \end{bmatrix}
\end{equation}
The second shear mode, in \cref{fig:strain-modes:2d}(c), corresponds to the same shear deformation rotated by $-\pi/4$.
It is therefore of the structure
\begin{equation}
  \underline{\underline{\varepsilon}} =
  \begin{bmatrix}
    \gamma & 0 \\
    0 & -\gamma
  \end{bmatrix}
\end{equation}
In terms of the equivalent shear strain, both modes result in $\varepsilon_\mathrm{d} = |\gamma|$. This is further illustrated by rotating the strain tensor from \cref{eq:strain-modes:basic} by an angle $\theta$:
\begin{equation}
  \underline{\underline{\varepsilon}}^\prime = \underline{\underline{R}} \, \underline{\underline{\varepsilon}} \, \underline{\underline{R}}^T
\end{equation}
where the rotation matrix depends on the rotation angle $\theta$ as follows
\begin{equation}
  \underline{\underline{R}} =
  \begin{bmatrix}
    \cos \theta & - \sin \theta \\
    \sin \theta & \cos \theta
  \end{bmatrix}
\end{equation}
Trivially $\varepsilon_\mathrm{d} = | \gamma |$, independent of $\theta$ -- as plotted in \cref{fig:shear-modes:epseq} in black. The contributions of the two shear modes are assessed by examining $\varepsilon^\prime_{xy}$, representative of the first shear mode in \cref{fig:strain-modes:2d}(b), and $\varepsilon^\prime_{xx}$, representative of the second shear mode in \cref{fig:strain-modes:2d}(c). These contributions are plotted in \cref{fig:shear-modes:epseq} in green and red, respectively. Indeed, the contributions of the two shear modes vary with $\theta$, while $\varepsilon_\mathrm{d}$ is oblivious to this rotation.

\begin{figure}[htp]
  \centering
  \includegraphics[width=.5\textwidth]{figures/strain-modes_2d_epseq}
  \caption{Comparison between the equivalent shear strain $\varepsilon_\mathrm{d}^\prime$, the strain along the $x$-plane, $\varepsilon_\mathrm{s}^\prime = \varepsilon_{xy}^\prime = \vec{e}_x \cdot \T{\varepsilon}^\prime \cdot \vec{e}_y$, and the strain perpendicular to it, $\varepsilon_\mathrm{n}^\prime = \varepsilon_{xx}^\prime = \vec{e}_x \cdot \T{\varepsilon}^\prime \cdot \vec{e}_x$, for a 2-d simple shear strain tensor that is rotated by $\theta$ with respect to the $x$-axis, with normal $\vec{n} = \vec{e}_y$.}
  \label{fig:shear-modes:epseq}
\end{figure}

\subsection{Model}

In this model, the plasticity will be localised on a plane with normal $\vec{n}$ (which has unit length $||\, \vec{n} \,|| \equiv 1$), see \cref{fig:strain-vector-planar}. To do so, the strain deviator $\T{\varepsilon}_\mathrm{d}$ is decomposed into two parts, the strain along the plastic (`weak') plane $\T{\varepsilon}_\mathrm{s}$, and the remaining strain $\T{\varepsilon}_\mathrm{n}$, i.e.
\begin{equation}
  \label{eq:planar:strain:decomposition}
  \T{\varepsilon}_\mathrm{d} = \T{\varepsilon}_\mathrm{s} + \T{\varepsilon}_\mathrm{n}
\end{equation}
To compute the former, planar strain tensor $\T{\varepsilon}_\mathrm{s}$, first the direction of the deviatoric strain tensor projected on the plane is determined as
\begin{equation}
  \vec{s}_\mathrm{n} = \frac{ \T{\varepsilon}_\mathrm{d} \cdot \vec{n} }{ ||\, \T{\varepsilon}_\mathrm{d} \cdot \vec{n} \,|| }
\end{equation}
The strain direction along the plane is now found by projecting $\vec{s}_\mathrm{n}$ on it:
\begin{equation}
  \vec{s} = \frac{ \vec{s}_\mathrm{n} - ( \vec{s}_\mathrm{n} \cdot \vec{n} )\, \vec{n} }{ ||\, \vec{s}_\mathrm{n} - ( \vec{s}_\mathrm{n} \cdot \vec{n} )\, \vec{n} \,|| }
\end{equation}
The amount of strain in this direction is finally found to be
\begin{equation}
  \varepsilon_\mathrm{s} = \vec{s} \cdot \T{\varepsilon}_\mathrm{d} \cdot \vec{n}
\end{equation}
Note that $\varepsilon_\mathrm{s}$ is by definition non-negative, i.e.\ it is oblivious to rotations about $\vec{n}$.
The planar deviatoric strain tensor can now be constructed: \begin{equation} \T{\varepsilon}_\mathrm{s} = \varepsilon_\mathrm{s} \big( \vec{s} \otimes \vec{n} + \vec{n} \otimes \vec{s} \big) \end{equation} From this it is obvious that $\varepsilon_\mathrm{s}$ is the equivalent shear strain of this tensor, i.e. \begin{equation} \varepsilon_\mathrm{s} \equiv \sqrt{ \tfrac{1}{2} \T{\varepsilon}_\mathrm{s} : \T{\varepsilon}_\mathrm{s} } \end{equation} (To show this one has to use that $\vec{n} \cdot \vec{n} \equiv 1$ and $\vec{s} \cdot \vec{s} \equiv 1$ while $\vec{n} \cdot \vec{s} \equiv 0$). \begin{figure}[htp] \centering \includegraphics[width=.35\textwidth]{figures/strain-vector-planar} \caption{ Strain along a plane defined by its normal $\vec{n}$: the strain vector $\vec{s}_\mathrm{n}$ and its planar projection $\vec{s}$.} \label{fig:strain-vector-planar} \end{figure} The remaining strain can now be trivially obtained from \cref{eq:planar:strain:decomposition} as \begin{equation} \T{\varepsilon}_\mathrm{n} = \T{\varepsilon}_\mathrm{d} - \T{\varepsilon}_\mathrm{s} \end{equation} Its equivalent shear strain reads \begin{equation} \varepsilon_\mathrm{n} = \sqrt{\tfrac{1}{2} \T{\varepsilon}_\mathrm{n} : \T{\varepsilon}_\mathrm{n}} \end{equation} For plasticity to occur only along the `weak' plane, the shear strain energy is further decomposed in a planar part $V_\mathrm{s}$ that will be plastic, and a non-planar part $V_\mathrm{n}$ that will be elastic: \begin{equation} V = V_\mathrm{s} ( \varepsilon_\mathrm{s} ) + V_\mathrm{n} ( \varepsilon_\mathrm{n} ) \end{equation} Based on the definitions of the elastic and plastic potentials the following expression for the stress is obtained \begin{equation} \T{\sigma} ( \T{\varepsilon} ) = K \, \varepsilon_\mathrm{m} \, \T{I} + G \, \T{\varepsilon}_\mathrm{n} + G \, \left[ \frac{\Delta \varepsilon_\mathrm{y}^{(i)}}{\pi} \right] \sin \left( \frac{ \pi }{ \Delta \varepsilon_\mathrm{y}^{(i)} } \Big[\, \varepsilon_\mathrm{s} - \varepsilon_\mathrm{min}^{(i)} \,\Big] \right) \frac{\T{\varepsilon}_\mathrm{s}}{\varepsilon_\mathrm{s}} \quad \mathrm{for} \; \varepsilon_\mathrm{y}^{(i)} \leq \varepsilon_\mathrm{s} < \varepsilon_\mathrm{y}^{(i+1)} \end{equation} \appendix \vfill\newpage \section{Tensors and tensor products} \label{sec:nomenclature:tensor} \begin{itemize} \item Second order tensor \begin{equation} \T{A} = A_{ij} \vec{e}_i \vec{e}_j \end{equation} \item Dyadic tensor product \begin{align} \T{C} &= \vec{a} \otimes \vec{b} \\ C_{ij} &= a_{i} \, b_{j} \end{align} \item Double tensor contraction \begin{align} C &= \T{A} : \T{B} = \mathrm{tr} \left( \T{A} \cdot \T{B} \right) \\ &= A_{ij} \, B_{ji} \end{align} \end{itemize} \section{Unit tensors} \label{sec:nomenclature:unit} \begin{itemize} \item Second order unit tensor \begin{equation} \T{I} = \delta_{ij} \vec{e}_i \vec{e}_j \end{equation} It is easy to show that it has the property that \begin{equation} \T{I} : \T{A} = \mathrm{tr} ( \bm{A} ) \end{equation} \item Fourth order unit tensor: \begin{equation} \TT{I} : \T{A} \equiv \T{A} \end{equation} i.e. 
\begin{equation}
  \delta_{il} \delta_{jk} A_{lk} = A_{ij}
\end{equation}
hence
\begin{equation}
  \TT{I} = \delta_{il} \delta_{jk} \vec{e}_i \vec{e}_j \vec{e}_k \vec{e}_l
\end{equation}

\item Deviatoric projection
\begin{equation}
  \TT{I}_\mathrm{d} : \T{A} \equiv \T{A} - \tfrac{1}{d} \mathrm{tr} ( \bm{A} ) \T{I}
\end{equation}
hence
\begin{equation}
  \TT{I}_\mathrm{d} = \TT{I} - \tfrac{1}{d} \T{I} \otimes \T{I}
  = \left( \delta_{il} \delta_{jk} - \tfrac{1}{d} \delta_{ij} \delta_{kl} \right) \vec{e}_i \vec{e}_j \vec{e}_k \vec{e}_l
\end{equation}

\end{itemize}

\section{Strain measures}
\label{sec:nomenclature:strain}

\begin{itemize}

\item Volumetric strain (in $d$ dimensions)
\begin{equation}
  \varepsilon_\mathrm{m} = \tfrac{1}{d} \, \mathrm{tr} ( \T{\varepsilon} ) = \tfrac{1}{d} \, \T{\varepsilon} : \T{I}
\end{equation}

\item Strain deviator
\begin{equation}
  \T{\varepsilon}_\mathrm{d}
  = \T{\varepsilon} - \tfrac{1}{d} \, \mathrm{tr} ( \T{\varepsilon} ) \, \T{I}
  = \T{\varepsilon} - \varepsilon_\mathrm{m} \, \T{I}
  = \TT{I}_\mathrm{d} : \T{\varepsilon}
\end{equation}

\item Equivalent shear strain
\begin{equation}
  \varepsilon_\mathrm{d} = \; \sqrt{ \tfrac{1}{2} \, \T{\varepsilon}_\mathrm{d} : \T{\varepsilon}_\mathrm{d} }
\end{equation}

\end{itemize}

\section{Stress measures}
\label{sec:nomenclature:stress}

\begin{itemize}

\item Hydrostatic stress (in $d$ dimensions)
\begin{equation}
  \sigma_\mathrm{m} = \tfrac{1}{d} \, \mathrm{tr} ( \T{\sigma} ) = \tfrac{1}{d} \, \T{\sigma} : \T{I}
\end{equation}

\item Stress deviator
\begin{equation}
  \T{\sigma}_\mathrm{d}
  = \T{\sigma} - \tfrac{1}{d} \, \mathrm{tr} ( \T{\sigma} ) \, \T{I}
  = \T{\sigma} - \sigma_\mathrm{m} \, \T{I}
  = \TT{I}_\mathrm{d} : \T{\sigma}
\end{equation}

\item Equivalent shear stress
\begin{equation}
  \sigma_\mathrm{d} = \sqrt{ 2 \, \T{\sigma}_\mathrm{d} : \T{\sigma}_\mathrm{d} }
\end{equation}
Note that this measure is not used by the model itself. It is just provided for convenience, its definition being such that it is work-conjugate with the equivalent strain in the case of simple shear. I.e.\ for $\bm{\varepsilon} = \gamma (\vec{e}_x \vec{e}_y + \vec{e}_y \vec{e}_x)$, in the case of elasticity, $\bm{\sigma} : \bm{\varepsilon} = \sigma_\mathrm{d} \varepsilon_\mathrm{d}$.
\end{itemize}

\section{Derivatives}
\label{sec:nomenclature:derivatives}

\begin{itemize}

\item Trace of the strain
\begin{equation}
  \frac{ \partial \, \mathrm{tr} ( \T{\varepsilon} ) }{ \partial \T{\varepsilon} }
  = \frac{ \partial }{ \partial \T{\varepsilon} } \left( \T{\varepsilon} : \T{I} \right)
  = \frac{ \partial \T{\varepsilon} }{ \partial \T{\varepsilon} } : \T{I}
  = \TT{I} : \T{I}
  = \T{I}
\end{equation}

\item Strain deviator
\begin{equation}
  \frac{\partial \T{\varepsilon}_\mathrm{d}}{\partial \T{\varepsilon}}
  = \frac{\partial}{\partial \T{\varepsilon}} \left( \T{\varepsilon} - \tfrac{1}{d} \mathrm{tr} ( \T{\varepsilon} ) \T{I} \right)
  = \TT{I} - \tfrac{1}{d} \T{I} \otimes \T{I}
  = \TT{I}_\mathrm{d}
\end{equation}

\item Equivalent shear strain
\begin{equation}
  \frac{ \partial \varepsilon_\mathrm{d} }{ \partial \T{\varepsilon} }
  = \frac{\partial}{\partial \T{\varepsilon}} \sqrt{\tfrac{1}{2}\, \T{\varepsilon}_\mathrm{d} : \T{\varepsilon}_\mathrm{d}}
  = \frac{1}{2 \varepsilon_\mathrm{d}} \frac{1}{2} \left[\, \frac{\partial \T{\varepsilon}_\mathrm{d}}{\partial \T{\varepsilon}} : \T{\varepsilon}_\mathrm{d} + \T{\varepsilon}_\mathrm{d} : \frac{\partial \T{\varepsilon}_\mathrm{d}}{\partial \T{\varepsilon}} \,\right]
  = \frac{1}{4 \varepsilon_\mathrm{d}} \big[\, \TT{I}_\mathrm{d} : \T{\varepsilon}_\mathrm{d} + \T{\varepsilon}_\mathrm{d} : \TT{I}_\mathrm{d} \,\big]
  = \frac{1}{2} \frac{\T{\varepsilon}_\mathrm{d}}{\varepsilon_\mathrm{d}}
  \equiv \tfrac{1}{2} \T{N}_\mathrm{d}
\end{equation}

\end{itemize}

\section{Conversion to common parameters for linear elasticity}

The parameters are compared to their common definitions, denoted here by $\kappa$ and $\mu$, for linear elasticity in $d = 3$:

\begin{itemize}

\item Bulk modulus
\begin{equation}
  \kappa = K / 3
\end{equation}

\item Shear modulus
\begin{equation}
  \mu = G / 2
\end{equation}

\end{itemize}

This results in a stress response
\begin{equation}
  \T{\sigma} ( \T{\varepsilon} )
  = \kappa \, \mathrm{tr} ( \T{\varepsilon} ) \T{I} + 2 \mu \left[ \T{\varepsilon} - \tfrac{1}{3} \mathrm{tr} ( \T{\varepsilon} ) \T{I} \right]
  = \kappa \, \mathrm{tr} ( \T{\varepsilon} ) \T{I} + 2 \mu \, \T{\varepsilon}_\mathrm{d}
\end{equation}

\bibliographystyle{unsrtnat}
\bibliography{library}

\end{document}
{ "alphanum_fraction": 0.6463157115, "avg_line_length": 34.4738186462, "ext": "tex", "hexsha": "f52034b6f11e551c494129f5697d78c093a63043", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-08-16T20:12:14.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-16T20:12:14.000Z", "max_forks_repo_head_hexsha": "fd6f9dcfd11cf3e4c65fd41eb974ed7fb7f83bbd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tdegeus/ElastoPlasticQPot", "max_forks_repo_path": "docs/notes/readme.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "fd6f9dcfd11cf3e4c65fd41eb974ed7fb7f83bbd", "max_issues_repo_issues_event_max_datetime": "2018-09-07T14:58:28.000Z", "max_issues_repo_issues_event_min_datetime": "2018-04-11T09:00:19.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tdegeus/ElastoPlasticQPot", "max_issues_repo_path": "docs/notes/readme.tex", "max_line_length": 100, "max_stars_count": null, "max_stars_repo_head_hexsha": "fd6f9dcfd11cf3e4c65fd41eb974ed7fb7f83bbd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tdegeus/ElastoPlasticQPot", "max_stars_repo_path": "docs/notes/readme.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9013, "size": 26993 }
\documentclass[a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{tocloft,siunitx,amsmath,graphicx,subcaption,float}
\usepackage[top=3cm,left=3cm,right=3cm]{geometry}
\graphicspath{{img/}}
\renewcommand\cftsecfont{\normalfont}
\renewcommand\cftsecpagefont{\normalfont}
\renewcommand{\cftsecleader}{\cftdotfill{\cftsecdotsep}}
\renewcommand\cftsecdotsep{\cftdot}
\renewcommand\cftsubsecdotsep{\cftdot}
\renewcommand\cftsubsubsecdotsep{\cftdot}
\title{Lab 1: Ohmmeter, Voltmeter and Ammeter Usage}
\author{
Sebastián Nava López\\
\and
Ericka Sabrina Pensamiento R.\\
\and
Salvador Palos Gil
}
\captionsetup[subfigure]{justification=raggedright}
\begin{document}
\begin{titlepage}
\centering
{\Huge Instituto Politécnico Nacional}\\[3ex]
{\huge Escuela Superior de Cómputo}\\[8ex]
{\huge Fundamental Circuit Analysis}\\[12ex]
{\Large Lab 1: Ohmmeter, Voltmeter and Ammeter Usage}\\[20ex]
{\Large Group: 1CV7 Team: 7 \\[8ex]
Sebastian Nava López\\[4ex]
Sabrina Erika Pensamiento Robledo\\[4ex]
Salvador Palos Gil\\[18ex]
}
\large{Elaboration: February 27, 2018\hspace{8em} Due date: March 6, 2018}
\end{titlepage}
\tableofcontents
\newpage
\section{Introduction}
In the realm of electrical engineering, we tend to work with conductive materials which offer very little resistance to the passage of an electric current. This physical property of materials is called Electrical Resistance and its unit is the ohm (\si{\ohm}). We can also measure the rate at which electric charge flows through the material at a certain point; this is called Electric Current and it is measured in amperes (A). Finally, conductive materials have another physical property called Electric Potential or Voltage, which is described as the energy or work required to move a unit of charge between two points, and it is measured in volts (V).

These three quantities are required to make a properly functioning electric circuit, which we can define, in this specific context, as an electrical connection that can serve many uses. All these properties can be measured in a circuit using the appropriate instruments. These instruments can be direct-reading (analog) or digital; the main difference between the two resides in the way they display the information: the first uses the deflection of a needle, while the other uses a digital display to show the measured values.

The ammeter is the instrument used to measure current; it uses a pair of terminals called probes, which are usually colored red and black. When we need to measure the current in a circuit, we open the circuit at the desired point and connect both probes in series at that point; the ammeter then measures the current going from the red to the black probe.

The voltmeter is used to measure voltage and it works similarly to the ammeter, but instead of connecting the probes in series, we connect them in parallel with the segment of the circuit that we want to measure.

The ohmmeter measures resistance and, like the ammeter and voltmeter, also has a pair of probes. In order to get the resistance value, we place the probes across the segment we want to measure; if there is continuity between the leads and the circuit is not shorted, it will show the resistance value. It is worth noting that, to make the ohmmeter work properly, the circuit being measured must be de-energized. Usually, these three instruments come in a single unit called a multimeter.
These three meters are very useful when we want to analyze circuits, as well as when we want to design new ones.
\newpage
\section{Development}
\subsection{Ohmmeter Usage}
In the first part of the experiment we calculated the resistance value of all the resistors using the color code, then we connected the resistors into the breadboard and, with the resistance function of the multimeter, measured the resistance value of each resistor.
\subsubsection{Calculations}
In $R_1$, given that the first strip is green (digit 5), the next is blue (digit 6), and the third is brown (digit 1), the value of resistance is:
\[\SI{56e1}{\ohm}=\SI{560}{\ohm}\]
In $R_2$, given that the first strip is brown (digit 1), the next is black (digit 0), and the third is red (digit 2), the value of resistance is:
\[\SI{10e2}{\ohm}=\SI{1000}{\ohm}\]
In $R_3$, given that the first strip is orange (digit 3), the next is orange (digit 3), and the third is brown (digit 1), the value of resistance is:
\[\SI{33e1}{\ohm}=\SI{330}{\ohm}\]
In $R_4$, given that the first strip is blue (digit 6), the next is grey (digit 8), and the third is brown (digit 1), the value of resistance is:
\[\SI{68e1}{\ohm}=\SI{680}{\ohm}\]
\subsubsection{Measurements}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Resistor & Digital Ohmmeter & Color code values\\
\hline
$R_1$ & $\SI{549}{\ohm}$ & $\SI{56e1}{\ohm}=\SI{560}{\ohm}$ \\
\hline
$R_2$ & $\SI{987}{\ohm}$ & $\SI{10e2}{\ohm}=\SI{1}{\kilo\ohm}$ \\
\hline
$R_3$ & $\SI{325}{\ohm}$ & $\SI{33e1}{\ohm}=\SI{330}{\ohm}$ \\
\hline
$R_4$ & $\SI{671}{\ohm}$ & $\SI{68e1}{\ohm}=\SI{680}{\ohm}$ \\
\hline
\end{tabular}
\end{center}
\subsubsection{Simulations}
\begin{figure}[H]
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth,height=7.4cm]{ohm_560}
\caption{$\SI{560}{\ohm}$ Resistor ($R_1$)}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth,height=7.4cm]{ohm_1k}
\caption{$\SI{1}{\kilo\ohm}$ Resistor ($R_2$)}
\end{subfigure}
\caption{Ohmmeter simulation for $R_1$ and $R_2$}
\label{fig:1}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{ohm_330}
\caption{$\SI{330}{\ohm}$ Resistor ($R_3$)}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{ohm_680}
\caption{$\SI{680}{\ohm}$ Resistor ($R_4$)}
\end{subfigure}
\caption{Ohmmeter simulation for $R_3$ and $R_4$}
\label{fig:2}
\end{figure}
\subsection{Voltmeter usage}
In the second part of the experiment, using the \SI{1}{\kilo\ohm} and the \SI{330}{\ohm} resistors connected in series in the breadboard and powered by a voltage supply set at \SI{1}{\volt}, we measured the voltage across each resistor and across the equivalent resistor twelve times, increasing the value of the voltage supply by \SI{1}{\volt} each time.
\subsubsection{Calculations}
Using KVL (Kirchhoff’s Voltage Law) we know that
\[-V+V_{R_1}+V_{R_2}=0\]
Given that the circuit is connected in series, the current $i$ is the same for both resistors:
\begin{align*}
V&=iR_1+iR_2\\
V&=i(R_1+R_2)
\end{align*}
Isolating the current $i$:
\[i=\frac{V}{R_1+R_2}\]
Also, since the circuit is connected in series, the voltage across each resistor is different, so:
\begin{align*}
V_1&=iR_1\\
V_2&=iR_2
\end{align*}
Plugging the value of $i$ into $V_1$ and $V_2$, we have:
\begin{align*}
V_1&=\frac{R_1V}{R_1+R_2}\\
V_2&=\frac{R_2V}{R_1+R_2}
\end{align*}
Finally, the voltage across the equivalent resistor is simply the value set at the voltage supply.
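The twelve cases below can also be cross-checked numerically; the following is a minimal sketch in Python (not part of the lab procedure, shown only as a convenience) that evaluates the voltage divider for every supply voltage used in this part:
\begin{verbatim}
# Voltage divider check for R1 = 1 kOhm and R2 = 330 Ohm in series.
R1, R2 = 1000.0, 330.0
for E in range(1, 13):                 # supply voltage in volts, 1 V ... 12 V
    i = E / (R1 + R2)                  # series current
    V1, V2 = i * R1, i * R2            # voltage across each resistor
    print(f"E={E:2d} V  V1={V1:.3f} V  V2={V2:.3f} V  Vtotal={V1 + V2:.3f} V")
\end{verbatim}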
For each voltage we have:\\ %1V $E=\SI{1}{\volt}$ \[V_1=\frac{\SI{1}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{0.752}{\volt} \qquad V_2=\frac{\SI{1}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{0.248}{\volt} \qquad V_{t}=\SI{1}{\volt} \] %2V $E=\SI{2}{\volt}$ \[V_1=\frac{\SI{2}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{1.504}{\volt} \qquad V_2=\frac{\SI{2}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{0.496}{\volt} \qquad V_{t}=\SI{2}{\volt} \] %3V $E=\SI{3}{\volt}$ \[V_1=\frac{\SI{3}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{2.255}{\volt} \qquad V_2=\frac{\SI{3}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{0.744}{\volt} \qquad V_{t}=\SI{3}{\volt} \] %4V $E=\SI{4}{\volt}$ \[V_1=\frac{\SI{4}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{3.01}{\volt} \qquad V_2=\frac{\SI{4}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{0.992}{\volt} \qquad V_{t}=\SI{4}{\volt} \] %5V $E=\SI{5}{\volt}$ \[V_1=\frac{\SI{5}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{3.759}{\volt} \qquad V_2=\frac{\SI{5}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{1.241}{\volt} \qquad V_{t}=\SI{5}{\volt}\] %6V $E=\SI{6}{\volt}$ \[V_1=\frac{\SI{6}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{4.511}{\volt} \qquad V_2=\frac{\SI{6}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{1.489}{\volt} \qquad V_{t}=\SI{6}{\volt} \] %7V $E=\SI{7}{\volt}$ \[V_1=\frac{\SI{7}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{5.263}{\volt} \qquad V_2=\frac{\SI{7}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{1.737}{\volt} \qquad V_{t}=\SI{7}{\volt} \] %8V $E=\SI{8}{\volt}$ \[V_1=\frac{\SI{8}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{6.015}{\volt} \qquad V_2=\frac{\SI{8}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{1.985}{\volt} \qquad V_{t}=\SI{8}{\volt} \] %9V $E=\SI{9}{\volt}$ \[V_1=\frac{\SI{9}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{6.767}{\volt} \qquad V_2=\frac{\SI{9}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{2.233}{\volt} \qquad V_{t}=\SI{9}{\volt} \] %10V $E=\SI{10}{\volt}$ \[V_1=\frac{\SI{10}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{7.519}{\volt} \qquad V_2=\frac{\SI{10}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{2.481}{\volt} \qquad V_{t}=\SI{10}{\volt} \] %11V $E=\SI{11}{\volt}$ \[V_1=\frac{\SI{11}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{8.271}{\volt} \qquad V_2=\frac{\SI{11}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{2.729}{\volt} \qquad V_{t}=\SI{11}{\volt} \] %12V $E=\SI{12}{\volt}$ \[V_1=\frac{\SI{12}{\volt}\cdot\SI{1}{\kilo\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{9.022}{\volt} \qquad V_2=\frac{\SI{12}{\volt}\cdot\SI{330}{\ohm}}{\SI{1}{\kilo\ohm}+\SI{330}{\ohm}}=\SI{2.977}{\volt} \qquad V_{t}=\SI{12}{\volt} \] \subsubsection{Measurements} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Voltage & Voltage - $R_1$ and $R_2$ & Voltage - $R_1$ & Voltage - $R_2$\\ \hline $E=\SI{1}{\volt}$ & $\SI{0.977}{\volt}$ & $\SI{0.735}{\volt}$ & $\SI{0.242}{\volt}$ \\ \hline $E=\SI{2}{\volt}$ & $\SI{1.960}{\volt}$ & $\SI{1.510}{\volt}$ & $\SI{0.492}{\volt}$ \\ \hline $E=\SI{3}{\volt}$ & $\SI{2.970}{\volt}$ & $\SI{2.221}{\volt}$ & $\SI{0.731}{\volt}$ \\ \hline 
$E=\SI{4}{\volt}$ & $\SI{3.920}{\volt}$ & $\SI{1.929}{\volt}$ & $\SI{0.964}{\volt}$ \\ \hline $E=\SI{5}{\volt}$ & $\SI{4.950}{\volt}$ & $\SI{3.720}{\volt}$ & $\SI{1.167}{\volt}$ \\ \hline $E=\SI{6}{\volt}$ & $\SI{5.980}{\volt}$ & $\SI{4.570}{\volt}$ & $\SI{1.480}{\volt}$ \\ \hline $E=\SI{7}{\volt}$ & $\SI{6.910}{\volt}$ & $\SI{5.190}{\volt}$ & $\SI{1.711}{\volt}$ \\ \hline $E=\SI{8}{\volt}$ & $\SI{7.950}{\volt}$ & $\SI{5.980}{\volt}$ & $\SI{1.969}{\volt}$ \\ \hline $E=\SI{9}{\volt}$ & $\SI{8.960}{\volt}$ & $\SI{6.780}{\volt}$ & $\SI{2.256}{\volt}$ \\ \hline $E=\SI{10}{\volt}$ & $\SI{9.940}{\volt}$ & $\SI{7.450}{\volt}$ & $\SI{2.458}{\volt}$ \\ \hline $E=\SI{11}{\volt}$ & $\SI{10.910}{\volt}$ & $\SI{8.190}{\volt}$ & $\SI{2.704}{\volt}$ \\ \hline $E=\SI{12}{\volt}$ & $\SI{11.980}{\volt}$ & $\SI{8.880}{\volt}$ & $\SI{2.932}{\volt}$ \\ \hline \end{tabular} \end{center} \subsubsection{Simulations} \begin{figure}[H] \begin{subfigure}{0.48\textwidth} \includegraphics[width=.9\linewidth]{volts_1} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=.9\linewidth]{volts_2} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=.9\linewidth]{volts_3} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=.9 \linewidth]{volts_4} \end{subfigure} \caption{Voltmeter simulation for $\SI{1}{\volt}$ to $\SI{4}{\volt}$} \end{figure} \begin{figure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_5} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_6} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_7} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_8} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_9} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.03\linewidth]{volts_10} \end{subfigure} \caption{Voltmeter simulation for $\SI{5}{\volt}$ to $\SI{10}{\volt}$} \end{figure} \begin{figure}[H] \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.05\linewidth]{volts_11} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.05\linewidth]{volts_12} \end{subfigure} \caption{Voltmeter simulation for $\SI{11}{\volt}$ and $\SI{12}{\volt}$} \label{fig:3} \end{figure} \subsection{Ammeter usage} In the last part we used the $\SI{560}{\ohm}$ and $\SI{680}{\ohm}$ resistors to measure current in all the resistors and in the equivalent resistor, augmenting the value of the voltage supply in the same way as the previous part. 
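The currents derived in the following subsection can be cross-checked in the same way; the sketch below (again Python, assuming the two resistors are connected in parallel across the supply, as in the calculations that follow) prints the expected branch and total currents:
\begin{verbatim}
# Currents for R1 = 560 Ohm and R2 = 680 Ohm in parallel across the supply.
R1, R2 = 560.0, 680.0
Req = (R1 * R2) / (R1 + R2)            # parallel equivalent resistance
for E in range(1, 13):                 # supply voltage in volts, 1 V ... 12 V
    I1, I2 = E / R1, E / R2            # branch currents (Ohm's law)
    It = E / Req                       # total current, equals I1 + I2
    print(f"E={E:2d} V  I1={I1*1e3:.3f} mA  I2={I2*1e3:.3f} mA  It={It*1e3:.3f} mA")
\end{verbatim}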
\subsubsection{Calculations}
If the resistors are connected in parallel, each one receives the same voltage supplied by $E$, but the current flowing through each of them is different and is given, using Ohm's law, by:
\begin{align*} I_1&=\frac{V}{R_1}\\ I_2&=\frac{V}{R_2} \end{align*}
Then, the current in the equivalent resistor is given by:
\[I_t=\frac{V}{R_{eq}}\]
where $R_{eq}$ is given by:
\begin{align*} R_{eq}&=\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}}\\ R_{eq}&=\Big(\frac{R_1+R_2}{R_1 R_2}\Big)^{-1}\\ R_{eq}&=\frac{R_1R_2}{R_1+R_2} \end{align*}
Finally,
\[I_t=V\cdot\Big(\frac{R_1+R_2}{R_1\cdot R_2}\Big)\]
Then, for each voltage we have:\\
%1V
$E=\SI{1}{\volt}$
\[I_1=\frac{\SI{1}{\volt}}{\SI{560}{\ohm}}=\SI{1.786}{\milli\ampere} \quad I_2=\frac{\SI{1}{\volt}}{\SI{680}{\ohm}}=\SI{1.470}{\milli\ampere} \quad I_{t}=\SI{1}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{3.256}{\milli\ampere} \]
%2V
$E=\SI{2}{\volt}$
\[I_1=\frac{\SI{2}{\volt}}{\SI{560}{\ohm}}=\SI{3.571}{\milli\ampere} \quad I_2=\frac{\SI{2}{\volt}}{\SI{680}{\ohm}}=\SI{2.941}{\milli\ampere} \quad I_{t}=\SI{2}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{6.513}{\milli\ampere} \]
%3V
$E=\SI{3}{\volt}$
\[I_1=\frac{\SI{3}{\volt}}{\SI{560}{\ohm}}=\SI{5.357}{\milli\ampere} \quad I_2=\frac{\SI{3}{\volt}}{\SI{680}{\ohm}}=\SI{4.412}{\milli\ampere} \quad I_{t}=\SI{3}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{9.769}{\milli\ampere} \]
%4V
$E=\SI{4}{\volt}$
\[I_1=\frac{\SI{4}{\volt}}{\SI{560}{\ohm}}=\SI{7.143}{\milli\ampere} \quad I_2=\frac{\SI{4}{\volt}}{\SI{680}{\ohm}}=\SI{5.882}{\milli\ampere} \quad I_{t}=\SI{4}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{13.025}{\milli\ampere} \]
%5V
$E=\SI{5}{\volt}$
\[I_1=\frac{\SI{5}{\volt}}{\SI{560}{\ohm}}=\SI{8.928}{\milli\ampere} \quad I_2=\frac{\SI{5}{\volt}}{\SI{680}{\ohm}}=\SI{7.353}{\milli\ampere} \quad I_{t}=\SI{5}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{16.281}{\milli\ampere} \]
%6V
$E=\SI{6}{\volt}$
\[I_1=\frac{\SI{6}{\volt}}{\SI{560}{\ohm}}=\SI{10.714}{\milli\ampere} \quad I_2=\frac{\SI{6}{\volt}}{\SI{680}{\ohm}}=\SI{8.823}{\milli\ampere} \quad I_{t}=\SI{6}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{19.538}{\milli\ampere} \]
%7V
$E=\SI{7}{\volt}$
\[I_1=\frac{\SI{7}{\volt}}{\SI{560}{\ohm}}=\SI{12.5}{\milli\ampere} \quad I_2=\frac{\SI{7}{\volt}}{\SI{680}{\ohm}}=\SI{10.294}{\milli\ampere} \quad I_{t}=\SI{7}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{22.794}{\milli\ampere} \]
%8V
$E=\SI{8}{\volt}$
\[I_1=\frac{\SI{8}{\volt}}{\SI{560}{\ohm}}=\SI{14.286}{\milli\ampere} \quad I_2=\frac{\SI{8}{\volt}}{\SI{680}{\ohm}}=\SI{11.765}{\milli\ampere} \quad I_{t}=\SI{8}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{26.050}{\milli\ampere} \]
%9V
$E=\SI{9}{\volt}$
\[I_1=\frac{\SI{9}{\volt}}{\SI{560}{\ohm}}=\SI{16.071}{\milli\ampere} \quad I_2=\frac{\SI{9}{\volt}}{\SI{680}{\ohm}}=\SI{13.235}{\milli\ampere} \quad I_{t}=\SI{9}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{29.307}{\milli\ampere} \]
%10V
$E=\SI{10}{\volt}$
\[I_1=\frac{\SI{10}{\volt}}{\SI{560}{\ohm}}=\SI{17.857}{\milli\ampere} \quad I_2=\frac{\SI{10}{\volt}}{\SI{680}{\ohm}}=\SI{14.706}{\milli\ampere} \quad I_{t}=\SI{10}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{32.563}{\milli\ampere} \]
%11V
$E=\SI{11}{\volt}$
\[I_1=\frac{\SI{11}{\volt}}{\SI{560}{\ohm}}=\SI{19.643}{\milli\ampere} \quad I_2=\frac{\SI{11}{\volt}}{\SI{680}{\ohm}}=\SI{16.176}{\milli\ampere} \quad I_{t}=\SI{11}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{35.819}{\milli\ampere} \]
%12V
$E=\SI{12}{\volt}$
\[I_1=\frac{\SI{12}{\volt}}{\SI{560}{\ohm}}=\SI{21.428}{\milli\ampere} \quad I_2=\frac{\SI{12}{\volt}}{\SI{680}{\ohm}}=\SI{17.647}{\milli\ampere} \quad I_{t}=\SI{12}{\volt}\cdot\Big(\frac{\SI{680}{\ohm}+\SI{560}{\ohm}}{\SI{680}{\ohm}\cdot\SI{560}{\ohm}}\Big)=\SI{39.076}{\milli\ampere} \]
\subsubsection{Measurements}
\begin{center} \begin{tabular}{|c|c|c|c|} \hline
Voltage & Current - $R_1$ and $R_2$ & Current - $R_1$ & Current - $R_2$\\ \hline
$E=\SI{1}{\volt}$ & $\SI{2.640}{\milli\ampere}$ & $\SI{2.150}{\milli\ampere}$ & $\SI{1.100}{\milli\ampere}$ \\ \hline
$E=\SI{2}{\volt}$ & $\SI{5.910}{\milli\ampere}$ & $\SI{3.400}{\milli\ampere}$ & $\SI{2.880}{\milli\ampere}$ \\ \hline
$E=\SI{3}{\volt}$ & $\SI{8.320}{\milli\ampere}$ & $\SI{5.120}{\milli\ampere}$ & $\SI{4.280}{\milli\ampere}$ \\ \hline
$E=\SI{4}{\volt}$ & $\SI{10.580}{\milli\ampere}$ & $\SI{6.820}{\milli\ampere}$ & $\SI{5.670}{\milli\ampere}$ \\ \hline
$E=\SI{5}{\volt}$ & $\SI{15.160}{\milli\ampere}$ & $\SI{8.530}{\milli\ampere}$ & $\SI{7.170}{\milli\ampere}$ \\ \hline
$E=\SI{6}{\volt}$ & $\SI{19.150}{\milli\ampere}$ & $\SI{10.190}{\milli\ampere}$ & $\SI{8.210}{\milli\ampere}$ \\ \hline
$E=\SI{7}{\volt}$ & $\SI{21.910}{\milli\ampere}$ & $\SI{12.010}{\milli\ampere}$ & $\SI{9.790}{\milli\ampere}$ \\ \hline
$E=\SI{8}{\volt}$ & $\SI{25.440}{\milli\ampere}$ & $\SI{13.680}{\milli\ampere}$ & $\SI{11.330}{\milli\ampere}$ \\ \hline
$E=\SI{9}{\volt}$ & $\SI{28.630}{\milli\ampere}$ & $\SI{15.490}{\milli\ampere}$ & $\SI{12.690}{\milli\ampere}$ \\ \hline
$E=\SI{10}{\volt}$ & $\SI{32.510}{\milli\ampere}$ & $\SI{17.390}{\milli\ampere}$ & $\SI{14.190}{\milli\ampere}$ \\ \hline
$E=\SI{11}{\volt}$ & $\SI{36.210}{\milli\ampere}$ & $\SI{18.970}{\milli\ampere}$ & $\SI{15.700}{\milli\ampere}$ \\ \hline
$E=\SI{12}{\volt}$ & $\SI{39.410}{\milli\ampere}$ & $\SI{20.790}{\milli\ampere}$ & $\SI{17.190}{\milli\ampere}$ \\ \hline
\end{tabular} \end{center}
\subsubsection{Simulations}
\begin{figure}[H} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_1} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_2} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_3} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_4} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_5} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_6} \end{subfigure} \caption{Ammeter simulation for $\SI{1}{\volt}$ to $\SI{6}{\volt}$} \label{fig:4} \end{figure}
\begin{figure}[H] \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_7} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_8} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_9} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_10} \end{subfigure} \begin{subfigure}{0.48\textwidth}
\includegraphics[width=1.16\linewidth]{amps_11} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1.16\linewidth]{amps_12} \end{subfigure} \caption{Ammeter simulation for $\SI{7}{\volt}$ to $\SI{12}{\volt}$} \label{fig:5} \end{figure} \newpage
\section{Questions}
\textit{What is a series circuit's main feature?}\\ Two elements are in series when they exclusively share a single node, so the same current flows through both of them.
\textit{What is a parallel circuit's main feature?}\\ Two or more elements are in parallel when they are connected between the same pair of nodes. These elements may carry different currents, but the voltage across them is the same.\\
\textit{What is the difference between a digital and an analog multimeter?}\\ An analog meter displays the measured quantity with a needle moving over a continuous scale, while a digital meter converts the measured signal into a binary (digital) representation and shows the value on a digital display.\\
\textit{Why must an ammeter not be connected in parallel?}\\ Because the ammeter becomes another element in the circuit and has a very low internal resistance; connecting it in parallel lets a large current flow through it, which can make the meter malfunction or damage it.\\
\textit{Why must a circuit be de-energized when checking the resistance of an electrical circuit?}\\ Because any current already flowing in the circuit would distort the resistance reading and could damage the meter.
\section{Conclusions}
{\large Sabrina:}\\[2ex] Practising the correct usage of the multimeter is vital to an engineer's education, so that he or she can monitor the functionality of every element in a circuit as well as evaluate its quality. Nevertheless, it is also essential for the student to study the theoretical side of circuit analysis and construction in order to have an integral idea of what is really happening in the laboratory.\\[2ex]
{\large Salvador:}\\[2ex] The evidence shown above indicates that the measured values differed from the calculations made beforehand, but only by minimal amounts. The correct use of the multimeter showed us these minimal differences between calculation and practice.\\[2ex]
{\large Sebastián:}\\[2ex] The experiment helped us to learn the usage of these instruments, as well as how they can help us verify the accuracy of our theoretical designs when they become real-world models.
\end{document}
{ "alphanum_fraction": 0.6683463732, "avg_line_length": 46.263803681, "ext": "tex", "hexsha": "0f35e43107baced3bddcd129899603ae60be4015", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "c0rrigan/docs-repo", "max_forks_repo_path": "circuitos/practica1/practica1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "c0rrigan/docs-repo", "max_issues_repo_path": "circuitos/practica1/practica1.tex", "max_line_length": 717, "max_stars_count": null, "max_stars_repo_head_hexsha": "11af186f0b6f6be09dd62820af9ba7b78cd7b5b1", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "c0rrigan/docs-repo", "max_stars_repo_path": "circuitos/practica1/practica1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9219, "size": 22623 }
\documentclass[12pt, titlepage]{article} \usepackage{booktabs} \usepackage{tabularx} \usepackage{graphicx} \usepackage{hyperref} \hypersetup{ colorlinks, citecolor=black, filecolor=black, linkcolor=red, urlcolor=blue } \usepackage[round]{natbib} \title{SE 3XA3: Software Requirements Specification\\DNA Says} \author{Team 10, Team Name: DNA \\ Kareem Abdel Mesih (abdelk2) \\ John-Paul Dakran (dakranj) \\ Shady Nessim (nessimss) } \date{\today} \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \listoftables \listoffigures
\begin{table}[bp] \caption{\bf Revision History} \begin{tabularx}{\textwidth}{p{3cm}p{2cm}X} \toprule {\bf Date} & {\bf Version} & {\bf Notes}\\ \midrule 2016/10/10 & 1.0 & Completion of sub-section 1 \& 2\\ 2016/10/10 & 2.0 & Completion of sub-section 4 \\ 2016/10/11 & 3.0 & Completion of sub-section 3 \\ 2016/12/02 & 4.0 & Revision 1 \\ \bottomrule \end{tabularx} \end{table} \newpage \pagenumbering{arabic}
\section{Project Drivers} \subsection{The Purpose of the Project} Video games have always been one of the top choices for entertainment and are widely considered a great way to overcome boredom. This project is a redevelopment of the famous digital game Simon Says, with a slight modification that makes DNA Says unique while keeping the integrity of the game consistent with the original version. This interactive game serves the purpose of allowing people of all ages, whether bored or simply taking a break, to enjoy a fun and interactive game. The main basis of Simon Says is to remember a given pattern and repeat it back. In addition, this project will aid in the enhancement of one's visual and auditory memory.
\subsection{The Stakeholders} \subsubsection{The Client} This program is developed as the final project for McMaster University's Software Engineering 3XA3 - Software Project Management. Therefore, the client for this project is Dr. Spencer Smith, the professor of that course. \subsubsection{The Customers} The customers for this project are the general public who will operate the game DNA Says. A typical customer will be any person five years of age or older who can access and operate a computer. \subsubsection{Other Stakeholders} \begin{itemize} \item The Development Team - Kareem, John-Paul and Shady. \item Previous and future developers, as they possess the power to modify and publish this program as they desire. \end{itemize}
\subsection{Mandated Constraints} \subsubsection{Solution Constraints} Description: This game is OS independent. It is compatible with Windows, Mac OS X, and Linux operating systems.\\ \\ Rationale: The client will be using any of the operating systems listed above.\\ \\ Fit Criterion: During the user testing phase, all of the operating systems mentioned above were tested. \subsubsection{Partner or Collaborative Applications} This project is a redevelopment of the digital game Simon Says, whose Python open-source code is available online. The new game DNA Says supports the current game's user platform. \subsubsection{Budget Constraints} The operating budget of the project is \$0. All resources needed to develop this game are currently owned by the developers. \subsubsection{Scheduling Constraints} The project must be fully completed by December 7, 2016. This includes the implementation, testing and documentation. \subsubsection{Enterprise Constraints} This game is free and accessible to all users who have exposure to a computer. \subsection{Naming Conventions and Terminology}
\begin{table}[h!] \centering \caption{List of Terminology} \label{tab:table3} \begin{tabular}{ll} \hline Term & Definition\\ \hline OS & Short for operating system.\\ Windows & Microsoft's operating system.\\ Mac OS X & Apple's operating system.\\ Linux & A Unix operating system.\\ Python & A programming language.\\ IDLE & An integrated development environment for Python.\\ LaTeX & A document preparation system. \\ Mode & Different subsections of the game.\\ GUI & Graphical user interface.\\ Gantt Chart & Chart outlining the timeline of the project.\\ \hline \end{tabular} \end{table}
\subsection{Relevant Facts and Assumptions} \subsubsection{Relevant Facts} Python is used to develop this project. It runs in the basic IDLE for Python version 3.5. The framework is tested in an automated fashion to validate the different cases and outcomes of the game. Family and friends have tested the overall functionality and performance of the game, as well as its non-functional requirements. LaTeX is used to generate the required documents.\\ \\ The previous implementation of this game has approximately two hundred and fifty lines of code. That implementation only has one mode; however, the version implemented here has three different modes and a menu. Therefore, the number of lines of code is greater than the original version's. The original implementation has no licenses that need to be acquired by the team or McMaster University. \subsubsection{Assumptions} It is assumed that the user has downloaded any version of Python along with its corresponding Pygame version. \\ It is also assumed that the user has a basic understanding of operating a computer. The user must be able to open an application and follow simple instructions to interact with the GUI.\\ \\ Finally, the user's computer must have enough processing speed and storage to effectively run and host the application.
\section{Functional Requirements} \subsection{The Scope of the Work and the Product} \subsubsection{The Context of the Work}
\begin{figure}[h] \includegraphics[width=\linewidth]{Context_Of_Work.png} \caption{Context of Work Diagram} \label{Figure: Context of Work} \end{figure} \newpage
\subsubsection{Work Partitioning} \begin{table}[h] \centering \caption{List of Events} \label{tab:table2} \begin{tabular}{clll} \hline \# & Event & Input & Output\\ \hline 1. & DNA Says Creation & Developer code & Executable file\\ 2. & DNA Says Audio & None & Audio output device\\ 3. & DNA Says GUI & Developer code & Monitor\\ 4. & Open the file & User input & New window\\ 5. & Select a mode & User input & Buttons appear\\ 6. & Click a correct disk & User input & Light \& sound\\ 7. & Click an incorrect disk & User input & Light \& sound\\ 8. & Repeat pattern successfully & User input & Addition to the pattern\\ 9. & Exit to main menu & User input & Main menu appears\\ 10. & Exit game & User input & Window termination\\ \hline \end{tabular} \end{table}
\subsubsection{Individual Product Use Cases} \begin{itemize} \item Use Case \#1 \begin{itemize} \item Name: Open the executable file. \item Trigger: The user selects to open the file. \item Precondition: The DNA Says icon must be available on the desktop. \item Postcondition: The main menu will open. \end{itemize} \item Use Case \#2 \begin{itemize} \item Name: Select a mode. \item Trigger: The user selects one of the three modes. \item Precondition: The user must be in the main menu. \item Postcondition: The user will be able to view the three buttons and begin the game.
\end{itemize} \item Use Case \#3 \begin{itemize} \item Name: Click a correct button. \item Trigger: The user selects a button that was part of the displayed pattern, in the correct playing order. \item Precondition: The user must be in a mode and the computer has displayed the pattern. \item Postcondition: The button will light up and make a sound. \end{itemize} \item Use Case \#4 \begin{itemize} \item Name: Click an incorrect button. \item Trigger: The user selects a button that was not part of the pattern displayed. \item Precondition: The user must be in a mode and the computer has displayed the pattern. \item Postcondition: The game will make a specific sound indicating an incorrect move and the screen will flash. \end{itemize} \item Use Case \#5 \begin{itemize} \item Name: Successfully repeat the pattern. \item Trigger: The user selects, in order, the series of buttons that composed the displayed pattern. \item Precondition: The user must be in a mode and the computer has displayed the pattern. \item Postcondition: The next pattern will be displayed to the user. \end{itemize} \item Use Case \#6 \begin{itemize} \item Name: Exit to the main menu. \item Trigger: The user selects the main menu icon. \item Precondition: The user must be in a mode. \item Postcondition: The user will leave the mode and the main menu will open. \end{itemize} \item Use Case \#7 \begin{itemize} \item Name: Exit game. \item Trigger: The user selects the exit game icon. \item Precondition: The user must be in the main menu. \item Postcondition: The application will be terminated. \end{itemize} \end{itemize}
\section{Functional Requirements} \begin{itemize} \item Requirement \#1 \begin{itemize} \item Description: The user will be able to open the executable file. \item Rationale: The user must be able to open the program. \item Fit Criterion: A new window will open on the user's computer screen. \end{itemize} \item Requirement \#2 \begin{itemize} \item Description: The interface will open in a new window. \item Rationale: The program will be operated in a separate window. \item Fit Criterion: A new window will appear on the user's computer screen. \end{itemize} \item Requirement \#3 \begin{itemize} \item Description: The game will have three separate modes - Kareem Says, JP Says and Shady Says. \item Rationale: The game is designed to have three distinct modes. \item Fit Criterion: The three different modes will be displayed on the main menu of the game. \end{itemize} \item Requirement \#4 \begin{itemize} \item Description: The user will be able to select one of the three modes to play. \item Rationale: The user must be able to play one mode at a time. \item Fit Criterion: The user will be able to select one of the three modes displayed in the main menu of the game. \end{itemize} \item Requirement \#5 \begin{itemize} \item Description: The main menu will display the three different modes. \item Rationale: The user must be able to view which mode they wish to select. \item Fit Criterion: Three distinct icons will be displayed in the main menu. \end{itemize} \item Requirement \#6 \begin{itemize} \item Description: If Kareem Says is selected, then a piano will be displayed on the screen; otherwise, nine square buttons will show up for JP Says, and four for Shady Says. \item Rationale: The game is designed to have different interfaces for each mode. \item Fit Criterion: When a user selects a mode, a piano, nine buttons or four buttons will show up accordingly.
\end{itemize} \item Requirement \#7 \begin{itemize} \item Description: Each button will light up and produce a different sound when clicked. \item Rationale: This gives the user the ability to detect the pattern that will be displayed. \item Fit Criterion: When the user clicks a button, the button will light up and produce a sound. \end{itemize} \item Requirement \#8 \begin{itemize} \item Description: The user will be able to exit the game at any time and go back to the main menu. \item Rationale: The user must have a means of exiting an ongoing game and returning to the main menu. \item Fit Criterion: When the user clicks the main menu button, they will find their screen in the main menu window. \end{itemize} \item Requirement \#9 \begin{itemize} \item Description: Every time a user passes a level, the score goes up by one point. \item Rationale: A record of a user's score must be kept. \item Fit Criterion: At level N, the score = N. \end{itemize} \item Requirement \#10 \begin{itemize} \item Description: Every time a user fails a level, the score is reset to zero. \item Rationale: When a user fails a level, the game must restart from level one. \item Fit Criterion: Whenever the user makes a mistake, the score text will reset to zero. \end{itemize} \item Requirement \#11 \begin{itemize} \item Description: There will be a score icon in the top right corner. \item Rationale: The user must be able to view their score. \item Fit Criterion: When the user selects a mode, the score icon will be set to zero. \end{itemize} \item Requirement \#12 \begin{itemize} \item Description: At level N, a random pattern of N disks will light up and be displayed to the user. \item Rationale: The pattern's length will increase as the levels progress. \item Fit Criterion: During level one, one random button will light up and sound. \end{itemize} \item Requirement \#13 \begin{itemize} \item Description: The user cannot click the buttons while the pattern is being displayed. \item Rationale: The pattern must be displayed to the user in full effect. \item Fit Criterion: The program will not record clicks the user inputs during this time. \end{itemize} \item Requirement \#14 \begin{itemize} \item Description: The user will be able to click the buttons once the pattern has been displayed. \item Rationale: The user must repeat the pattern correctly to pass the level. \item Fit Criterion: The program will monitor the user's input clicks to determine if the entry is correct or not. \end{itemize} \item Requirement \#15 \begin{itemize} \item Description: A level is passed if the user repeats the pattern correctly. \item Rationale: The user will be able to progress through the game. \item Fit Criterion: The score will be increased by 1 when the user is successful. \end{itemize} \item Requirement \#16 \begin{itemize} \item Description: If the user fails, the game will restart, i.e., N = 1. \item Rationale: The user must restart from the beginning of the game when a mistake is made. \item Fit Criterion: Whenever a mistake is made, the user will be directed to level one. \end{itemize} \end{itemize}
\section{Non-functional Requirements} \subsection{Look and Feel Requirements} \subsubsection{Appearance Requirements} \begin{itemize} \item Requirement \#1 \begin{itemize} \item Description: The product shall have an appealing, colorful appearance. \item Rationale: The display should always be engaging so as to keep the user interested in the game.
The product must be aesthetically pleasing and easy to use to benefit the end-users \item Originator: Shady Nessim \item Fit Criterion: Stakeholder satisfaction regarding the appearance, user attraction to game. \item Priority: High \item History: Created October 5, 2016 \\ \end{itemize} \item Requirement \#2 \begin{itemize} \item Description: The buttons must be well designed and colored. \item Rationale: The game revolves around pressing buttons in a pattern. It is the main entity of the game and must thus be aesthetically pleasing to attract user interest \item Originator: Shady Nessim \item Fit Criterion: User reaches high levels as a result of uniqueness and beauty of buttons. \item Priority: High \item History: Created October 5, 2016 \\ \end{itemize} \item Requirement \#3 \begin{itemize} \item Description: The product shall have attractive sound patterns. The associated sounds with buttons must be well constructed and notes must follow harmonically. \item Rationale: The user follows a pattern based on colors and sounds, the sounds must thus be well designed to be easy to follow. When user hears an attractive pattern, naturally they are inclined to repeat it. \item Originator: Shady Nessim \item Fit Criterion: User shall be invested in game and spend a lot of time playing the game. \item Priority: High \item History: Created October 5, 2016 \\ \end{itemize} \end{itemize} \subsubsection{Style Requirements} \begin{itemize} \item Requirement \#4 \begin{itemize} \item Description: The product shall have enough buttons to keep game engaging but not too many as to make the screen feel cluttered. DNA Says will appear to be a bright upbeat game. \item Rationale: The game must induce a style and feel to the user that will be a driving factor to use the game if the user likes the style of the game \item Originator: Shady Nessim \item Fit Criterion: Stakeholder satisfaction regarding the style, user attraction to game. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsection{Usability and Humanity Requirements} \subsubsection{Ease of Use Requirements} \begin{itemize} \item Requirement \#5 \begin{itemize} \item Description: The product shall be easy to use for people of all ages, including children. \item Rationale: The game involves no reading or writing, it does not involve intelligence either. The game involves short term memory. As such it should be easy to use for all people to improve their short term memory. \item Originator: Shady Nessim \item Fit Criterion: User figures out how to play the game within the first couple of minutes of use. \item Priority: High \item History: Created October 5, 2016 \end{itemize} \item Requirement \#6 \begin{itemize} \item Description: The product shall be easy to install for all users. \item Rationale: This product is simply a game so the user will probably not go through the trouble of downloading and installing the game if it is not an easy process. \item Originator: Shady Nessim \item Fit Criterion: User easily downloads and installs the game in a timely manner. \item Priority: High \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Personalization and Internationalization Requirements} \begin{itemize} \item Requirement \#7 \begin{itemize} \item Description: The product shall operate with the English language. 
\item Rationale: The application is intended for use by English and non-English speakers, however with minimal required text use, this game can easily be figured out and used by non-English speakers \item Originator: Shady Nessim \item Fit Criterion: User easily understands objective of game and how to play. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Learning Requirements} \begin{itemize} \item Requirement \#8 \begin{itemize} \item Description: The application shall not require a tutorial and shall be clear and simple enough in early levels to communicate to the user how the game is played. \item Rationale: The application is intended for use by people of all ages. Must thus be easy to understand. \item Originator: Shady Nessim \item Fit Criterion: User easily understands objective of game and how to play. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Understandability and Politeness Requirements} \begin{itemize} \item Requirement \#9 \begin{itemize} \item Description: The application shall not produce ugly sound patterns or offensive visual patterns to respect all users. \item Rationale: The application is intended for entertainment and as a cure for boredom, if user feels uncomfortable or offended they will not use the game. \item Originator: Shady Nessim \item Fit Criterion: User feels good about game and patterns are appealing and attractive. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \item Requirement \#10 \begin{itemize} \item Description: The product shall produce a friendly indication when user loses or wins a level. \item Rationale: The application is intended for entertainment and as a cure for boredom, if user feels uncomfortable or offended they will not use the game. \item Originator: Shady Nessim \item Fit Criterion: User feels good about level progression and is encouraged to play again. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Accessibility Requirements} \begin{itemize} \item Requirement \#11 \begin{itemize} \item Description: The product shall produce patterns both visually and auditory so as to accommodate for users with visual or auditory problems that they can use an alternative pattern means. \item Rationale: The application is intended all users, should be easy to use for someone by just following visual patterns or just following auditory patterns. \item Originator: Shady Nessim \item Fit Criterion: User with auditory or visual problems feel comfortable playing the game. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsection{Performance Requirements} \subsubsection{Speed and Latency Requirements} \begin{itemize} \item Requirement \#12 \begin{itemize} \item Description: The application should be able to recognize whether the user has entered the right pattern as soon as they finish pressing the last button. \item Rationale: The user should not have to wait for the application to calculate whether their input was correct or not. \item Originator: Shady Nessim \item Fit Criterion: Application should respond immediately to user input and the upcoming pattern should start soon after user input ends. \item Priority: High \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Safety Critical Requirements} There are none applicable to this project. 
\subsubsection{Precision of Accuracy Requirements} \begin{itemize} \item Requirement \#13 \begin{itemize} \item Description: The application must be specific to each button press. Button press confusion or mistake must not be tolerated. \item Rationale: The purpose of the game is to produce exact same pattern shown by application. If program does not detect a mistake even if it is just one wrong button, then that defeats the fairness and purpose of the game. \item Originator: Shady Nessim \item Fit Criterion: Application should perceive and evaluate user pattern input impeccably. \item Priority: High \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Reliability and Availability Requirements} \begin{itemize} \item Requirement \#14 \begin{itemize} \item Description: The application must be available at all times. \item Rationale: The purpose of the game is to defeat boredom which may come at any time and thus the game must be available at all times. \item Originator: Shady Nessim \item Fit Criterion: User should be able to play the game whenever they are bored. \item Priority: High \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Capacity Requirements} \begin{itemize} \item Requirement \#15 \begin{itemize} \item Description: The application must be able to produce and receive patterns as long as 25 buttons. \item Rationale: In order to make the game challenging enough, patterns including but not limited to 25 in length should be produced and received by program. \item Originator: Shady Nessim \item Fit Criterion: User should be able to reach level 25 in each mode. \item Priority: Medium \item History: Created October 5, 2016 \end{itemize} \end{itemize} \subsubsection{Scalability Requirements} There are none applicable to the project. \subsubsection{Longevity Requirements} There are none applicable to the project. \subsection{Operational and Environmental Requirements} \subsubsection{Expected Physical Environment} \begin{itemize} \item Requirement \#16 \begin{itemize} \item Description: The product should be able to be used on laptops and desktops. \item Rationale: The clients will use the product from these devices. \item Originator: Shady Nessim \item Fit Criterion: User should be able to run the game on any laptop or desktop. \item Priority: High \item History: Created October 8, 2016 \end{itemize} \end{itemize} \subsubsection{Release Requirements} \begin{itemize} \item Requirement \#17 \begin{itemize} \item Description: The product will be revised yearly and updated according to changing demands and needs of the client. The product will undergo maintenance upon realization of any errors in gameplay behavior. \item Rationale: The game has to stay updated and problems have to be handled in order to maintain user interest and usage. \item Originator: Shady Nessim \item Fit Criterion: App should be updated at least annually. \item Priority: Medium \item History: Created October 8, 2016 \end{itemize} \end{itemize} \subsection{Maintainability and Support Requirements} \subsubsection{Maintenance Requirement} \begin{itemize} \item Requirement \#18 \begin{itemize} \item Description: The source code for the application shall be visible to the public. \item Rationale: This enhances the ability to monitor and maintain the system. \item Originator: Shady Nessim \item Fit Criterion: Source code is available in a public repository. 
\item Priority: Low \item History: Created October 10, 2016 \end{itemize} \end{itemize} \subsubsection{Supportability Requirements} None applicable for this project. \subsubsection{Adaptability Requirements} \begin{itemize} \item Requirement \#19 \begin{itemize} \item Description: The product shall run on Windows, Linux and Mac OS X environments. \item Rationale: The users may be using any of these operating systems. \item Originator: Shady Nessim \item Fit Criterion: The product works on the listed platforms in the test groups. \item Priority: Medium \item History: Created October 10, 2016 \end{itemize} \end{itemize} \subsection{Security Requirements} \subsubsection{Privacy Requirements} \begin{itemize} \item Requirement \#20 \begin{itemize} \item Description: The application shall not store, transmit, or upload any user data. \item Rationale: This is required in order to protect the privacy of users. \item Originator: Shady Nessim \item Fit Criterion: No functionality to perform these tasks is implemented in the application. \item Priority: Low \item History: Created October 10, 2016 \end{itemize} \end{itemize} \subsection{Cultural Requirements} \begin{itemize} \item Requirement \#21 \begin{itemize} \item Description: The application shall not contain any imagery or text that can be reasonably foreseen as potentially offensive to users of all cultures, backgrounds and ethnicities. \item Rationale: User satisfaction will be greatly reduced if they notice any offensive patterns in the game. \item Originator: Shady Nessim \item Fit Criterion: The application does not contain offensive patterns or references. \item Priority: Medium \item History: Created October 10, 2016 \end{itemize} \end{itemize} \subsection{Legal Requirements} There are none applicable to this project.
\section{Project Issues} \subsection{Open Issues} There have been no open issues for the duration of this project. The undocumented code has been carefully analyzed, each line has been assessed, and a solid understanding of the program has been gained. \subsection{Off-the-Shelf Solutions} In general, there are many games that share a similar nature and purpose. However, with respect to Simon Says, there is the original oral game in which one designated person speaks out an action for the others to do, having them only do the action if they say ``Simon says'' before it. As for digital versions, multiple ones exist online. \subsection{New Problems} The only problem that could arise from this project is addiction, as there could be individuals who, instead of enjoying this game during their free time or as a short break from their schedule, would consume their other priorities' time to play. This game could potentially be the reason behind a missed deadline, or anything of that nature. \subsection{Tasks} All tasks that need to be accomplished are covered within this group's Gantt Chart, including their start and end dates found here: \begin{itemize} \item \href{run:GanttChart.gan} {Gantt Chart}\\ \end{itemize} \subsection{Migration to the New Product} There will not be issues transferring from another version of this game to this current version. Only the user's preferences matter for this, and that will be discussed as a risk in the section below. \subsection{Risks} The initial risk has been that the user's preferences might conflict with the team's preferences upon which the game was built. That risk has been taken into consideration and, to counter it, the team developed a user survey.
That survey was taken by various users that played the game and their opinions were analyzed and the game has been updated accordingly. \subsection{Costs} There are no monetary costs included in this project. \subsection{User Documentation and Training} Instructions are always available at the bottom left corner of the screen. The game is very simple and does not require more than one line of explanation per mode. \subsection{Waiting Room} At this point, the team is improving all documentation for the next round of marking. \subsection{Ideas for Solutions} To make sure that most of the users will enjoy this game, a survey was conducted to collect different thoughts and preferences as to what the users would like to see in this game, what they are looking forward to, and what they expect. That survey included the desired colors, sounds, interface and functionality. The current implementation of this project is modified to suit those preferences. \bibliographystyle{plainnat} \bibliography{SRS} \newpage \section{Appendix} This section contains no related information for this document. \subsection{Symbolic Parameters} N represents any integer and does not have an upper bound. It is used throughout the document to represent level numbers, the number of elements in a given pattern and the score. \end{document}
{ "alphanum_fraction": 0.7791965858, "avg_line_length": 41.4081346424, "ext": "tex", "hexsha": "24e72722de9532172abedfa753c077f8fcb37449", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8456be3b6c4e7a0e83ac233d0800517b39cd2bb0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JP-Dakran/DNA_Says", "max_forks_repo_path": "Doc/SRS/SRS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8456be3b6c4e7a0e83ac233d0800517b39cd2bb0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JP-Dakran/DNA_Says", "max_issues_repo_path": "Doc/SRS/SRS.tex", "max_line_length": 676, "max_stars_count": null, "max_stars_repo_head_hexsha": "8456be3b6c4e7a0e83ac233d0800517b39cd2bb0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JP-Dakran/DNA_Says", "max_stars_repo_path": "Doc/SRS/SRS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7138, "size": 29524 }
\section{GROOVE formalisation} \label{sec:formalisations:groove_formalisation} This section discusses a (partial) formalisation of GROOVE. This formalisation is limited to type graphs and instance graphs, as discussed in \cref{sec:background:groove}. These are the only GROOVE graph types that are relevant to this thesis. \input{tex/03_formalisations/03_groove_formalisation/01_definitions.tex} \input{tex/03_formalisations/03_groove_formalisation/02_type_graphs.tex} \input{tex/03_formalisations/03_groove_formalisation/03_instance_graphs.tex}
{ "alphanum_fraction": 0.8500914077, "avg_line_length": 68.375, "ext": "tex", "hexsha": "cc41a2927d88558d5f09580772761d27b85f6d11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_forks_repo_licenses": [ "AFL-3.0" ], "max_forks_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_forks_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AFL-3.0" ], "max_issues_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_issues_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation.tex", "max_line_length": 243, "max_stars_count": null, "max_stars_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_stars_repo_licenses": [ "AFL-3.0" ], "max_stars_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_stars_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 150, "size": 547 }
\chapter{Keyboard Design} \label{keyboard_design} \section{Word-gesture Keyboard Implementation} \subsection{Lacking Word-recognition} \label{lacking_word_recognition} Recreating a word-gesture keyboard from scratch with the limited, non-commercial information available was considered to be outside of the scope of this thesis. Word-recognition has already been proven to work and benefits word-gesture keyboards, including mid-air, by providing high recognition rates of word-gesture shapes (e.g., low error rate and high text-entry rates) \cite{ref_shape_writing,ref_the_word_gesture_keyboard,ref_shapewriter_iphone,ref_shark_wgk,ref_shorthand_writing,ref_vulture}. Word-recognition is a requirement for traditional word-gesture keyboards because the intended word-gestures that are being generated are \textit{unknown}. However, the word-gestures that were generated in this thesis were \textit{known} in advance. Because the software has preexisting knowledge of the intended word-gesture shapes, it was concluded that word-recognition was not absolutely necessary to build a fully featured word-gesture keyboard. Therefore, a pseudo word-gesture keyboard implementation based on the presented gesture-shapes was used. Using a pseudo word-gesture keyboard implementation presented its own challenges and limitations. One possible limitation was that character production, including detected errors, occurred mid-gesture. This meant that participants were more likely to stop mid-gesture and interrupt the gesturing-process to correct errors rather than following through with the full word-gesture shape as in traditional word-gesture keyboards. This was expected to slow text-entry rates. Another possible issue was that since the pseudo word-gesture keyboard did not implement shape-recognition, the gesture-path had to be analyzed as it was being drawn. Therefore, key ``presses'' had to be interpreted through path changes or by the user passing through the expected key. Section~\ref{design} explains how this works in greater detail. Lacking shape-recognition could lead to higher error rates. Additionally, because gesture-shapes were not recognized against a compendium of common words, these limitations allowed user-generation of non-words (e.g., words not in any dictionary). Although these limitations existed, the results of this thesis can be extrapolated to traditional word-gesture keyboards. This claim is justified by the Leap Pinch-air Keyboard ($M = 11.3$ WPM) performing consistently with the pinching-method from Vulture ($M = 11.8$ WPM) for text-entry rates in a single session with no training \cite{ref_vulture}. Additionally, the Leap Pinch-air Keyboard reached 58\% of the text-entry rate of direct touch input, which was proportional to Vulture at 59\% of the text-entry rate of direct touch input \cite{ref_vulture}. Many of these changes and limitations occurred on the back-end (software) while still presenting a similar experience for the user to traditional word-gesture keyboards. \subsection{Design} \label{design} The pseudo word-gesture implementation was created by analyzing the user's generated gesture compared to the expected gesture as it was being drawn to determine which keys were being pressed. The assumptions for detecting a key press were based off the known word being gestured and noticeable deviations made in the gesture's direction. 
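To make the detection idea concrete, the sketch below illustrates one way such deviation-based key detection can be structured. It is only an illustration: the function and variable names are invented here, the threshold values are taken from the description that follows, and the interpretation that a direction change counts as a deviation when the interior angle of the path drops below the stated threshold is an assumption rather than the thesis implementation.
\begin{verbatim}
import math

# Illustrative sketch only; names, structure, and the angle interpretation are
# assumptions, not the thesis implementation. Threshold values mirror the text:
# enlarged expected-key radius of 76.8 px, a deviation registered when the
# interior path angle drops below 165 degrees (off the protected path), and a
# minimum spacing of 48 px between detected deviations.
KEY_RADIUS = 76.8
DEVIATION_ANGLE = 165.0
MIN_DEVIATION_SPACING = 48.0

def interior_angle(a, b, c):
    """Interior angle (degrees) at point b for the path a -> b -> c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 180.0
    cos_t = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def detect_press(trail, expected_key, last_deviation=None):
    """Decide whether the newest trail point counts as a key 'press'.

    trail        -- interpolated gesture points (oldest first, ~16 px apart)
    expected_key -- (x, y) centre of the next expected, enlarged key
    """
    if len(trail) < 3:
        return False
    a, b, c = trail[-3], trail[-2], trail[-1]

    # Case 1: the gesture has entered the enlarged expected key.
    if math.hypot(c[0] - expected_key[0], c[1] - expected_key[1]) <= KEY_RADIUS:
        return True

    # Case 2: a sharp change of direction, far enough from the last deviation.
    turned = interior_angle(a, b, c) < DEVIATION_ANGLE
    spaced = (last_deviation is None or
              math.hypot(b[0] - last_deviation[0],
                         b[1] - last_deviation[1]) >= MIN_DEVIATION_SPACING)
    return turned and spaced
\end{verbatim}
In the actual system the deviation threshold differs on and off the protected path between keys, as described next.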
To reduce the chance of erroneous keys being pressed, the sizes of the characters expected to be pressed were exaggerated and the deviation threshold in gestures were lowered between key paths. The displayed keys were 64x64 pixels in size with a gap of 10 pixels between each key. The actual size of the keys was dependent on the display device being used. The next expected letter to be pressed in a word was changed into a circular key with a radius of 76.8 pixels, 20\% larger than the key widths, in order to make hitting keys even easier. Figure~\ref{key_bloating} shows how keys were changed behind the scenes. However, the software presented no visual feedback to the participant of the increased key sizes. \begin{figure}[!t] \centering \includegraphics[width=5in]{Figures/fig_bloat_key} \caption[Larger Key Example]{The next expected key to be pressed was increased in size to make pressing it easier. The above visual shows how it was interpreted by the software, but there was no visual feedback presented to users.} \label{key_bloating} \end{figure} An interpolated trail with points at a minimum of 16 pixels apart was used in determining deviations in word-gesturing. The angle of detecting a deviation was 165 degrees for all areas that were not on the expected path to the next key and was 90 degrees while on the expected path. Deviations in gesture path had to be at least 48 pixels away from each other to be counted as a ``press''. The expected path between two keys comprised an area from the previous expected key to the next expected key with a width 62.5\% larger than key size, or 104 pixels. Figure~\ref{protected_path} shows how the expected path protects against natural deviations when moving from one key to the next. \begin{figure}[!t] \centering \begin{minipage}[t]{3in} \includegraphics[width=2.5in]{Figures/fig_path_no_protection} \subcaption{No Path Protection\ \ \ \ \ \ \ \ \ } \end{minipage} \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_path_no_error} \subcaption{Path Protection} \end{minipage} \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_path_with_error} \subcaption{Error with Path Protection} \end{minipage} \caption[Protected Path Example]{A path between the currently pressed letter and the next letter significantly reduces the chance of detecting erroneous input. (a) shows an error detected with an angle less than 165 degrees; (b) shows how as long as the user stayed on the path, errors were significantly reduced; and (c) shows that errors could still be detected on a protected path if a deviation with an angle less than 90 degrees was detected.} \label{protected_path} \end{figure} The specific values for detecting key presses were found using trial and error. These were used to create an experience as close to a traditional word-gesture keyboard as possible. Whereas the word-recognition implementation showed the transcribed word after the completed gesture, the pseudo-implementation showed participants real-time updates of detected character presses along the keyboard path. This was determined to be an acceptable limitation as explained in Section~\ref{lacking_word_recognition}. \subsection{Display} \subsubsection{Keyboard layout} The keyboard layout, seen in Figure~\ref{keyboard_layout}, was a typical QWERTY keyboard with key sizes of 64x64 pixels and gaps of 10 pixels. 
All special keys and number keys were removed to simplify the keyboard and a backspace key added to the keyboard's right side to allow for erroneous character deletion. \begin{figure}[t] \centering \includegraphics[width=6in]{Figures/fig_final_keyboard} \caption[Display: Keyboard Layout]{The keyboard layout used during the full study.} \label{keyboard_layout} \end{figure} \subsubsection{Text area} Figure~\ref{text_area} shows how two text areas were used to display text to participants. The top text-area displayed a presented word, shown in Figure~\ref{text_a}, and the bottom text-area displayed the presented word's transcription, shown in Figure~\ref{text_b}. Both the presented word and transcription's characters were colored green when correctly matched. If errors were made during transcription, only characters in the transcribed text would display in red. The participant could then use the backspace to correct errors. When the transcription matched the presented word, both text-areas were highlighted in green, as seen in Figure~\ref{text_c}. \begin{figure}[!b] \centering \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_idle_keyboard} \subcaption{Displayed Word} \label{text_a} \end{minipage} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_error_keyboard} \subcaption{Transcribed Error} \label{text_b} \end{minipage} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_correct_keyboard} \subcaption{Completed Word} \label{text_c} \end{minipage} \caption[Display: Text Area]{Examples of how the text areas change when showing transcribed text. (a) shows the word to be transcribed as it first appears; (b) shows how correct letters were colored green in both text areas and transcription errors were colored red; and (c) shows the user-generated text and presented word highlighted to indicate correct transcription.} \label{text_area} \end{figure} \subsubsection{Real-time updates} As a participant was drawing the gesture-shape of a word, their progress was tracked in real-time, as shown in Figure~\ref{display_area}. For the keyboards that tracked a finger or stylus, the software displayed a cylinder to indicate the input's position and direction in 3-dimensional space relative to the virtual keyboard as shown in Figure~\ref{update_a}. In addition, it displayed which letter the user's input hovered over by projecting a blue dot to the corresponding virtual keyboard location. As seen in Figure~\ref{update_b}, the software showed the participant the gesture-path that they were traveling in addition to the letters that had been pressed. The gesture-trail decayed over time in order to not clutter the display. \begin{figure}[!t] \centering \begin{minipage}[t]{2.9in} \includegraphics[width=3in]{Figures/fig_update1_keyboard} \subcaption{Prior to pressing `d'} \label{update_a} \end{minipage} \begin{minipage}[t]{2.9in} \includegraphics[width=3in]{Figures/fig_update2_keyboard} \subcaption{During the word-gesture} \label{update_b} \end{minipage} \caption[Display: Real-time Updates]{Examples of the real-time display for word-gesturing. (a) shows the user just about to press the first character; and (b) shows the word gesturing process for the word ``decent.''} \label{display_area} \end{figure} \subsection{Calibration} Each mid-air keyboard, and the Leap Surface Keyboard, could be calibrated in a manner similar to Personal Space \cite{ref_alvin_thesis}, seen in Figure~\ref{calibration_in_progress}. 
Many of the calibration spaces, however, required direct interaction instead of projecting the inputs onto the interaction plane. Default calibrations were adequate for most participants, but recalibration was optional. However, calibration had less of a lasting effect because many participants repositioned the Leap Motion controller itself. Participants were encouraged to adjust the controller's position to promote usability. This was sometimes a greater factor than motor space calibration for translating precise movement to the virtual keyboard. Because this thesis was not an accessibility study, participants were allowed to calibrate the keyboard with their arms rested or raised. Therefore, this thesis did not address the ``Gorilla Arm Syndrome'' mentioned in Section~\ref{gorilla_arm_syndrome}. \begin{figure}[!t] \centering \begin{minipage}[t]{4in} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_calib_1} \end{minipage} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_calib_2} \end{minipage} \end{minipage} \begin{minipage}[t]{4in} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_calib_3} \end{minipage} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_calib_4} \end{minipage} \end{minipage} \caption[Calibration]{Participants were able to follow on-screen instructions to calibrate the interaction space using their finger.} \label{calibration_in_progress} \end{figure} \subsubsection{Motor space and display space} The researcher attempted to place the Leap Motion controller to adjust the calibrated interaction plane, or the motor space, to be as parallel to the screen, or display space, as possible. However, participants were allowed to adjust the Leap Motion controller to a position that felt most comfortable to them. Moving the controller typically resulted in the motor space being oriented perpendicular to a participant's arm rather than parallel to the display space. Also to note, when working with keyboards that fully utilized the 3rd-dimension, an interaction plane angled away from a participant was sometimes more effective than a straight plane perpendicular to the floor. Figure~\ref{plane_angle} shows the difference between a straight plane and angled plane. \begin{figure}[!t] \centering \begin{minipage}[t]{4in} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_plane_straight} \subcaption{Straight Plane} \end{minipage} \begin{minipage}[t]{1.9in} \includegraphics[width=2in]{Figures/fig_plane_angled} \subcaption{Angled Plane} \end{minipage} \end{minipage} \caption[Angled Plane]{Examples of a straight plane versus an angled plane.} \label{plane_angle} \end{figure} The size of the motor space was dependent on either the device the keyboard was presented on or the calibration of the keyboard's interaction plane. For all of the keyboards, the motor space was mapped to a display space of 952x212 pixels and keys that were 64x64 pixels with gaps of 10 pixels. The real-world size of the keyboard display was dependent on the screen being used. Figure~\ref{motor_space_size} shows the average sizes of the motor spaces. 
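As a purely illustrative sketch of this mapping (the corner-based parameterisation and all names below are assumptions made for the example, not the thesis code), a tracked point can be projected onto the calibrated plane and expressed in the 952x212 pixel display space as follows:
\begin{verbatim}
import numpy as np

# Illustrative mapping from a calibrated interaction plane (motor space) to the
# 952 x 212 px keyboard display space. The three-corner parameterisation and
# the function name are assumptions made for this example.
DISPLAY_W, DISPLAY_H = 952, 212

def to_display(point, top_left, top_right, bottom_left):
    """Project a tracked 3-D point onto the calibrated plane -> display pixels.

    The plane is defined by three calibrated corners given in the tracker's
    coordinate frame (numpy arrays); the point is in the same frame.
    """
    u_axis = top_right - top_left      # direction along the keyboard's width
    v_axis = bottom_left - top_left    # direction along the keyboard's height
    rel = point - top_left
    # Normalised plane coordinates; (0, 0) is the top-left corner and (1, 1)
    # the bottom-right corner of the calibrated plane.
    u = np.dot(rel, u_axis) / np.dot(u_axis, u_axis)
    v = np.dot(rel, v_axis) / np.dot(v_axis, v_axis)
    return u * DISPLAY_W, v * DISPLAY_H

# Example: a fingertip near the middle of a 222.8 x 54.1 mm plane (the average
# Leap Surface calibration reported later in this chapter).
tl = np.array([0.0, 0.0, 0.0])
tr = np.array([222.8, 0.0, 0.0])
bl = np.array([0.0, -54.1, 0.0])
print(to_display(np.array([111.4, -27.0, 5.0]), tl, tr, bl))  # roughly (476, 106)
\end{verbatim}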
\begin{figure}[!t] \centering \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_calibration_touch} \subcaption{Touch Screen} \label{fig_calibration_touch} \end{minipage} \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_calibration_surface} \subcaption{Leap Surface} \label{fig_calibration_surface} \end{minipage} \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_calibration_static} \subcaption{Static/Predictive/Bimodal} \label{fig_calibration_static} \end{minipage} \begin{minipage}[t]{2.5in} \includegraphics[width=2.5in]{Figures/fig_calibration_pinch} \subcaption{Leap Pinch-air} \label{fig_calibration_pinch} \end{minipage} \caption[Motor Space Comparison]{The average size of the motor spaces for each keyboard with a gradient color scale showing the plane orientation. The closest areas are represented by \textit{yellow} and the farthest by \textit{blue}. (a) shows the standard Touch Sceen motor space; (b) shows the average calibration for the Leap Surface motor space; (c) shows the average calibration for the Leap Static-air, Predictive-air, and Bimodal-air motor spaces (a single calibration was typically used for all three); and (d) shows the average calibration for the Leap Pinch-air motor space.} \label{motor_space_size} \end{figure} \subsection{Dictionary Creation} \label{dictionary_creation} For the purposes of this thesis, the term ``dictionary'' denotes the pool of words presented to the participant. To make each keyboard experience as similar as possible, a custom dictionary was created for each keyboard interaction style. While different words were used for each dictionary, the words were selected by using a custom gesture-shape dissimilarity algorithm. This algorithm minimized dissimilarity between word gesture-shapes as shown in Appendix~\ref{dictionary_sets}. The algorithm's results were further reduced to common words between 3 and 6 characters in length. These unique word sets became each keyboard's dictionary. \subsubsection{Deviating from the standard phrases} To evaluate text-entry, typically predefined phrases were generated and used \cite{ref_phrase_sets}. However, due to the limited number of trials and the abundance of different conditions, single words were chosen as opposed to randomly selecting from a compendium of predefined phrase sets. The goal was to create new word dictionaries, avoiding whole phrases to prevent confusing language, using a custom algorithm to minimize the dissimilarity of different gesture-shapes. No previous research existed on words with similar gesture-shapes, or their benefit, but this thesis's purposeful deviation created similar keyboard experiences to standardize results across many conditions and few trials. \subsubsection {Gesture-shape dissimilarity} \label{gesture_shape_dissimilarity} Originally, this thesis considered the Fr\'echet Distance to find similar word gesture-shapes using sets of words with minimal distance between each letter within a gesture-shape. While Fr\'echet Distance gave acceptable results, there were noticeable differences in \textit{some} of the gesture-shape sets. Figure~\ref{fig_words_frechet} demonstrates these differences, which appears to show more than one primary gesture-shape returning for the set. \begin{figure}[!b] \centering \includegraphics[width=5in]{Figures/fig_words_frechet} \caption[Fr\'echet Word Set]{An example of a gesture-shape set generated by the Fr\'echet Distance algorithm: `crass', `creed', `crews', `feeds', `feted', `fetes'. 
The Fr\'echet Distance, at times, generated more than one gesture-shape pattern per word set.} \label{fig_words_frechet} \end{figure} In order to achieve gesture-shapes that were more similar than shapes found by the Fr\'echet Distance, the custom dissimilarity algorithm was created. The words were pulled from the Oxford English Dictionary. The dissimilarity between two words' gesture-shapes was defined by the formula \begin{equation} dissimilarity(P,\ Q) = \frac{\sum\limits_{i = 2}^{N} \frac{1}{2} \left(\left(\frac{\mid dist(P_{i},\ P_{i-1}) - dist(Q_{i},\ Q_{i-1})\mid}{max\ distance}\right) + \left(\frac{angle(P_{i} - P_{i-1},\ Q_{i} - Q_{i-1})}{\pi}\right)\right)}{N - 1} \end{equation} where $P$ and $Q$ were two words of $N$ characters in length, $i$ was a particular character of $P$ or $Q$, $P_i$ and $Q_i$ were the vector locations on the virtual keyboard, $max\ distance$ was the maximum distance between any two letters on the virtual keyboard, $dist(...)$ was the distance between two vector locations, and $angle(...)$ was the angle between two vectors. The dissimilarity formula generated values in the range [0, 1] and treated every pair of paths between two letters of two words with equal weight. The objective was then to find the sets of words with the lowest dissimilarity. \section{Word-gesture Keyboards} All of the word-gesture keyboards created used the same word-gesturing implementation but differed in their interaction method and in how touch was handled as a delimiter between words. \subsection{Touch Screen Keyboard} \subsubsection{Interaction method} The Touch Screen Keyboard was implemented to mirror the de facto method for word-gesture keyboards, which are generally touch-based. The user interacted directly with the touch screen surface. \subsubsection{Word separation} Figure~\ref{touch_screen_press_comparison} shows that word separation for the Touch Screen Keyboard worked in the same way as typical word-gesture keyboards for phones and tablets. Touch was simulated simply by pressing a finger against the surface, drawing the word-gesture, and then removing the finger from the surface. \begin{figure}[h] \centering \begin{minipage}[t]{5.8in} \begin{minipage}[t]{2.85in} \includegraphics[width=2.9in]{Figures/fig_touch_screen_hover} \end{minipage} \begin{minipage}[t]{2.9in} \includegraphics[width=2.9in]{Figures/fig_touch_screen_press} \end{minipage} \end{minipage} \caption[Touch Screen Word Separation]{A touch was simulated when the tabletop screen was touched with the user's pointer finger.} \label{touch_screen_press_comparison} \end{figure} \subsubsection{Size of the motor space} The motor space for the Touch Screen Keyboard was larger than the other keyboard motor spaces because the display device was intrinsically larger and the Touch Screen's motor space and display space were coupled together. The display device used was a C4667PW with a $46^{\prime\prime}$ display and a maximum resolution of 1920x1080 pixels. Figure~\ref{fig_3m_display} shows the C4667PW, a 3M\textsuperscript{TM} Multi-touch Display. When scaled for the maximum resolution, the Touch Screen display space and motor space were both 50.49x11.24 $cm$, with keys that were 3.39x3.39 $cm$ and gaps between keys of 0.53 $cm$. If a higher resolution had been available, it would have been chosen to decrease the overall size of the display space and motor space to match the other keyboards more closely.
Though larger than desired, the 3M\textsuperscript{TM} Multi-touch Display was still preferred over very small touch-based word-gesture keyboards such as those on phones or tablets. A similarly sized motor space helped standardize results between touch and mid-air. \begin{figure}[!b] \centering \includegraphics[width=5in]{Figures/fig_3m_display} \caption[3M\textsuperscript{TM} Multi-touch Display]{The $46^{\prime\prime}$ C4667PW, a 3M\textsuperscript{TM} multi-touch tabletop display.} \label{fig_3m_display} \end{figure} \subsection{Leap Surface Keyboard} \label{leap_surface} \subsubsection{Interaction method} The Leap Surface Keyboard used the Leap Motion controller to track a wooden stylus for interaction. It was designed to simulate a touch screen using a mid-air plane projected onto a surface. This was done by inserting the Leap Motion controller into a custom holder, shown in Figure~\ref{fig_leap_holder}, and projecting the mid-air keyboard over a keyboard printed on paper. As an added note, the Leap Surface Keyboard worked in exactly the same way as the Static-air Keyboard in Section~\ref{static_air}, except that it was calibrated to a surface rather than to mid-air. A stylus was chosen as the interaction tool to allow for accurate surface emulation because the Leap Motion controller was in a position that made it difficult to reliably track a participant's hand or finger. Unfortunately, the Leap Motion controller hardware, at the time of this thesis, was only designed to recognize hands from one direction, necessitating that the controller be positioned at the bottom of the holder rather than the top. \begin{figure}[h] \centering \includegraphics[width=5in]{Figures/fig_leap_holder} \caption[Leap Surface Holder]{The custom-built holder, projecting an interaction plane onto a printed keyboard surface.} \label{fig_leap_holder} \end{figure} \subsubsection{Word separation} Word separation for the Leap Surface Keyboard worked in a similar way to how touch was simulated using a stylus for a phone or tablet. Figure~\ref{leap_surface_press_comparison} shows how touch was simulated by pressing the tip of the stylus against the surface of the printed keyboard. The word-gestures were drawn and then the stylus removed from the surface to complete the action. \begin{figure}[!t] \centering \begin{minipage}[t]{5.8in} \begin{minipage}[t]{2.85in} \includegraphics[width=2.9in]{Figures/fig_surface_hover} \end{minipage} \begin{minipage}[t]{2.9in} \includegraphics[width=2.9in]{Figures/fig_surface_touch} \end{minipage} \end{minipage} \caption[Leap Surface Word Separation]{A touch was simulated when the stylus hit the paper surface.} \label{leap_surface_press_comparison} \end{figure} \subsubsection{Size of the motor space} Figure~\ref{fig_calibration_surface} shows the average calibrated motor space for the Leap Surface Keyboard. The average keyboard was 22.28x5.41 $cm$, with keys that were 1.50x1.50 $cm$ and gaps between keys of 0.23 $cm$. \subsection{Leap Static-air Keyboard} \label{static_air} \subsubsection{Interaction method} The Leap Static-air Keyboard used the Leap Motion controller to track the pointer finger of either hand for interaction. It was designed to simulate a virtual touch screen in mid-air by projecting a quadrilateral plane directly above the interactive surface. The pointer finger would then be used to penetrate the plane to simulate touch.
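To make the notion of plane penetration concrete, the following sketch (not part of the thesis software; the plane representation, the hysteresis value, and all names are assumptions made purely for illustration) treats the calibrated interaction plane as a point and unit normal and reports a simulated touch once the fingertip crosses the plane:
\begin{verbatim}
# Illustrative sketch only (not thesis code).  Assumes the calibration yields a
# point on the interaction plane and a unit normal pointing toward the user;
# positions are in millimetres and the hysteresis value is hypothetical.
import numpy as np

def signed_distance(fingertip, plane_point, plane_normal):
    """Positive while the fingertip is on the user's side of the plane,
    negative once the plane has been penetrated."""
    return float(np.dot(fingertip - plane_point, plane_normal))

def update_touch(fingertip, plane_point, plane_normal, touching, hysteresis=5.0):
    """Small state machine with hysteresis to avoid chattering when the
    fingertip hovers near the plane."""
    d = signed_distance(fingertip, plane_point, plane_normal)
    if not touching and d < 0.0:
        return True           # plane penetrated: simulate touch-down
    if touching and d > hysteresis:
        return False          # finger pulled back: simulate touch-up
    return touching

plane_point = np.array([0.0, 200.0, 0.0])
plane_normal = np.array([0.0, 0.0, 1.0])
print(update_touch(np.array([10.0, 210.0, -2.0]), plane_point, plane_normal, False))
\end{verbatim}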
\subsubsection{Word separation} Word separation for the Leap Static-air Keyboard worked in a similar way to any ordinary touch-based word-gesture keyboard; however, the simulated touch plane was in mid-air. Touch was simulated by using either pointer finger to penetrate the mid-air interaction plane, as seen in Figure~\ref{static_press_comparison}. While maintaining the intersection, the pointer finger was used to draw the word-gesture. By pulling the finger away from the mid-air interaction plane, touch was released. \begin{figure}[!t] \centering \begin{minipage}[t]{5.8in} \begin{minipage}[t]{2.85in} \includegraphics[width=2.9in]{Figures/fig_static_hover} \end{minipage} \begin{minipage}[t]{2.9in} \includegraphics[width=2.9in]{Figures/fig_static_touch} \end{minipage} \end{minipage} \caption[Leap Static-air Word Separation]{A touch was simulated by penetrating the interaction plane.} \label{static_press_comparison} \end{figure} \subsubsection{Size of the motor space} Figure~\ref{fig_calibration_static} shows the average calibrated motor space for the Leap Static-air Keyboard. The average keyboard was 13.71x10.07 $cm$, with keys that were 0.92x0.92 $cm$ and gaps between keys of 0.14 $cm$, the same as the Predictive-air and Bimodal-air keyboards. \subsection{Leap Predictive-air Keyboard} \label{predictive_air_keyboard} \subsubsection{Interaction method} The Leap Predictive-air Keyboard used the Leap Motion controller to track the pointer finger of either hand for interaction. It was designed to simulate a virtual touch screen by projecting a quadrilateral plane in the air. However, instead of having to interact with a static, unchanging plane, the Predictive-air Keyboard associated the interaction plane with the participant's pointer finger. As the pointer finger moved forward or backward, the plane followed. By analyzing forward and backward hand gestures in the $z$-direction, the Predictive-air Keyboard predicted when a touch was intended and moved the interaction plane to the pointer finger accordingly. Slow gestures served mostly to move the plane, whereas quick gestures generally snapped the plane to the pointer finger. The Leap Motion API provided the predictor values for forward and backward hand gestures. \subsubsection{Word separation} Word separation for the Predictive-air Keyboard worked in a similar way to any ordinary touch-based word-gesture keyboard. However, the simulated touch plane was in mid-air. This plane was kept at a consistent distance from the tracked pointer finger until a forward hand gesture was detected, simulating a touch. The pointer finger could then be used to draw the word-gesture until it was completed. Finally, by making a backward hand gesture away from the interaction plane, the simulated touch was released. The plane interaction was visually similar to Figure~\ref{static_press_comparison}, the Leap Static-air Keyboard interaction. \subsubsection{Size of the motor space} Figure~\ref{fig_calibration_static} shows the average calibrated motor space for the Leap Predictive-air Keyboard. The average keyboard was 13.71x10.07 $cm$, with keys that were 0.92x0.92 $cm$ and gaps between keys of 0.14 $cm$, the same as the Static-air and Bimodal-air keyboards. \subsection{Leap Bimodal-air Keyboard} \subsubsection{Interaction method} The Leap Bimodal-air Keyboard was designed to utilize two inputs: the Leap Motion controller and a standard keyboard.
The Leap Motion controller tracked the pointer finger of either hand by projecting a quadrilateral plane in the air and snapping the movements of the pointer finger to the plane. A touch was simulated by using the secondary input; in this case, a standard keyboard's space bar. \subsubsection{Word separation} In order to move from one word to the next for the Leap Bimodal-air Keyboard, the user activated a secondary input: the standard keyboard's space bar. The interaction plane for simulated touch, as seen in Figure~\ref{bimodal_press}, was still projected in mid-air. Touch was simulated by using either pointer finger to determine the position over the interaction plane in the $x$ and $y$ directions and then by pressing and holding the space bar. While holding down the space bar, the pointer finger was used to draw the word-gesture and finally the space bar was released to end the touch. \begin{figure}[!t] \centering \includegraphics[width=5in]{Figures/fig_leap_bimodal} \caption[Leap Bimodal-air Word-separation]{A touch was simulated by pressing the space bar on the keyboard.} \label{bimodal_press} \end{figure} \subsubsection{Size of the motor space} Figure~\ref{fig_calibration_static} shows the average calibrated motor space for the Leap Bimodal-air Keyboard. The average keyboard was 13.71x10.07 $cm$, with keys that were 0.92x0.92 $cm$ and gaps between keys of 0.14 $cm$, the same as the Static-air and Predictive-air keyboards. \subsection{Leap Pinch-air Keyboard} \subsubsection{Interaction method} The Leap Pinch-air Keyboard used the Leap Motion controller to track the palm of either hand for interaction. It was designed to project a quadrilateral plane in mid-air and snap the palm position to the plane in the $z$-direction. The hand could then be used to form a pinch-gesture to simulate touch. The Leap Motion API provided the predictor values for recognizing pinching-gestures. It is important to note that, unlike in Vulture \cite{ref_vulture}, no glove was required and many different pinch-gestures were recognized. \subsubsection{Word separation} A pinching-gesture was used to move to the next word in the sequence for the Leap Pinch-air Keyboard. However, the interaction plane was still projected in mid-air. Touch was simulated by using either hand and then forming and holding a pinching-gesture, as shown in Figure~\ref{pinch_press_comparison}. While the pinch was held, the word-gesture was drawn; releasing the pinch ended the ``touch.'' \subsubsection{Size of the motor space} Figure~\ref{fig_calibration_pinch} shows the average calibrated motor space for the Leap Pinch-air Keyboard. The average keyboard was 16.86x9.04 $cm$, with keys that were 1.13x1.13 $cm$ and gaps between keys of 0.18 $cm$. \begin{figure}[h] \centering \begin{minipage}[t]{5.8in} \begin{minipage}[t]{2.85in} \includegraphics[width=2.9in]{Figures/fig_pinch_hover} \end{minipage} \begin{minipage}[t]{2.9in} \includegraphics[width=2.9in]{Figures/fig_pinch_touch} \end{minipage} \end{minipage} \caption[Leap Pinch-air Word Separation]{A touch was simulated by making a pinching gesture.} \label{pinch_press_comparison} \end{figure}
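As a final illustrative aside, the sketch below (not part of the thesis software) shows one way the pinch delimiter could be treated as a small state machine. It assumes the tracking API exposes a per-frame pinch-strength value in $[0, 1]$; the threshold values are hypothetical.
\begin{verbatim}
# Illustrative sketch only (not thesis code).  Assumes a pinch-strength value
# in [0, 1] is available each frame; the thresholds below are hypothetical and
# provide hysteresis so the simulated touch does not flicker.
PINCH_START = 0.8   # begin the simulated touch when strength rises above this
PINCH_END = 0.5     # end the simulated touch when strength falls below this

def update_pinch(strength, pinching):
    if not pinching and strength >= PINCH_START:
        return True      # pinch formed: start drawing the word-gesture
    if pinching and strength <= PINCH_END:
        return False     # pinch released: word-gesture complete
    return pinching

state = False
for s in [0.1, 0.6, 0.9, 0.7, 0.4]:
    state = update_pinch(s, state)
    print(s, state)
\end{verbatim}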
\documentclass[10pt,english]{article}
%\usepackage[T1]{fontenc}
%\usepackage[utf8]{inputenc}
\usepackage[a4paper,left=0.4in, right=0.4in,top=0.5in,bottom=0.6in]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{subcaption}
\usepackage{epstopdf}
\usepackage{graphicx}
\usepackage{mathrsfs}
\usepackage{mathtools}
\usepackage{textcomp}
\usepackage{xcolor}
%\usepackage{multicol}
%\usepackage{tikz}
\usepackage{hyperref}
% code listing settings
\usepackage{listings}
\lstset{
    language=Python,
    basicstyle=\ttfamily\small,
    aboveskip={1.0\baselineskip},
    belowskip={1.0\baselineskip},
    columns=fixed,
    extendedchars=true,
    breaklines=true,
    tabsize=4,
    prebreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
    frame=lines,
    showtabs=false,
    showspaces=false,
    showstringspaces=false,
    keywordstyle=\color[rgb]{0.627,0.126,0.941},
    commentstyle=\color[rgb]{0.133,0.545,0.133},
    stringstyle=\color[rgb]{1,0,0},
    numbers=left,
    numberstyle=\small,
    stepnumber=1,
    numbersep=10pt,
    captionpos=t,
    escapeinside={\%*}{*)}
}
\author{Steven Porretta \\ Student \# 100756494}
\title{Assignment \#4}
\date{}
\begin{document}
\maketitle
Code can be found at \url{https://github.com/0xSteve/learning_automata_simulator}
\section{Question 1}
In this section we will examine some code snippets from the first question.
\begin{lstlisting}[label={list:q1-testbench},caption=Testbench code for the Tsetlin.]
c2 = 0.7
c1 = 0.05
for i in range(0, 7):
    print("c1 = " + str(c1) + ", c2 = " + str(c2) + ", N = 13.")
    a = la.Tsetlin(13, 2, [c1, c2])
    a.simulate(50, 30001)
    b = ala.Tsetlin.stationary_probability_analytic([c1, c2], 13)
    c = ala.Tsetlin.number_of_states_estimate([c1, c2])
    print("Tsetlin P1(infinity) = " + str(b) + "(Analytic)")
    print("Tsetlin P1(infinity) = " + str(a.action_average[0]) + "(Simulated)")
    print("Tsetlin # of states required = " + str(c) + "(Estimate)")
    c1 += 0.1
    c1 = round(c1, 2)
\end{lstlisting}
This excerpt of code generates all of the required output for this question, as can be seen in the following snippet.
\begin{lstlisting}[label={list:q1-output},caption=Testbench output.]
c1 = 0.05, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9999999890725503(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 3(Estimate)
c1 = 0.15, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9999778874501014(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 4(Estimate)
c1 = 0.25, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9990142374252533(Analytic)
Tsetlin P1(infinity) = 0.999998000067(Simulated)
Tsetlin # of states required = 6(Estimate)
c1 = 0.35, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9865794150962881(Analytic)
Tsetlin P1(infinity) = 0.999797340089(Simulated)
Tsetlin # of states required = 9(Estimate)
c1 = 0.45, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9151468144874294(Analytic)
Tsetlin P1(infinity) = 0.980783973868(Simulated)
Tsetlin # of states required = 18(Estimate)
c1 = 0.55, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.7453193640776348(Analytic)
Tsetlin P1(infinity) = 0.786514449518(Simulated)
Tsetlin # of states required = 0(Estimate)
c1 = 0.65, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.5680008401435711(Analytic)
Tsetlin P1(infinity) = 0.570874970834(Simulated)
Tsetlin # of states required = 0(Estimate)
\end{lstlisting}
Now that it is seen working as one would expect, allowing for rounding errors from Python 3.6, it is time to take a look at the snippets of code governing the functionality of the Tsetlin and Krylov automata.
\begin{lstlisting}[label={list:q1-tsetlin},caption=Tsetlin core code.]
def next_state_on_reward(self):
    '''Find the next state of the learner, given that the teacher rewarded.'''
    if (self.current_state % (self.N / self.R) != 1):
        self.current_state -= 1

def next_state_on_penalty(self):
    '''Find the next state of the learner, given that the teacher penalized.'''
    if(self.current_state % (self.N / self.R) != 0):
        self.current_state += 1
    elif(self.current_state % (self.N / self.R) == 0):
        # Don't really add states, just cycle through N, 2N, 4N, etc.
        if(self.current_state != self.N):
            a = (self.N / self.R) % self.N
            self.current_state = a + self.current_state
        else:
            self.current_state = self.N / self.R

# Determine the next state as the teacher.
def environment_response(self):
    '''Determine the next state of the learner from the perspective of the teacher.'''
    response = uniform(0, 1)
    penalty_index = 1
    if(self.current_state <= self.n):
        self.actions[0] += 1
        penalty_index = 0
    else:
        self.actions[1] += 1
    if(response > self.c[penalty_index]):
        # Reward.
        self.next_state_on_reward()
    else:
        # Penalty.
        self.next_state_on_penalty()
\end{lstlisting}
The above is the core code of the Tsetlin machine, governing state translations and action choices. Essentially, whenever the current state is in the range 1 to N, the automaton chooses action $\alpha_1$, and $\alpha_2$ otherwise. This code is essentially the same for the Krylov machine, which we will see in the next section.
\section{Question 2}
\begin{lstlisting}[label={list:q2-testbench},caption=Testbench for the Krylov 2-action.]
for i in range(0, 7):
    print("c1 = " + str(c1) + ", c2 = " + str(c2) + ", N = 13.")
    a = la.Tsetlin(13, 2, [c1/2, c2/2])
    a.simulate(50, 30001)
    b = ala.Tsetlin.stationary_probability_analytic([c1, c2], 13)
    c = ala.Tsetlin.number_of_states_estimate([c1, c2])
    d = la.Krylov(13, 2, [c1, c2])
    d.simulate(10, 50000)
    e = ala.Tsetlin.stationary_probability_analytic([c1, c2], 13)
    f = ala.Tsetlin.number_of_states_estimate([c1, c2])
    print("Tsetlin P1(infinity) = " + str(b) + "(Analytic)")
    print("Tsetlin P1(infinity) = " + str(a.action_average[0]) + "(Simulated)")
    print("Tsetlin # of states required = " + str(c) + "(Estimate)")
    print("Krylov P1(infinity) = " + str(e) + "(Analytic)")
    print("Krylov P1(infinity) = " + str(d.action_average[0]) + "(Simulated)")
    print("Krylov # of states required = " + str(f) + "(Estimate)")
    c1 += 0.1
    c1 = round(c1, 2)
\end{lstlisting}
As can be seen from the code, this test bench looks very similar to the test bench of question 1; however, note that the $c$ vector for the Tsetlin automaton is now $c_1/2$, $c_2/2$. As expected, both automata behave in the same manner, as can be seen from the output snippet.
\begin{lstlisting}[label={list:q2-output},caption=Testbench output for the Krylov 2-action.]
c1 = 0.05, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9999999890725503(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 3(Estimate)
Krylov P1(infinity) = 0.9999999890725503(Analytic)
Krylov P1(infinity) = 1.0(Simulated)
Krylov # of states required = 3(Estimate)
c1 = 0.15, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9999778874501014(Analytic)
Tsetlin P1(infinity) = 0.999999333356(Simulated)
Tsetlin # of states required = 4(Estimate)
Krylov P1(infinity) = 0.9999778874501014(Analytic)
Krylov P1(infinity) = 1.0(Simulated)
Krylov # of states required = 4(Estimate)
c1 = 0.25, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9990142374252533(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 6(Estimate)
Krylov P1(infinity) = 0.9990142374252533(Analytic)
Krylov P1(infinity) = 1.0(Simulated)
Krylov # of states required = 6(Estimate)
c1 = 0.35, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9865794150962881(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 9(Estimate)
Krylov P1(infinity) = 0.9865794150962881(Analytic)
Krylov P1(infinity) = 1.0(Simulated)
Krylov # of states required = 9(Estimate)
c1 = 0.45, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.9151468144874294(Analytic)
Tsetlin P1(infinity) = 1.0(Simulated)
Tsetlin # of states required = 18(Estimate)
Krylov P1(infinity) = 0.9151468144874294(Analytic)
Krylov P1(infinity) = 1.0(Simulated)
Krylov # of states required = 18(Estimate)
c1 = 0.55, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.7453193640776348(Analytic)
Tsetlin P1(infinity) = 0.983473884204(Simulated)
Tsetlin # of states required = 0(Estimate)
Krylov P1(infinity) = 0.7453193640776348(Analytic)
Krylov P1(infinity) = 0.999998(Simulated)
Krylov # of states required = 0(Estimate)
c1 = 0.65, c2 = 0.7, N = 13.
Tsetlin P1(infinity) = 0.5680008401435711(Analytic)
Tsetlin P1(infinity) = 0.670734975501(Simulated)
Tsetlin # of states required = 0(Estimate)
Krylov P1(infinity) = 0.5680008401435711(Analytic)
Krylov P1(infinity) = 0.870114(Simulated)
Krylov # of states required = 0(Estimate)
\end{lstlisting}
Observing the code for the Krylov machine, one notices that most of the code is inherited from the Tsetlin machine, except the state translations. It is incredible that the only major distinction is that a penalty is treated as a penalty with 50\% probability and as a reward otherwise. All other code for the Krylov machine is literally inherited from the Tsetlin.
\clearpage
\begin{lstlisting}[label={list:q2-krylov},caption=Krylov core code.]
def next_state_on_penalty(self):
    '''Find the next state of the learner, given that the teacher penalized.'''
    # If this number is greater than 0.5, then penalize the learner.
    is_penalty = uniform(0, 1)
    if(is_penalty >= 0.5):
        Tsetlin.next_state_on_penalty(self)
    else:
        Tsetlin.next_state_on_reward(self)
\end{lstlisting}
\section{Question 3}
First let us consider state changes in the $L_{R-I}$, since there really are no states, but instead just an interval, $[0, 1]$, of possibilities. Consider the following code-snippet.
\begin{lstlisting}[label={list:q3-lri},caption=State Translation in the $L_{R-I}$ automaton.]
def do_reward(self, action):
    if(action == 2):
        self.p1 = self.k_r * self.p1
    else:
        self.p1 = 1 - (self.k_r * self.p2)
    self.p2 = 1 - self.p1

def do_penalty(self):
    pass
\end{lstlisting}
It can be seen from the above code-snippet that, when the environment rewards, the action probabilities are updated based on the action selected. When the environment issues a penalty, nothing happens. The Python command \textit{pass} is a command that simply does nothing, and is usually used for prototyping. In this case, pass is included to explicitly show that a penalty does nothing. From this automaton, some interesting things can be observed. First, let us consider the output of the testbench.py file.
Time complexity has been measured in terms of discrete steps instead of actual time spent in the processor. The physical time does not represent the number of actions being computed: because many operations can be processed simultaneously in the background of a system, wall-clock time is not an accurate measure. A corollary is that the number of discrete calls to the program accurately represents the time complexity when each action is considered a time unit. To better understand this, observe the following output.
\begin{lstlisting}[label={list:q3-output},caption=Testbench output for the $L_{R-I}$ automaton.]
=============================================================
The optimal K_r value is: 0.74995
The optimal lambda_r value is: 0.25005
The accuracy for k_r = 0.74995 is: 0.963
The computation time in iterations is: 18
=============================================================
=============================================================
The optimal K_r value is: 0.7811937499999999
The optimal lambda_r value is: 0.2188062500000001
The accuracy for k_r = 0.7811937499999999 is: 0.958
The computation time in iterations is: 26
=============================================================
=============================================================
The optimal K_r value is: 0.7811937499999999
The optimal lambda_r value is: 0.2188062500000001
The accuracy for k_r = 0.7811937499999999 is: 0.965
The computation time in iterations is: 16
=============================================================
=============================================================
The optimal K_r value is: 0.8124374999999999
The optimal lambda_r value is: 0.18756250000000008
The accuracy for k_r = 0.8124374999999999 is: 0.953
The computation time in iterations is: 45
=============================================================
=============================================================
The optimal K_r value is: 0.874925
The optimal lambda_r value is: 0.12507500000000005
The accuracy for k_r = 0.874925 is: 0.976
The computation time in iterations is: 57
=============================================================
=============================================================
The optimal K_r value is: 0.90616875
The optimal lambda_r value is: 0.09383125000000003
The accuracy for k_r = 0.90616875 is: 0.969
The computation time in iterations is: 160
=============================================================
=============================================================
The optimal K_r value is: 0.96865625
The optimal lambda_r value is: 0.031343750000000004
The accuracy for k_r = 0.96865625 is: 0.977
The computation time in iterations is: 1298
=============================================================
\end{lstlisting}
It is observable that as $c_1$ approaches $c_2$ the number of operations increases exponentially. It is also observable that as $n \to \infty$, $\lambda_R \to 0$. Corollary: as $c_1 \to c_2$, $\lambda_R \to 0$ and hence $k_R \to 1$. To get a better understanding of this effect, observe Figure~\ref{fig:lambda-vs-n}. It can be seen that $\lambda_R$ approaches $0$ as the time complexity goes to infinity. This is representative of the difficulty of the system. When the percent difference between penalties of choosing action 1 and action 2 approaches 0, then the time complexity approaches infinity, and the $\lambda_R$ required becomes an undesirably low value.
\begin{figure}[h!]
\centering
\includegraphics{plot.png}
\caption{$\lambda_R$ vs. $n$}
\label{fig:lambda-vs-n}
\end{figure}
\end{document}
\documentclass[12pt]{article}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{geometry}
\usepackage{hyperref}
\hypersetup{colorlinks=true,filecolor=magenta}
% \geometry{papersize={210mm,297mm},hmargin=2cm,tmargin=1.0cm,bmargin=1.5cm}
\geometry{papersize={297mm,210mm},hmargin=2cm,tmargin=2.0cm,bmargin=2.0cm}
% \geometry{papersize={210mm,297mm},hmargin=2cm,tmargin=1.5cm,bmargin=2.0cm}
% \geometry{papersize={210mm,297mm},hmargin=2.5cm,tmargin=1.5cm,bmargin=2.5cm}
\definecolor{grey95}{rgb}{0.95,0.95,0.95}
\def\bgcolour#1{\lstset{backgroundcolor=\color{#1}}}
\lstset{%
  basicstyle={\small\tt},basewidth={0.50em},
  numbers=none,numberstyle=\tiny,numbersep=10pt,
  aboveskip=10pt,belowskip=0pt,
  frame=single,framesep=2pt,framerule=0pt}
\bgcolour{grey95}
\parindent=0pt
\parskip=6pt plus 3pt minus 2pt
\begin{document}
\thispagestyle{empty}
\section*{TextMate}

TextMate \href{https://github.com/textmate/textmate}{https://github.com/textmate/textmate} is an exceptional editor available exclusively on macOS. It is simple to use, has an elegant interface and is very easily customised.

If you are using TextMate on macOS then you can use a modified LaTeX bundle (and friends) to provide syntax highlighting as well as shortcuts for compilation and inserting the language-specific environment blocks.

To install these bundles, first install TextMate's own bundles for LaTeX, Mathematica and Matlab (open TextMate/Preferences/Bundles, then click the appropriate check boxes). Next quit TextMate, then copy the following bundles to the TextMate bundles directory.
\begin{lstlisting}
cp -rf bundles/LaTeX.tmbundle $HOME/Library/Application\ Support/TextMate/Bundles/
cp -rf bundles/Cadabra.tmbundle $HOME/Library/Application\ Support/TextMate/Bundles/
cp -rf bundles/Maple.tmbundle $HOME/Library/Application\ Support/TextMate/Bundles/
\end{lstlisting}
Upon restarting TextMate you should now have extra entries in the LaTeX menu (shortcuts and tab triggers) as well as syntax highlighting. Note that you only need to install the bundles for the languages that you intend to use (with the LaTeX bundle being the obvious bare minimum).

To compile a Python-LaTeX file you can press {\tt\small control-apple-p}. This will run the {\tt\small pylatex.sh} script on the given file. This script will call Python so you may also need to adjust TextMate's version of {\tt\small PATH} to include the appropriate directory (use TextMate/Preferences/Variables and edit the {\tt\small PATH}). This same issue applies to the other languages.

You might also need to tell TextMate where to find your Python scripts. You can do so by setting {\tt\small PYTHONPATH} in TextMate/Preferences/Variables.

To revert to the original TextMate bundles, simply delete each of the above bundles by first quitting TextMate and then running
\begin{lstlisting}
rm -rf $HOME/Library/Application\ Support/TextMate/Bundles/LaTeX.tmbundle
rm -rf $HOME/Library/Application\ Support/TextMate/Bundles/Cadabra.tmbundle
rm -rf $HOME/Library/Application\ Support/TextMate/Bundles/Maple.tmbundle
\end{lstlisting}
and finish by restarting TextMate.

If the new bundles fail to appear (or remain after deletion) then you will also need to delete the {\tt\small BundlesIndex.binary} file. First quit TextMate then
\begin{lstlisting}
rm $HOME/Library/Caches/com.macromates.TextMate/BundlesIndex.binary
\end{lstlisting}
and once again finish by restarting TextMate.
\end{document}
\documentclass[12pt]{article}
\usepackage[vmargin=1in,hmargin=1in]{geometry}
\usepackage{amsmath}
\usepackage[parfill]{parskip}
\usepackage{hyperref}
\usepackage{natbib}
\usepackage{bm}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{abstract}
\usepackage{lineno}
\usepackage{setspace}
\hypersetup{pdfstartview={Fit},hidelinks}
\usepackage{caption}
\captionsetup[figure]{labelformat=empty}
\title{Ecology Appendix S2 \\ Simulation study accompanying the paper: \\ \it Modeling abundance, distribution, movement, and space use with camera and telemetry data}
\author{Richard B. Chandler$^1$\footnote{Corresponding author: [email protected]}, Daniel A. Crawford$^2$, Elina P. Garrison$^3$, \\ Karl V. Miller$^1$, Michael J. Cherry$^2$}
\begin{document}
\maketitle
\vspace{12pt}
\begin{description}
\item[$^1$] Warnell School of Forestry and Natural Resources, University of Georgia
\item[$^2$] Caesar Kleberg Wildlife Research Institute at Texas A\&M University-Kingsville
\item[$^3$] Florida Fish and Wildlife Conservation Commission
\end{description}
\clearpage

\section*{Introduction and Methods}

We conducted a small simulation study to evaluate the performance of the spatial capture-recapture model with an explicit movement process described in the manuscript. The design and parameter values were chosen to resemble the estimates from the deer example in the manuscript. A uniform capture process was simulated to resemble aerial capture and transmitter deployment. The camera design in the simulation study was the same as in the deer example, with 60 cameras spaced by 200--500 m. We simulated 90 occasions and used a fix rate of 1 location every 3 occasions. Parameters (defined in the manuscript) were $N=100$, $p^{\rm cap}=0.25$, $\lambda_0=2$, $\sigma^{{\rm det}}=50$, $\sigma^{\rm move}=600$. We considered 5 scenarios in which the autocorrelation parameter ($\rho$) of the Ornstein-Uhlenbeck movement model was assigned values: 0.55, 0.65, 0.75, 0.85, and 0.95. For each scenario, we simulated 100 datasets and fit both the data-generating model (SCR-move) and a mis-specified SCR model (SCR0) with no movement process. Inference was made using 10,000 MCMC samples from the joint posterior following a 2,000-iteration burn-in. Code to reproduce the simulation study can be found at \url{https://doi.org/10.5281/zenodo.5167653}.

\section*{Results and Discussion}

The number of individuals captured and outfitted with telemetry devices ranged from 13--35 in the simulated datasets. As with the deer example, only a small fraction (ranging from 0--10) were detected by cameras. Bias was reduced in all 5 cases when switching from the SCR0 model to the SCR-move model (Figure S1). Improvement in bias ranged from 2--6\%. Bias of the SCR-move model was $\le2\%$ for the first two scenarios. For the other three scenarios (with $\rho=$0.75, 0.85, and 0.95), bias of the SCR-move model was 5--8\%, although some of this was likely attributable to Monte Carlo error resulting from the small number of simulated datasets. Coverage of 95\% credible intervals was close to 0.95 for the SCR-move model in all scenarios and was better than the SCR0 model (Figure S1). Variance of the estimator was greater for the SCR-move model than for the mis-specified SCR0 model.

\clearpage

\begin{figure}[h!]
  \centering
  \includegraphics[width=0.5\textwidth, trim=0mm 15mm 0mm 15mm, clip]{../R/sims/bias-N.pdf} \\
  \includegraphics[width=0.5\textwidth, trim=0mm 15mm 0mm 15mm, clip]{../R/sims/cover-N.pdf} \\
  \includegraphics[width=0.5\textwidth, trim=0mm 0mm 0mm 15mm, clip]{../R/sims/var-N.pdf}
  \caption{Figure S1. Bias, 95\% CI coverage, and variance of the posterior mode as a point estimator of population size ($N$) under five values of the parameter $\rho$ controlling autocorrelation in movement. The data generating value of abundance was $N=100$. }
  \label{fig:bias}
\end{figure}

\end{document}
The \mf Time-Varying Hydraulic Conductivity (TVK) and Time-Varying Storage (TVS) packages allow hydraulic conductivity, specific storage and specific yield properties of model cells to be varied transiently throughout a simulation. This can be useful for modeling caved rock, void and spoil in mining applications, or for other physical changes to a system that can reasonably be represented by changing material properties. Changes are made on a cell-by-cell basis in TVK and TVS package input files by specifying new values for elements of NPF package arrays K, K22 and K33, and STO package arrays SS and SY. New values may be applied at the start of each stress period, or alternatively interpolated via time series to determine new values at each time step. Changes are only made to those model cells explicitly specified in the TVK and TVS package input files; other cells retain their original NPF and STO values. Additionally, a change may be made to a cell's value for one property independently without affecting other property values at the same cell, e.g. SS may be changed for a cell without affecting SY, if desired. Where a property value change is given by a time series, the value continues to change at each time step until the last entry in the time series is reached. Otherwise, once a cell property value has been changed, it remains at its new value until subsequently changed in the TVK or TVS files for a later period, or until the end of the simulation if no further changes are enacted. By default, when the TVS package is used to change SS or SY values, the \mf storage formulation is modified to integrate these changes such that the head solution correctly reflects changes in pressure due to the corresponding increase or decrease in stored water volume. The modifications are described in the \hyperref[sec:sci-ss]{``Storage Change Integration: Specific Storage''} and \hyperref[sec:sci-sy]{``Storage Change Integration: Specific Yield''} sections below. If this functionality is not desired, storage change integration may be disabled by activating the DISABLE\_STORAGE\_CHANGE\_INTEGRATION option in the TVS package input file. \subsection{Storage Change Integration: Specific Storage} \label{sec:sci-ss} Revisiting the derivation of the revised storage formulation in the \hyperref[ch:sto-mod]{Storage Package Modifications chapter}, changes in specific storage are introduced by first separating equation~\ref{eqn:storage-ss-final} into two separate equations: \begin{equation} \label{eqn:tvs-vss-old} V_{SS}^\told = SC1^\told \, S_F^\told \left( h^\told - BOT - \frac{\Delta z}{2} S_F^\told \right) , \end{equation} \noindent giving the volume of water in compressible storage at time $\told$, and \begin{equation} \label{eqn:tvs-vss-new} V_{SS}^t = SC1^t \, S_F^t \left( h^t - BOT - \frac{\Delta z}{2} S_F^t \right) , \end{equation} \noindent giving the volume of water in compressible storage at time $t$. The volumetric flow rate from compressible storage taking into account changes in specific storage is then \begin{equation} \label{eqn:tvs-qss} \begin{aligned} Q_{SS} = & \frac{V_{SS}^\told - V_{SS}^t}{\Delta t} \\ = & \frac{SC1^\told}{\Delta t} \, S_F^\told \left( h^\told - BOT - \frac{\Delta z}{2} S_F^\told \right) - \frac{SC1^t}{\Delta t} \, S_F^t \left( h^t - BOT - \frac{\Delta z}{2} S_F^t \right) . 
\end{aligned} \end{equation} \subsubsection{Standard Formulation} Following the same process used to arrive at equation~\ref{eqn:STOeq-rev-fd} in the \hyperref[ch:sto-mod]{Storage Package Modifications chapter}, equation~\ref{eqn:tvs-qss} leads to the following additions to the left- and right-hand sides of the discretized groundwater flow equation: \begin{equation} \label{eqn:tvs-Ab-std} \begin{aligned} A_{n,n} \leftarrow & A_{n,n} - \frac{SC1_n^t}{\Delta t} S_{F_n}^\kmo \\ b_n \leftarrow & b_n - \frac{SC1_n^\told}{\Delta t} \, S_{F_n}^\told \left( h_n^\told - BOT_n - \frac{\Delta z_n}{2} S_{F_n}^\told \right) + \frac{SC1_n^t}{\Delta t} \, S_{F_n}^\kmo \left( BOT_n + \frac{\Delta z_n}{2} S_{F_n}^\kmo \right) . \end{aligned} \end{equation} \noindent In the absence of specific storage changes, i.e. for $SC1_n^\told = SC1_n^t = SC1_n$, equation~\ref{eqn:tvs-Ab-std} simplifies to equation~\ref{eqn:STOeq-rev-fd}. \subsubsection{Newton-Raphson Formulation} Evaluating equation~\ref{eqn:tvs-qss} cellwise with subscript ``$n$'' and applying quadratically smoothed cell saturations $S_F^*$ results in \begin{equation} \label{eqn:tvs-qss-n} Q_{SS_n} = \frac{SC1_n^\told}{\Delta t} \, S_{F_n}^\stold \left( h_n^\told - BOT_n - \frac{\Delta z_n}{2} S_{F_n}^\stold \right) - \frac{SC1_n^t}{\Delta t} \left[ S_{F_n}^\st \left( h_n^t - BOT_n \right) + \frac{\Delta z_n}{2} \left( S_{F_n}^\st \right)^2 \right] . \end{equation} \noindent Upon differentiation of equation~\ref{eqn:tvs-qss-n} with respect to $h_n^t$, all terms involving $SC1_n^\told$ disappear. The result is equivalent to equation~\ref{eqn:STOeq-rev-derv-simp} with $SC1_n = SC1_n^t$: \begin{equation} \label{eqn:tvs-qss-nr-deriv} \frac{\partial Q_{SS_n}}{\partial h_n} = -\frac{SC1_n^t}{\Delta t} S_{F_n}^\st - \frac{SC1_n^t}{\Delta t} \frac{\partial S_{F_n}^\st}{\partial h_n} \left( h_n^t - BOT_n \right) + \frac{SC1_n^t}{\Delta t} \Delta z_n S_{F_n}^\st \frac{\partial S_{F_n}^\st}{\partial h_n} . \end{equation} \noindent where the superscript ``$t$'' has been omitted from $h_n^t$ in the derivatives for clarity. Replacement of $h_n^t$ and $S_{F_n}^\st$ by their previous iterates, $h_n^\kmo$ and $S_{F_n}^\skmo$, in equations~\ref{eqn:tvs-qss-n} and~\ref{eqn:tvs-qss-nr-deriv} and substitution of those equations into equation~\ref{eqn:STOeq-nr} yields the following contributions to $A_{n,n}$ and $b_n$: \begin{equation} \label{eqn:tvs-Ab-nr} \begin{aligned} A_{n,n} \leftarrow & A_{n,n} + \biggl[ - \frac{SC1_n^t}{\Delta t} S_{F_n}^\skmo - \frac{SC1_n^t}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} \left( h_n^\kmo - BOT_n \right) + \frac{SC1_n^t}{\Delta t} \Delta z_n S_{F_n}^\skmo \frac{\partial S_{F_n}^\skmo}{\partial h_n} \biggr] \\ b_n \leftarrow & b_n - \frac{SC1_n^\told}{\Delta t} \, S_{F_n}^\told \left( h_n^\told - BOT_n - \frac{\Delta z_n}{2} S_{F_n}^\told \right) + \frac{SC1_n^t}{\Delta t} \, S_{F_n}^\skmo \left( BOT_n + \frac{\Delta z_n}{2} S_{F_n}^\skmo \right) \\ & \phantom{b_n} + \biggl[ - \frac{SC1_n^t}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} \left( h_n^\kmo - BOT_n \right) + \frac{SC1_n^t}{\Delta t} \Delta z_n S_{F_n}^\skmo \frac{\partial S_{F_n}^\skmo}{\partial h_n} \biggr] h_n^\kmo . \end{aligned} \end{equation} \noindent In the absence of storage changes ($SC1_n^\told = SC1_n^t = SC1_n$), equation~\ref{eqn:tvs-Ab-nr} simplifies to equation~\ref{eqn:STOeq-rev-nr-simp}. 
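Before turning to specific yield, a purely illustrative numerical sketch is given below. It is not part of the MODFLOW 6 source code or input processing; the cell geometry, the heads, and the assumption that $SC1 = S_s \, A \, \Delta z$ are hypothetical values chosen only to make the superscripts in equations~\ref{eqn:tvs-vss-old}, \ref{eqn:tvs-vss-new}, and~\ref{eqn:tvs-qss} concrete for a single cell whose specific storage is changed by the TVS package during a time step.
\begin{verbatim}
# Illustrative only (not MODFLOW 6 code): evaluate V_SS at told and t, and the
# resulting Q_SS, for one cell whose specific storage is changed by TVS.
# SC1 is assumed here to be Ss * A * dz; all numbers are hypothetical.
area = 100.0              # horizontal cell area (m^2)
top, bot = 10.0, 0.0      # cell top and bottom elevations (m)
dz = top - bot            # cell thickness (m)
dt = 1.0                  # time-step length (d)

def saturation(head):
    """Cell saturation S_F, clipped to [0, 1]."""
    return min(max((head - bot) / dz, 0.0), 1.0)

def v_ss(ss, head):
    """V_SS = SC1 * S_F * (h - BOT - dz/2 * S_F)."""
    sc1 = ss * area * dz
    sf = saturation(head)
    return sc1 * sf * (head - bot - 0.5 * dz * sf)

ss_old, ss_new = 1.0e-5, 5.0e-5   # specific storage before/after the TVS change (1/m)
h_old, h_new = 8.0, 7.5           # heads at times told and t (m)

# Q_SS = (V_SS^told - V_SS^t) / dt; positive values are released from storage
q_ss = (v_ss(ss_old, h_old) - v_ss(ss_new, h_new)) / dt
print("Q_SS =", q_ss, "m^3/d")
\end{verbatim}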
\subsection{Storage Change Integration: Specific Yield} \label{sec:sci-sy} For constant specific yield, \mf calculates the specific yield contribution to groundwater flow \citep[eq. 5--10]{modflow6gwf} as \begin{equation} \label{eqn:tvs-qsy-original} Q_{Sy_n} = \frac{SC2_n \, \Delta z_n}{\Delta t} \left( S_{F_n}^\told - S_{F_n}^t \right) , \end{equation} \noindent where $Q_{Sy_n}$ is the volumetric flow rate from specific yield ($L^3/T$) and $SC2_n = Sy_n \cdot A_n$ is the secondary storage capacity for cell $n$ with specific yield $Sy_n$ and horizontal cell area $A_n$. When specific yield changes transiently, the secondary storage capacity term is expressed in terms of its new value $SC2_n^t$ and its old value $SC2_n^\told$, resulting in \begin{equation} \label{eqn:tvs-qsy-new} Q_{Sy_n} = \frac{SC2_n^\told \, \Delta z_n}{\Delta t} \, S_{F_n}^\told - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \, S_{F_n}^t . \end{equation} \subsubsection{Standard Formulation} Rearranging equation~\ref{eqn:tvs-qsy-new} for solution at the current iteration $k$ in terms of $h_n^k$ instead of saturation $S_{F_n}^t$ gives \begin{equation} \label{eqn:tvs-qsy-new-k} Q_{Sy_n}^k = \frac{SC2_n^\told \, \Delta z_n}{\Delta t} \, S_{F_n}^\told - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{h_n^k - BOT_n}{\Delta z_n} , \end{equation} \noindent which results in the following contributions to $A_{n,n}$ and $b_n$: \begin{equation} \label{eqn:tvs-sy-Ab-std} \begin{aligned} A_{n,n} \leftarrow & A_{n,n} - \frac{SC2_n^t}{\Delta t} \\ b_n \leftarrow & b_n - \frac{SC2_n^\told}{\Delta t} \, \Delta z_n \, S_{F_n}^\told - \frac{SC2_n^t}{\Delta t} \, BOT_n . \end{aligned} \end{equation} \noindent As in the base formulation \citep[Chapter 5]{modflow6gwf}, for cells where the head at the end of the time step is at or above the top of the cell, $S_{F_n}^t = 1$ and the specific yield contribution is known. In these cases, no terms are added to $A_{n,n}$ and the right-hand side contribution instead becomes \begin{equation} \label{eqn:tvs-sy-b-fullsat} b_n \leftarrow b_n - \frac{SC2_n^\told}{\Delta t} \, \Delta z_n \, S_{F_n}^\told + \frac{SC2_n^t}{\Delta t} \, \Delta z_n . \end{equation} \subsubsection{Newton-Raphson Formulation} As all $SC2_n^\told$ terms are eliminated by differentiation, the derivative of equation~\ref{eqn:tvs-qsy-new} at iteration $k$, and with quadratically smoothed cell saturations $S_F^*$ applied, is equivalent to that of the base formulation \citep[eq. 5--14]{modflow6gwf} with $SC2_n = SC2_n^t$: \begin{equation} \label{eqn:tvs-qsy-nr-deriv} \frac{\partial Q_{Sy_n}}{\partial h_n} = - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} . \end{equation} \noindent The fully implicit Newton-Raphson formulation for specific yield storage contribution in cell $n$ is \begin{equation} \label{eqn:tvs-sy-nr} \frac{\partial Q_{Sy_n}}{\partial h_n} h_n^k = -Q_{Sy_n}^k + \frac{\partial Q_{Sy_n}}{\partial h_n} h_n^\kmo . 
\end{equation} \noindent Substitution of equations~\ref{eqn:tvs-qsy-new} and~\ref{eqn:tvs-qsy-nr-deriv} into equation~\ref{eqn:tvs-sy-nr} results in the following general expression of the Newton-Raphson formulation for the contribution of specific yield storage to cell $n$: \begin{equation} \label{eqn:tvs-sy-nr-expanded} \begin{aligned} \biggl[ - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} \biggr] h_n^k = & - \biggl[ \frac{SC2_n^\told \, \Delta z_n}{\Delta t} \, S_{F_n}^\stold - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \, S_{F_n}^\skmo \biggr] \\ & + \biggl[ - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} \biggr] h_n^\kmo , \end{aligned} \end{equation} \noindent which yields the following contributions to $A_{n,n}$ and $b_n$: \begin{equation} \label{eqn:tvs-sy-Ab-nr} \begin{aligned} A_{n,n} \leftarrow & A_{n,n} - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} \\ b_n \leftarrow & b_n - \frac{SC2_n^\told \, \Delta z_n}{\Delta t} \, S_{F_n}^\stold + \frac{SC2_n^t \, \Delta z_n}{\Delta t} \, S_{F_n}^\skmo - \frac{SC2_n^t \, \Delta z_n}{\Delta t} \frac{\partial S_{F_n}^\skmo}{\partial h_n} h_n^\kmo . \end{aligned} \end{equation} \noindent For cells where the head at the end of the time step is at or above the top of the cell, the derivative is zero. In these cases, no terms are added to $A_{n,n}$ and the right-hand side contribution reverts to the standard formulation in equation~\ref{eqn:tvs-sy-b-fullsat}.
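A parallel illustrative sketch for specific yield is given below. As before, it is not part of the MODFLOW 6 source code; it simply evaluates equation~\ref{eqn:tvs-qsy-new}, with $SC2_n = Sy_n \, A_n$ as defined above, for hypothetical values of a cell whose specific yield is changed by the TVS package.
\begin{verbatim}
# Illustrative only (not MODFLOW 6 code): evaluate Q_Sy for one cell whose
# specific yield is changed by TVS.  SC2 = Sy * A; all numbers are hypothetical.
area = 100.0
top, bot = 10.0, 0.0
dz, dt = top - bot, 1.0

def saturation(head):
    return min(max((head - bot) / dz, 0.0), 1.0)

sy_old, sy_new = 0.10, 0.25    # specific yield before/after the TVS change (-)
h_old, h_new = 8.0, 7.5        # heads at times told and t (m)

sc2_old, sc2_new = sy_old * area, sy_new * area
# Q_Sy = (SC2^told * dz * S_F^told - SC2^t * dz * S_F^t) / dt
q_sy = (sc2_old * dz * saturation(h_old) - sc2_new * dz * saturation(h_new)) / dt
print("Q_Sy =", q_sy, "m^3/d")
\end{verbatim}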
\chapter{Algorithms} \label{content:algorithms} There are four common packages, \pkg{algorithmic}, \pkg{algorithm2e}, \pkg{algorithmicx}, and \pkg{program}, for typesetting algorithms in the form of pseudocode. They provide stylistic enhancements over a uniform style (i.e., all in typewriter font) so that constructs such as loops or conditional statements are visually separated from other text. In this chapter, we introduce \pkg{algorithm2e}, which is loaded by \elsatoolbox{}. For the other packages, please check out the \LaTeX{} Wikibooks\footnote{\url{https://en.wikibooks.org/wiki/LaTeX/Algorithms}}. The \pkg{algorithm2e} package provides a floating \env{algorithm} environment for wrapping pseudocode, together with commands for writing it. Please note that this package is not compatible with the following packages: \pkg{algorithm}, \pkg{algorithmic}, and \pkg{algpseudocode}. In order to disable \pkg{algorithm2e}, add the option \opt{noalgo} to \elsatoolbox{} as follows: \begin{center} \verb|\usepackage[noalgo]{elsatoolbox}|. \end{center} \section*{Usage} Typically, each statement of an algorithm should be ended with \verb|\;|. In the following paragraphs, we list the commonly used commands provided by \pkg{algorithm2e}. For more advanced usage, please check out the documentation on CTAN\footnote{\url{https://www.ctan.org/pkg/algorithm2e}}. \paragraph{Customization} \begin{tabular}{ll} \verb|\DontPrintSemicolon| & \verb|\SetAlgoCaptionSeparator[s]{<sep>}| \end{tabular} \paragraph{Input, Output, Basic Keywords} \begin{tabular}{lll} \verb|\KwIn{<input>}| & \verb|\KwData{<input>}| & \verb|\KwTo| \\ \verb|\KwOut{<output>}| & \verb|\KwResult{<output>}| & \verb|\KwRet{<value>}| \end{tabular} \paragraph{Control Flow} \begin{tabular}{ll} \verb|\If{<condition>}{<block>}| & \verb|\For{<condition>}{<loop>}| \\ \verb|\ElseIf{<condition>}{<block>}| & \verb|\While{<condition>}{<loop>}| \\ \verb|\Else{<block>}| & \verb|\ForEach{<condition>}{<loop>}| \\ \verb|\Switch{<condition>}{<block>}| & \verb|\ForAll{<condition>}{<loop>}| \\ \verb|\Case{<case>}{<block>}| & \verb|\Repeat{<condition>}{<loop>}| \end{tabular} \section*{Example} The following example creates Algorithm~\ref{algo:howto}. \bigskip \lstinputlisting{algorithms/howto.tex} \bigskip \input{algorithms/howto.tex}
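Since \texttt{algorithms/howto.tex} is included from an external file, a second, self-contained sketch is shown below for readers browsing this chapter on its own. It is illustrative only -- it is not the contents of that file -- and it uses only the commands listed above.

\begin{lstlisting}
\begin{algorithm}
  \caption{Linear search}
  \KwIn{An array $A$ of length $n$ and a target value $x$}
  \KwOut{An index $i$ with $A_i = x$, or $-1$ if no such index exists}
  \For{$i \leftarrow 1$ \KwTo $n$}{
    \If{$A_i = x$}{
      \KwRet{$i$}\;
    }
  }
  \KwRet{$-1$}\;
\end{algorithm}
\end{lstlisting}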
\vspace*{0.5in} \singlespacing \begin{center} \section{Conclusions} \label{sec:conc} \end{center} \doublespacing
\subsection{Difference from the Method of Moments (MOM)}

In GMM there are more moment conditions than parameters, so in general the conditions cannot all be satisfied exactly in the sample; instead we minimise a weighted distance of the sample moments from zero.

\subsection{Generalised Method of Moments (GMM)}

We have a function of the data and a parameter: \(g(y, \theta )\)

A moment condition is that the expectation of such a function is \(0\).

\(m(\theta )=E[g(y, \theta )]=0\)

To do GMM, we estimate this using the sample analogue:

\(\hat m(\theta )=\dfrac{1}{n}\sum_ig(y_i, \theta )\)

We define:

\(\Omega = E[g(y, \theta )g(y, \theta)^T]\)

\(G=E[\nabla_\theta g(y, \theta)]\)

And then minimise the norm:

\(||\hat m(\theta )||^2_W=\hat m(\theta )^TW\hat m(\theta )\)

Where \(W\) is a positive definite matrix defining the norm. The choice \(W=\Omega ^{-1}\) is most efficient, but we do not know \(\Omega \): it depends on \(\theta \). If the observations are IID we can estimate it by:

\(\hat W(\hat \theta )= \left(\dfrac{1}{n}\sum_i g(y_i, \hat \theta)g(y_i, \hat \theta)^T\right)^{-1}\)

\subsection{Two-step feasible GMM}

In the first step, estimate \(\theta \) using \(\mathbf W=\mathbf I\). This is consistent, but not efficient. In the second step, use the first-step estimate to form \(\hat W(\hat \theta )\) and re-estimate \(\theta \).

\subsection{Moment conditions}

OLS: \(E[x(y-x\theta)]=0\)

WLS: \(E[x(y-x\theta)/\sigma^2(x)]=0\)

IV: \(E[z(y-x\theta)]=0\)

MLE: \(E[\nabla_\theta \ln f(x, \theta)]=0\)

\subsection{New GMM}

\(m(\theta_0)=E[g(\mathbf x_i, \theta_0)]\)

We replace this with the sample moment

\(\hat m(\theta)=\frac{1}{n}\sum_ig(\mathbf x_i, \theta)\)

We have the ``score'' \(\nabla_\theta g(\mathbf x_i, \theta_0)\), the ``information'' \(G=E[\nabla_\theta g(\mathbf x_i, \theta_0)]\) and the variance-covariance matrix of the moments \(\Omega =E[g(\mathbf x_i, \theta_0)g(\mathbf x_i, \theta_0)^T]\).

We want to minimise the moment loss

\(||\hat m(\theta)||^2_W=\hat m(\theta )^TW\hat m(\theta)\)

\(\hat \theta = \operatorname{arg\,min}_\theta \left(\frac{1}{n}\sum_ig(\mathbf x_i, \theta)\right)^T\hat W\left(\frac{1}{n}\sum_ig(\mathbf x_i, \theta)\right)\)

\subsection{Asymptotics}

By the CLT the estimator is asymptotically normal, and it is consistent provided the moment condition is correctly specified. There is an explicit formula for the asymptotic variance:

\(\sqrt n (\hat \theta -\theta_0)\rightarrow^d N[0, (G^TWG)^{-1}G^TW\Omega W^TG(G^TW^TG)^{-1}]\)

If we choose \(W\propto \Omega^{-1}\) then:

\(\sqrt n (\hat \theta -\theta_0)\rightarrow^d N[0, (G^T\Omega^{-1} G)^{-1}]\)

Problem: we need to estimate \(\Omega \) and \(G\). \(\Omega \) can be estimated from the sample (for example using residuals from a preliminary OLS fit); this lets us choose the efficient estimator, but the asymptotic variance must still be estimated as well. This is where robust, HAC and clustered standard errors come from.

If the model is just identified -- that is, the number of moment conditions equals the number of parameters -- then \(W\) does not matter. This is the ordinary Method of Moments.

\subsection{Iterated GMM}

\subsection{Moment-covariance matrix}

\subsection{Bias and variance of the GMM estimator}

(Notes: the cluster assumption should arguably be part of the moment condition, and enters the later calculation of the weighting matrix. Robust, HAC and clustering corrections can all be done as part of GMM too.)
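As a concrete check of the just-identified case, take the OLS moment condition above, writing the regressor vector for observation \(i\) as \(\mathbf x_i\) and the outcome as \(y_i\), so that the moment function is \(\mathbf x_i(y_i-\mathbf x_i^T\theta)\). Since the number of conditions equals the number of parameters, the sample moment can be set exactly to zero and the weighting matrix drops out:

\(\hat m(\theta)=\frac{1}{n}\sum_i \mathbf x_i(y_i-\mathbf x_i^T\theta)=0 \quad\Rightarrow\quad \hat\theta=\left(\sum_i \mathbf x_i\mathbf x_i^T\right)^{-1}\sum_i \mathbf x_i y_i\)

which is the usual OLS estimator, whatever \(W\) is.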
{ "alphanum_fraction": 0.6687943262, "avg_line_length": 21.5267175573, "ext": "tex", "hexsha": "8adf9b4e334ce4339257edbf8876b2f411e88154", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/GMM/01-01-GMM.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/GMM/01-01-GMM.tex", "max_line_length": 143, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/GMM/01-01-GMM.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 916, "size": 2820 }
\documentclass[a4paper]{jpconf}
\bibliographystyle{iopart-num}
\usepackage{amsmath}
\usepackage{citesort}
\usepackage{subfigure}
\usepackage{graphicx}
\graphicspath{{fig/}}
\usepackage{ifpdf}
\ifpdf\usepackage{epstopdf}\fi
\usepackage[export]{adjustbox}
%-----------------------------------------------------
%\usepackage{soul,ulem,color,xspace,bm}
% Suggest to remove
%\newcommand{\asrm}[1]{{\color{magenta}\sout{#1}}}
% Suggest to insert
%\newcommand{\as}[1]{\color{cyan}#1\xspace\color{black}}
% Suggest to replace
%\newcommand{\asrp}[2]{\asrm{#1} \as{#2}}
% Comment
%\newcommand{\ascm}[1]{{\color{green}\;AS: #1}}
%------------------------------------------------------
\def\apj{ApJ}
\def\mnras{MNRAS}
\def\nat{Nat}
\def\prd{Phys. Rev. D}
\def\araa{ARA\&A} % "Ann. Rev. Astron. Astrophys."
\def\aap{A\&A} % "Astron. Astrophys."
\def\aaps{A\&AS} % "Astron. Astrophys. Suppl. Ser."
\def\aj{AJ} % "Astron. J."
\def\apjs{ApJS} % "Astrophys. J. Suppl. Ser."
\def\pasp{PASP} % "Publ. Astron. Soc. Pac."
\def\apjl{ApJ} % letter at ApJ
\def\pasj{PASJ}
\def\apss{Astroph. Space Sci.}
\def\aplett{Astroph. Lett}
\def\ssr{Space Sci. Rev.}
\def\aapr{Astron. Astroph. Reviews}
\def\physrep{Phys. Reports}
\def\memsai{Mem. Societa Astronom. Italiana}
\def\jgr{JGR}
\begin{document}
\title{Modelling of electron acceleration in relativistic supernovae}
\author{V I Romansky$^{1}$, A M Bykov$^{1,2}$ and S M Osipov$^1$}
\address{$^1$ Ioffe Institute, 26 Politekhnicheskaya st., St. Petersburg 194021, Russia}
\address{$^2$ Peter the Great St. Petersburg Polytechnic University, 29 Politekhnicheskaya st., St. Petersburg 195251, Russia}
\ead{[email protected]}
\begin{abstract}
Radio and X-ray observations revealed a rare but very interesting class of supernovae (SNe) with a sizeable fraction of the kinetic energy of the ejecta moving at trans-relativistic speed. These relativistic SNe comprise a population of objects intermediate between the numerous core collapse SNe expanding with non-relativistic velocities and the gamma-ray bursts with highly relativistic ejecta. An interpretation of the observed non-thermal emission from relativistic SNe requires a model of electron acceleration in trans-relativistic shocks. In this paper we present numerical Particle-in-Cell (PIC) simulations of electron spectra in trans-relativistic shock waves propagating in the clumped stellar winds of the SN progenitors. It is shown here that the presence of background magnetic fluctuations has a drastic effect on the electron acceleration by trans-relativistic shocks propagating transverse to the regular magnetic field in the clumped wind of a massive progenitor star.
\end{abstract}
\section{Introduction}
Shock waves of young galactic supernova remnants (SNRs) have long been known as efficient electron accelerators producing the synchrotron radiation observed from the radio to X-rays \cite{GS64,Helder12}. While most SNe, both thermonuclear and core collapse, eject most of their kinetic energy in a shell moving with typical speeds below 10,000 km/s, there are a few SNe where a sizeable part of the kinetic energy was apparently in the form of a trans-relativistic outflow with 4-speed $\gamma \beta \sim$ 1.5. The extragalactic supernova SN2009bb \cite{2010Natur.463..513S} apparently represents this class of relativistic SNe.
These SNe are of special interest since they may provide a connection between the bulk population of supernovae and the gamma-ray bursts, and shed light on the nature of the central engine producing the relativistic outflows \cite{Margutti2014,2016ApJ...832..108M}. To model the observed non-thermal radiation from SNe \cite{1998ApJ...499..810C}, knowledge of the relativistic electron distribution is needed. Moreover, relativistic SNe were proposed as possible sources of very high energy CRs \cite{2007PhRvD..76h3009W,2011NatCo...2E.175C,2008ApJ...673..928B,2013ApJ...776...46E,BEMO18}.

Diffusive shock acceleration \cite{Bell78}, \cite{Blandford78} in SNRs is considered the most likely mechanism of cosmic ray production in a wide range of energies below EeV \cite{BEMO18}. The acceleration process at the shock front is a nonlinear problem and its efficiency depends on a number of factors. In this work we study trans-relativistic shocks because they are probably the most effective particle accelerators. The energy that a particle gains when it crosses the shock front increases with the shock velocity, but in ultra-relativistic shocks only a small fraction of particles can return from the downstream and the efficiency is relatively low. In trans-relativistic shocks, with Lorentz factor about 1.5, the energy gained in each crossing of the front is large, yet the accelerated particles can still return from the downstream and cross the shock several times. This was demonstrated in the case of quasi-parallel trans-relativistic shocks in Monte Carlo simulations \cite{2013ApJ...776...46E,BEMO18}. However, the regular magnetic fields in the winds of rotating stars at the distances of interest are usually dominated by the azimuthal field component. Therefore, the SN shock waves propagating through the progenitor star winds are expected to be quasi-perpendicular. Quasi-perpendicular shock waves are found to be weak sources of accelerated particles in PIC simulations of both relativistic \cite{Sironi2011} and trans-relativistic \cite{Romansky18,Crumley2019} cases. In this paper we study the effect of the presence of background magnetic field fluctuations in the clumped stellar wind on the electron acceleration by a trans-relativistic shock propagating transversally to the regular magnetic field.
\section{Numerical setup}
We simulated the structure of a quasi-perpendicular trans-relativistic shock propagating in a clumped (turbulent) wind and derived the particle distribution. The simulation is two dimensional with three dimensional velocities and fields. For the numerical simulation we used the Tristan-mp code with the explicit numerical scheme developed by Buneman \cite{Buneman93} and Spitkovsky \cite{Spitkovsky2005}. The shock wave is initialized by an electron-proton plasma which flows into the simulation region through the right boundary and then reflects from the super-conducting wall at the left boundary. We use the following parameters for the setups: the initial flow Lorentz factor $\gamma = 1.5$ and magnetization $\sigma = \frac{B^2}{4\pi\gamma (n_p m_p + n_e m_e) c^2} = 0.04$ (in the turbulent case $B^2$ is the mean square field). The dimensionless thermal energy $\Delta \gamma = \frac{k T}{m_p c^2}$ is equal to $10^{-4}$ and the proton mass is reduced to $m_p = 25 m_e$. The size of the simulation region along the x axis is $L_x = 8000\frac{c}{\omega_p}$ and in the transverse direction $L_y = 200\frac{c}{\omega_p}$, where $\omega_p$ is the plasma frequency $\omega_p = \sqrt{\frac{4\pi q^2 n}{\gamma m_e}}$.
These sizes correspond to $80000$ and $2000$ grid points respectively. Also, $2000$ grid points correspond to approximately $10$ gyroradii of the upstream protons. We initialize the turbulent field model by the following equation:
\begin{equation}\label{field}
B_{turb} (\textbf{r}) = \sum_{\textbf{k}}\sum_{p=1,2}B(\textbf{k}) \textbf{e}_{p} sin(\textbf{k}\textbf{r} + \phi (\textbf{k},p))
\end{equation}
where $B(k_x(i),k_y(j))$ is the amplitude of a turbulent mode. We usually choose $B \propto k^{-11/6}$ to imitate Kolmogorov's spectrum, for which $B^2 k^2 \propto k^{-5/3}$, and $B$ is normalized to a fixed fraction $\eta$ of the total magnetic energy. In further research with a wider spectrum of turbulence, its behaviour may be important. $\textbf{e}_{p}$ are vectors perpendicular to the wavevector $\textbf{k}$ corresponding to two different field polarizations and $\phi (i,j,p)$ is a randomly generated phase for each mode. The wavevectors $\textbf{k}$ are distributed on a uniform grid, $k_x = i \Delta k, k_y = j \Delta k$, with $\Delta k \approx \frac{2 \pi}{10 r_g}$, where $r_g$ is the upstream proton gyroradius. The maximum wavevector is $k_{max} \approx \frac{2 \pi}{r_g}$. The field (\ref{field}) is evaluated in the plasma rest frame and then transformed to the downstream frame. The regular magnetic field is perpendicular to the plane of the simulation. Such a turbulent field structure is similar to the so-called 2d-turbulence which is observed in the solar wind \cite{Matthaeus1990}. The density and velocity of the plasma flow are uniform and the electric field is set to compensate the Lorentz force at every point. In this paper we report the results for a few setups with different magnetic turbulence scales, and we also vary the fraction of the energy density in the magnetic fluctuations to study how these parameters influence the accelerated particle spectra.
\section{Results}
The PIC simulations presented here reveal that both the shock structure and its propagation speed depend on the properties of the turbulence in the upstream flow. In Figures \ref{regularB}, \ref{turbulentB} one can see the ratio of the magnetic field in the shock wave to the upstream regular field. In the case of strong turbulence in the shock upstream, strong turbulence also exists in the downstream. The shock front is apparently corrugated (cf.\ \cite{2016ApJ...827...44L}) and moves somewhat slower than in the regular magnetic field.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/regular_field.eps}
\caption{Magnetic field in shock wave without turbulence.}
\label{regularB}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/turbulent_field.eps}
\caption{Magnetic field in shock wave with 90\% turbulence.}
\label{turbulentB}
\end{figure}
The spectrum of accelerated electrons shown in Figure \ref{spectrum} demonstrates a strong dependence on the fraction of the turbulent magnetic field energy density in the wind. When half of the magnetic field energy density is in the magnetic turbulence, the particle spectrum becomes significantly different from the spectrum in a homogeneous field. The fraction of accelerated electrons increases strongly with the growth of the turbulent energy fraction.
The quantity $\epsilon_e$, which shows how much of the upstream flow kinetic energy (or the ram pressure) goes into the accelerated electrons, can be defined as in \cite{Crumley2019}
\begin{equation}
\epsilon_e = \frac{\int_{p_{inj}}^{\infty}E(p)F(p)dp}{\int_{0}^{\infty}E(p)F(p)dp }\frac{m_e(\langle \gamma \rangle - 1)}{m_p (\gamma_0 - 1)}
\end{equation}
where $F(p)$ is the electron distribution function, $E(p)$ is the energy of a particle of momentum $p$ and $\langle \gamma \rangle$ is the averaged Lorentz factor. $p_{inj}$ is the injection momentum above which a particle is considered non-thermal; we take it equal to $2 \gamma_0 \beta_0 m_p c$. For the simulation with the strongest turbulence, $\epsilon_e$ is equal to $2\cdot10^{-3}$, which is several times higher than that in a quasi-parallel shock wave.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/spectrum.eps}
\caption{Distribution of electrons in the relativistic shock wave with different turbulence fractions.}
\label{spectrum}
\end{figure}
The spectrum of accelerated electrons also depends on the scales of the magnetic turbulence. We tested a few setups with different maximal wavelengths of the magnetic turbulent modes, corresponding to the minimal wavenumbers $k_{min} = 2 \pi / 5 r_g$, $2 \pi / 10 r_g$ and $ 2 \pi /20 r_g$. As shown in Figure \ref{spectrum_length}, the acceleration efficiency is somewhat higher for larger scales of fluctuations, but this dependence is weak and the spectrum changes are not very significant.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/spectrum_length.eps}
\caption{Distribution of electrons in the relativistic shock wave with different turbulence lengths; the turbulence energy fraction is 90\%.}
\label{spectrum_length}
\end{figure}
We also tested a setup with a larger transverse size of the simulation region, corresponding to approximately $40$ gyroradii of the upstream protons, with a magnetic turbulence energy fraction of 90\%. The spectrum of the accelerated electrons shows no qualitative differences from the setup with a smaller transverse size, since the non-thermal part of the distribution function survives. We also studied the effect of the proton to electron mass ratio with a setup with $\frac{m_p}{m_e} = 50$. The spectrum obtained has a similar dependence on the turbulence energy fraction and scale. As shown in Figure \ref{electrons_mass}, the electron spectrum in the higher mass ratio setup is simply shifted to higher Lorentz factors, while the fraction of the energy in accelerated electrons $\epsilon_e = 2.5\cdot10^{-3}$ is almost the same as in the case with $\frac{m_p}{m_e} = 25$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/electrons_mass.eps}
\caption{Distribution of electrons in the relativistic shock wave with different mass ratios; the turbulence energy fraction is 90\%.}
\label{electrons_mass}
\end{figure}
\section{Conclusions}
In this paper we presented the results of particle-in-cell simulations of electron acceleration in a trans-relativistic shock propagating in the clumped wind of a massive star. The simulations show that the presence of background magnetic turbulence in the clumped wind has a strong impact on particle acceleration, and the acceleration is efficient for a high enough level of magnetic turbulence. The electron acceleration is strongly suppressed in the case of a trans-relativistic shock propagating quasi-perpendicular to a regular magnetic field in a wind without background magnetic turbulence.
With the PIC simulations we demonstrated here that the presence of magnetic fluctuations in the background wind results in efficient electron acceleration by quasi-perpendicular trans-relativistic shocks. We find that the scales and amplitudes of the turbulent magnetic field are important for this process. Further PIC simulations with a wider dynamical range will allow us to study the effect of the turbulent wind on particle acceleration in detail.
\ack The authors acknowledge support from RSF grant 16-12-10225.
%\newpage
\section*{References}
\bibliographystyle{iopart-num}
\bibliography{bibliogr}
\end{document}
{ "alphanum_fraction": 0.770638415, "avg_line_length": 96.9066666667, "ext": "tex", "hexsha": "05c89a541160cebcaa78529e1c3f7f6c89145677", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-06-26T06:43:58.000Z", "max_forks_repo_forks_event_min_datetime": "2016-05-16T01:45:16.000Z", "max_forks_repo_head_hexsha": "c457067e0f24eaebe2e4e1d6272dc9ceae10a42a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "VadimRomansky/PIC-", "max_forks_repo_path": "papers/Physica2019/Romansky_Physica2019.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c457067e0f24eaebe2e4e1d6272dc9ceae10a42a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "VadimRomansky/PIC-", "max_issues_repo_path": "papers/Physica2019/Romansky_Physica2019.tex", "max_line_length": 1920, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c457067e0f24eaebe2e4e1d6272dc9ceae10a42a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "VadimRomansky/PIC-", "max_stars_repo_path": "papers/Physica2019/Romansky_Physica2019.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-17T05:13:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-16T01:41:35.000Z", "num_tokens": 3703, "size": 14536 }
\documentclass[12pt]{article} \usepackage{setspace} \usepackage{graphicx, color, fancyhdr, tikz-cd, enumitem, framed, adjustbox, bbm, upgreek, xcolor, manfnt} \usepackage[framed,thmmarks]{ntheorem} \usepackage[framemethod=tikz]{mdframed} \usepackage{hyperref} \hypersetup{ colorlinks = true, linkcolor = [rgb]{0,0,0.5}, citecolor = [rgb]{0.6,0,0}, urlcolor = [rgb]{0,0,0.5} } \usepackage[style=alphabetic, bibencoding=utf8]{biblatex} %Set the bibliography file \bibliography{sources} %lots of font stuff \usepackage[T1]{fontenc} \usepackage[urw-garamond]{mathdesign} \usepackage{garamondx} \let\mathcal\undefined \newcommand{\mathcal}[1]{\text{\usefont{OMS}{cmsy}{m}{n}#1}} %Document-Specific includes \usepackage{ytableau} \usepackage{mathtools} \usepackage{scalerel} %Replacement for the old geometry package \usepackage{fullpage} \usepackage{amsmath} %Input my definitions \input{./mydefs.tex} %Shade definitions \theoremindent0cm \theoremheaderfont{\normalfont\bfseries} \def\theoremframecommand{\colorbox[rgb]{0.9,1,.8}} \newshadedtheorem{defn}[thm]{Definition} %Set apart my theorems and lemmas and such \surroundwithmdframed[outerlinewidth=0.4pt, innerlinewidth=0pt, middlelinewidth=1pt, middlelinecolor=white, topline=false,bottomline=false,rightline=false,leftmargin=2em]{thm} \surroundwithmdframed[outerlinewidth=0.4pt, innerlinewidth=0pt, middlelinewidth=1pt, middlelinecolor=white, topline=false,bottomline=false,rightline=false,leftmargin=2em]{lem} \surroundwithmdframed[outerlinewidth=0.4pt, innerlinewidth=0pt, middlelinewidth=1pt, middlelinecolor=white, topline=false,bottomline=false,rightline=false,leftmargin=2em]{cor} \surroundwithmdframed[outerlinewidth=0.4pt, innerlinewidth=0pt, middlelinewidth=1pt, middlelinecolor=white, topline=false,bottomline=false,rightline=false,leftmargin=2em]{prop} \surroundwithmdframed[outerlinewidth=0.4pt, innerlinewidth=0, middlelinewidth=1pt, middlelinecolor=white, topline=false,bottomline=false,rightline=false,leftmargin=2em]{rmk} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%% Customize Below %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %header stuff \setlength{\headsep}{24pt} % space between header and text \pagestyle{fancy} % set pagestyle for document \lhead{General Exam Paper} % put text in header (left side) \rhead{Nico Courts} % put text in header (right side) \cfoot{\itshape p. \thepage} \setlength{\headheight}{15pt} %\allowdisplaybreaks % Document-Specific Macros \newcommand*{\ttc}{{\large $\triangle$}\kern-0.86em\raisebox{0.3ex}{$\scaleobj{0.78}\otimes$}\hspace{1ex}} \DeclareMathOperator{\Spc}{Spc} \DeclareMathOperator{\Pol}{Pol} \begin{document} %make the title page \title{Schur Duality and Strict Polynomial Functors\\\vspace{1ex} \normalsize General Exam Paper} \author{Nico Courts\footnote{University of Washington, Seattle. Email: [email protected]}} \date{Exam Presentation: March 10th, 2020, 10:15am, DEN 213} \maketitle \begin{abstract} We begin by going through a considerable amount of domain knowledge concerning representations of $\GL_n$, representations of $\frakS_n$, tracing the development from the classical study of group representations by Schur and Weyl and the transformation of this theory in the more robust language of affine group schemes. 
From there, the story takes on a more categorical flavor as we discuss different manifestations of polynomial representations of $\GL_n$, following the work of Friedlander and Suslin as well as Krause, Aquilino, and Reischuk. In the latter case, we show how they determined that the Schur-Weyl functor is monoidal, opening up the theory to the machinery of monoidal categories. We take some time to develop the theory of tensor triangulated geometry from Balmer as well as discuss some standard constructions necessary for the theory. We end our paper by talking about how all of this work comes together to elucidate some problems at the boundaries of modern representation theory as well as techniques with which we can solve them.\vspace{0.5in} {\begin{center} \footnotesize The most up-to-date version of this paper can be downloaded at the following link:\\ \url{https://github.com/NicoCourts/General-Exam-Paper/raw/master/General-Paper.pdf} \end{center}} \end{abstract} \newpage
%\setcounter{tocdepth}{3}
\tableofcontents \newpage \section{Introduction} \subsection{Issai Schur and polynomial representations} The story of this project (more-or-less) begins with Schur's doctoral thesis \cite{schur-thesis} in which he defines the polynomial representations of the group $\GL_n(k)$---a theory which he developed more completely in his later paper \textit{\"Uber die rationalen Darstellungen der allgemeinen linearen Gruppe}\footnote{English: \textit{On the rational representations of the general linear group}} \cite{schur-rational}. In these papers, Schur develops the idea of a \textbf{polynomial representation of $\GL_n(k)$}, meaning a (finite dimensional) representation where the coefficient functions of the representing map \[\rho:\GL_n(k)\to \GL_m(k)\] are polynomial in each coordinate. For example, the map sending \[A=\begin{pmatrix} a&b\\ c&d \end{pmatrix}\mapsto \begin{pmatrix} a^2d-abc & acd-c^2b & 0\\ abd-b^2c & ad^2-bcd & 0\\ 0 & 0 & ad-bc \end{pmatrix}=\rho(A)\] is a three-dimensional polynomial representation of $\GL_2(\bbR)$. The block-diagonal form above demonstrates a direct sum decomposition of our representation into two parts: one two-dimensional piece, homogeneous of degree 3, and one one-dimensional piece, homogeneous of degree 2 (in the entries of $A$). A result in \cite{schur-thesis} tells us that, in fact, this can always be done: if $V$ is a polynomial representation of $\GL_n(k)$, then $V$ decomposes as a direct sum of representations \[V=\bigoplus_\delta V_\delta\] where each $V_\delta$ is a polynomial representation where the coefficient functions are \textit{homogeneous degree $\delta$}. This allows us to focus our attention on the structure of these $V_\delta$ as the fundamental building blocks of the theory. The key insight made in this theory comes from the observation that the vector space \[E^{\otimes d}\eqdef (k^n)^{\otimes d}\] can be made into a $(\GL_n(k),\frakS_d)$-bimodule in a very natural way, and that this bimodule gives us a way to relate $\rmod {\frakS_d}$ with $\lmod {\GL_n(k)}$ via the so-called \textbf{Schur-Weyl functor.} \subsection{A more modern treatment: affine group schemes} Schur's discovery, while already interesting enough by itself, takes on a new level of depth when one puts things in the right context. Later mathematicians realized that this phenomenon is best stated as a property of \textit{affine group schemes}, rather than of groups.
When put into this context, the notion of a rational representation (a group scheme morphism into $\GL(V)$) comes built in, and the polynomial condition becomes a very natural condition placed on the corresponding map between coordinate algebras. This better motivates many of the constructions that Schur made and opens up his theory to analysis using the tools of category theory and algebraic geometry. \subsection{The Schur-Weyl functor} Clearly a connection between representations of two groups that are so ubiquitous in group theory and math in general is a stunning observation, and much effort has been expended since the late 20th century to study this functor and its properties---especially in how it relates the representation theory of these two groups. For instance, Friedlander and Suslin \cite{friedlander-suslin} originally discussed the idea of \textbf{strict polynomial functors} and showed that the category of representations of the Schur algebra $S(n,d)$ was equivalent to the category $\calP_d$ of homogeneous degree $d$ strict polynomial functors. In later work, Krause \cite{krause-strict-poly-func} used an alternative construction of $\calP_d$ as the category of representations of the $d$-divided powers of the category of finitely generated projective $k$-modules. This category is denoted $\Gamma^d P_k$ (or $\Gamma^d_k$ for short) and his version of strict polynomial functors is $\Rep \Gamma^d_k$. The upshot of this definition is that the polynomial structure we desire is better encapsulated in the domain category, rather than placing awkward conditions on the functors themselves. This also enables Krause to define a monoidal structure on $\Rep \Gamma^d_k$ using the fact that presheaves are canonical colimits of representable presheaves. Krause's students Aquilino and Reischuk, in their paper \cite{aquilino-reischuk}, prove, among other facts, that under these natural monoidal structures the Schur-Weyl functor is in fact monoidal. This puts the theory of representations of these groups and algebras firmly in the realm of monoidal categories, opening up the area to new questions using tools from category theory. \subsection{Tools and further directions} Sections 5, 6, and 7 are devoted to reproducing the core aspects of some tools that can be used in solving problems in representation theory, including methods from homological algebra (the derived category) and triangulated categories (the Balmer spectrum). In fact, these two tools coalesce to give a way to analyze the (sometimes unwieldy) categories $\Rep S(n,r)$. We finish our discussion with some ideas of how to proceed from this knowledge to solving new problems. \subsection{Notation and conventions}\label{subsec:notation} Throughout this paper we will define $k$ to be an infinite field (not necessarily of characteristic zero or algebraically closed unless otherwise noted).
%Let $\Alg_k$ be the category of $k$-algebras and $\Grp$ denote the category of groups with homomorphisms.
We will use $\Gamma=\GL_n$ to denote the affine group scheme $\Hom_{\Algk}(k[x_{ij}|1\le i,j\le n]_{\det},-)$ and $\GL_n(k)$ to denote either the $k$-points of $\GL_n$ or the abstract group, depending on which viewpoint best suits the discussion.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage \section{The classical theory: Representations of \texorpdfstring{$\GL_n$}{GLn} and of \texorpdfstring{$\frakS_n$}{Sn}} We begin by detailing the theory behind the (polynomial) representations of $\GL_n$ as well as the representations of $\frakS_n$ to familiarize ourselves with the classical representation theory associated to these groups. \subsection{Representations of \texorpdfstring{$\frakS_n$}{Sn}} The representation theory for $\frakS_n$ over the complex numbers is a subject that has been widely studied by representation theorists and combinatorialists alike for over a century. Before we dive into specifics, we write down the idea originally worked out by Frobenius \cite{frobenius-charaktere} in his work in 1900 on the characters of $\frakS_n$: \begin{thm}\label{thm:frob-conj} The conjugacy classes (and thus isomorphism classes of irreducible representations over $\bbC$) of $\frakS_n$ are in bijection with partitions of $n$. \end{thm} \begin{rmk} In what follows we attempt to give a tangible, minimalistic overview of the nicest case of representations of $\frakS_n$. Some of the arguments below appeal more to intuition and examples than rigor, but we feel this better prepares the reader for computations in $\frakS_n$ without being weighed down by unnecessary details. This can all be made rigorous, of course, at the expense of some clarity and conciseness. \end{rmk} Let us first get a sense of how we can relate these two ideas by recalling some easy lemmas from group theory. Recall that each element of $\frakS_n$ can be written as a product of disjoint cycles and that this representation is unique up to reordering the cycles. We can make this representation unique by writing each cycle as one starting at its least element and then ordering the cycles by these least elements. For instance, the permutation (in two-line notation) \[\sigma=\begin{pmatrix}1&2&3&4&5&6&7&8\\ 2&1&7&5&3&8&4&6\end{pmatrix}\in\frakS_8\] is represented uniquely in this way as the product of cycles: \[\sigma=(1\,2)(3\,7\,4\,5)(6\,8).\] The next observation to recall: if $\tau,\eta\in\frakS_n$ and $\tau=(\tau_1\,\tau_2\,\cdots\,\tau_k)$ is a cycle, \[\eta\tau\eta^{-1}=(\eta(\tau_1)\,\eta(\tau_2)\,\cdots\,\eta(\tau_k)).\] We can see this demonstrated in the computation \begin{align*} (1\,3\,5)\sigma(1\,3\,5)^{-1}&=(1\,3\,5)(1\,2)(1\,5\,3)(1\,3\,5)(3\,7\,4\,5)(1\,5\,3)(1\,3\,5)(6\,8)(1\,5\,3)\\ &=(3\,2)(5\,7\,4\,1)(6\,8)\\ &=(1\,5\,7\,4)(2\,3)(6\,8) \end{align*} The important observation here is that the ``shape'' (the lengths of the cycles when written as a product of disjoint cycles) is preserved under conjugation. In fact, \begin{lem}\label{lem:conj-classes} The conjugacy classes of $\frakS_n$ are in one-to-one correspondence with the partitions of $n$. \end{lem} \begin{prf} Let $\scrP_n$ denote the partitions of $n$ and let $C_n$ denote the conjugacy classes in $\frakS_n$. We construct the set map \[\varphi:C_n\to \scrP_n\] by sending a conjugacy class to the weakly-decreasing list of cycle lengths (including trivial cycles, if necessary).
For instance in $\frakS_8$, \[(1\,5\,3)(2\,7)\qquad\text{corresponds to}\qquad (3,2,1,1,1).\] The results cited and demonstrated above show that this map is well defined---conjugation preserves the cycle lengths in the disjoint cycle representation of an element. Furthermore if $p\in \scrP_n$ is a partition, the adjoint action of $\frakS_n$ on $\varphi^{-1}(p)$ is transitive, since if two elements have the same cycle lengths when written as disjoint cycles, we can line the cycles up according to length and act by the permutation that ``puts labels in the right place''. If we look at $\sigma$ and the element we found by conjugation above, we have \begin{align*} (1\,2)(3\,7\,4\,5)(6\,8)\\ (3\,2)(5\,7\,4\,1)(6\,8) \end{align*} where we notice that $1\mapsto 3$, $3\mapsto 5$, and $5\mapsto 1$, meaning that the cycle that takes the top element to the bottom is $(1\,3\,5)$---although of course we already knew that. Another example is given by the elements $(1\,4\,5)$ and $(3\,2\,1)$. Here we want $1\mapsto 3$, $4\mapsto 2$ and $5\mapsto 1$. Thus one element that takes the first to the second is $(5\,1\,3)(2\,4)$. This demonstrates that the action is not free, since we could also act by $(5\,1\,3\,7)(2\,4)(6\,8)$ and get the same element. The important fact here is that $\frakS_n$ contains every permutation of the labels $1,\dots,n$, so there is always such an element. The surjectivity of this map is clear since from any partition of $n$ we can write down a product of disjoint cycles corresponding to this partition (which then must map to it), and injectivity follows from the transitivity argument above: two elements with the same list of cycle lengths are conjugate, so distinct conjugacy classes have distinct images. This proves the lemma. \end{prf} From here, the standard result that (again, over $\bbC$) the conjugacy classes of a group are in bijection with the irreducible representations finishes demonstrating how theorem \ref{thm:frob-conj} is true. But a simple set bijection belies the depth of the connection here. \subsubsection{Construction of the irreducible representations} It is possible, through the idea of a Young symmetrizer, to directly link a Young diagram to the corresponding irreducible representation. Throughout this subsection, we will be relying on facts developed in \cite{fulton-harris}, although there is also a more complete combinatorial picture painted in Fulton's book \textit{Young Tableaux} \cite{fulton-tableaux}. To begin our discussion, consider the trivial representation within the left regular representation $\bbC\frakS_n$: it is a one-dimensional subspace spanned by the element \[x_1=\sum_{\sigma\in\frakS_n}\sigma\] where you can see that this element is fixed by left multiplication, demonstrating that it has the trivial $\frakS_n$ action. The subspace spanned by the element \[x_{-1}=\sum_{\sigma\in\frakS_n}(-1)^{\operatorname{sign}(\sigma)}\sigma\] is the sign representation, where an odd permutation (sign 1) acts by $-1$. This is because \[\operatorname{sign}(\tau\sigma)=\operatorname{sign}(\tau)+\operatorname{sign}(\sigma)\pmod{2}.\] It turns out that these two representations form the two ``endpoints'' of the representation theory of $\frakS_n$. The exact sense in which this is true is captured through Young diagrams! For the purposes of illustration, let us return to our example above of $\frakS_8$.
Here the trivial and sign representations correspond (respectively) to the tableaux \[\ytableausetup{smalltableaux,centertableaux}\ydiagram{8}\qquad\text{and}\qquad\ydiagram{1,1,1,1,1,1,1,1}\] which, in turn, correspond to partitions $(8)$ and $(1,1,1,1,1,1,1,1)$ of $8$. The way to make this connection is through the definition of a \textit{Young symmetrizer:} \begin{defn} Fix an $n\ge 1$ and let $\lambda$ be a partition of $n$. Then define two elements of $\bbC\frakS_n$, $a_\lambda$ and $b_\lambda$, in the following way: \[a_\lambda=\sum_{\sigma\in R(T_\lambda)}\sigma\qquad\text{and}\qquad b_\lambda=\sum_{\sigma\in C(T_\lambda)}(-1)^{\operatorname{sign}(\sigma)}\sigma\] where $T_\lambda$ is the Young diagram corresponding to $\lambda$ and, given some labeling (say the canonical one that labels boxes left-to-right and top-to-bottom), $R(T_\lambda)$ (resp.\ $C(T_\lambda)$) denotes the subgroup of $\frakS_n$ stabilizing the rows (resp.\ columns) of $T_\lambda$ under the action of $\frakS_n$ on the labels. Then the \textbf{Young symmetrizer} of $\lambda$ is \[c_\lambda=a_\lambda b_\lambda\in\bbC\frakS_n.\] \end{defn} The canonical fillings of the diagrams above are\footnote{Here you can see yet another connection to disjoint cycle representations. Notice, under the map defined in lem.~\ref{lem:conj-classes}, that the conjugacy class corresponding to the trivial representation is the one consisting of ``long'' (length $n$) cycles. Using the unique ordering on products of disjoint cycles described after the statement of thm.~\ref{thm:frob-conj}, we can identify fillings with long cycles and we see that the cycle $(1\,2\,3\,4\,5\,6\,7\,8)$ is the only one in ``standard form'' in that it gives us a standard Young tableau. The complexity of the Young diagram (meaning how many different standard fillings it admits) gives us some information about the dimensionality of the corresponding irreducible representation, as we will see later.} \[\ytableaushort{12345678}\qquad\text{and}\qquad\ytableaushort{1,2,3,4,5,6,7,8}\] and so since the column stabilizer of the first diagram is trivial and the row stabilizer is everything, \[c_{(8)}=\left(\sum_{\sigma\in R(T_{(8)})}\sigma\right)\left(\sum_{\sigma\in C(T_{(8)})}(-1)^{\operatorname{sign}(\sigma)}\sigma\right)=\sum_{\sigma\in\frakS_n}\sigma=x_1\] and since the roles of the column and row stabilizing elements are reversed for the sign representation, we get \[c_{(1,1,1,1,1,1,1,1)}=\left(\sum_{\sigma\in R(T_{(1,1,1,1,1,1,1,1)})}\sigma\right)\left(\sum_{\sigma\in C(T_{(1,1,1,1,1,1,1,1)})}(-1)^{\operatorname{sign}(\sigma)}\sigma\right)=\sum_{\sigma\in\frakS_n}(-1)^{\operatorname{sign}(\sigma)}\sigma=x_{-1}.\] That the Young symmetrizers correspond with the elements spanning the corresponding representations is no coincidence! \begin{defn} The module $V_\lambda$ is the $\bbC\frakS_n$-module generated by the Young symmetrizer $c_\lambda$. \end{defn} Notice that the dimension of each $V_\lambda$ is determined by the number of linearly independent elements that lie in the orbit of $c_\lambda$.
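As a quick sanity check of this claim in the two extreme cases computed above (using only the formulas already established), note that for any $\sigma\in\frakS_8$ \[\sigma\cdot x_1=\sum_{\tau\in\frakS_8}\sigma\tau=x_1\qquad\text{and}\qquad \sigma\cdot x_{-1}=\sum_{\tau\in\frakS_8}(-1)^{\operatorname{sign}(\tau)}\sigma\tau=(-1)^{\operatorname{sign}(\sigma)}x_{-1},\] so the orbits of $c_{(8)}=x_1$ and $c_{(1,1,1,1,1,1,1,1)}=x_{-1}$ each span a one-dimensional space, recovering the trivial and sign representations respectively.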
We compute another example that gives a general pattern: \begin{ex} Let $\lambda=(2,1,1)$ be the partition of $4$, so \[T_\lambda=\ydiagram{2,1,1}.\] Given the canonical filling of $T_\lambda$, \[\ytableaushort{12,3,4},\] we have \[a_\lambda=e+(1\,2)\qquad\text{and}\qquad b_\lambda=e-(1\,3)-(1\,4)-(3\,4)+(1\,3\,4)+(1\,4\,3)\] and so we can compute that the Young symmetrizer for this partition is \[c_\lambda=e-(1\,3)-(1\,4)-(3\,4)+(1\,2)+(1\,3\,4)+(1\,4\,3)-(1\,3\,2)-(2\,1\,4)-(1\,2)(3\,4)+(1\,3\,4\,2)+(1\,4\,3\,2)\] and one can show (cf.\ \cite[48]{fulton-harris}) that this is the representation $V\wedge V$ where $V$ is the standard representation (the complement of the copy of the trivial representation spanned by the vector $(1,1,1,1)\in\bbC^4$ under the usual embedding of $\frakS_4$ in $\GL_4$ as permutation matrices). \end{ex} This completes the description of the representations of $\frakS_n$ over $\bbC$, but in fact everything we have done here holds over the splitting field of $\frakS_n$, that is, the minimal field such that representations don't split further under field extension. We haven't proved here that \begin{enumerate} \item the $V_\lambda$ are irreducible; or \item the $V_\lambda$ are pairwise nonisomorphic, \end{enumerate} but one can look up any of the standard texts (including the ones cited in this section) for more rigorous and thorough treatments of these facts. \subsection{Polynomial representations of \texorpdfstring{$\Gamma$}{Gamma}} Let $k$ be an infinite field\footnote{In some cases we will be able to allow $k$ to be a ring, but we will still need that $k$ be infinite so that polynomials over it are determined by their values.} and $\Gamma$ be the affine group scheme $\GL_n$. This can be thought of as the functor \[\Gamma:\Alg_k\to \Grp\quad\text{sending}\quad A\mapsto \GL_n(A).\] Then \begin{defn} A (finite dimensional) \textbf{representation} of $\Gamma$ is a (finite dimensional) vector space $V$ along with a group scheme homomorphism \[\rho:\Gamma\to \GL(V)\eqdef \Aut(V\otimes_k -)\] \end{defn} \begin{rmk} Representations of (the group, which can be thought of as the $k$ points of the $k$-scheme) $\GL_n(k)$ can be, in general, ``analytic.'' One can check that the map (for $k=\bbR$, say) \[\rho:k^\times=\GL_1(k)\to \GL(k^2)\qquad\text{via}\qquad x\mapsto\begin{pmatrix} 1 & \ln |x|\\ 0 & 1 \end{pmatrix}\] gives a group homomorphism (and thus representation) between these two groups, but the logarithm makes this representation decidedly \textit{not algebraic.} This leads to slightly more awkward definitions in more classical treatments of the theory (e.g. \cite{green}), where one has to specifically rule these out. The upshot to using a more algebro-geometric approach is that we start off in the world of rational maps where such a representation doesn't make sense. \end{rmk} Recall that the affine group scheme $\GL_n$ is represented by the algebra \[k[x_{ij}]_{\det}\] where $1\le i,j\le n$ and $\det$ is the polynomial corresponding to the determinant of the matrix $A=(x_{ij})$. Since $\GL_n$ is an affine scheme, we know that the global functions are \[k[\Gamma]=k^\Gamma\cong k[x_{ij}]_{\det}\] where we will (for clarity) use the notation $c_{ij}:\Gamma\to k$ to denote the function corresponding to $x_{ij}$.
\begin{defn}\label{def:poly-rep} A \textbf{polynomial representation} of $\Gamma$ is a representation $\rho:\Gamma\to \GL(V)$ (where $\dim_k V=m$) such that (on points) the structure maps (\ref{rmk:structure-maps}) of \[\rho_A:\Gamma(A)\to \GL(V)(A)\cong\GL_m(A)\] are polynomials in the functions $c_{ij}:\Gamma(A)\to A$ that extract the $(i,j)^{th}$ entry. If all the structure maps are homogeneous of degree $r$ for some fixed $r$, we say that $\rho$ is a \textbf{homogeneous degree $r$ polynomial representation of $\Gamma$.} \end{defn} \begin{rmk}\label{rmk:structure-maps} Recall (or learn for the first time!) that the \textit{structure maps} of a representation $(\rho,V)$ are a collection of maps $r_{ij}$ for $1\le i,j\le n$ from $\Gamma$ to $k$ such that for all $g\in \Gamma$: \[g\cdot v_i=\sum_{j=1}^n r_{ij}(g)v_j\] where we have picked a basis $\{v_1,\dots,v_n\}$ for $V$. Of course changing basis may change our $r_{ij}$, but their \textbf{span} $\langle r_{ij}\rangle$ is an invariant of the representation. \end{rmk} \begin{defn}\label{def:Mnr} Let $\Pol_k(n)=\Pol(n)$ be the collection of all polynomial representations of $\GL_n$ and let $\Pol_k(n,r)=\Pol(n,r)$ be the collection of all homogeneous degree $r$ polynomial representations of $\GL_n$. \end{defn} It is the \textit{polynomial} representations that we will concern ourselves with in the following sections. \subsubsection{Reducing scope} In what follows we (temporarily) restrict to the case of considering the $R$-points of the scheme, where $R\in\Alg_k$. Using some of our familiar friends from representation theory (as well as some clever twists), we can simplify this picture considerably by proving the following structural result: \begin{thm}[{\cite[pp.7-10]{schur-thesis}}]\label{thm:decomp} Every polynomial representation $V$ of the group $\GL_n(R)$ (where $R$ is an algebra over an infinite field $k$) decomposes as a direct sum \[V\cong\bigoplus_{\delta\in\bbN}V_\delta\] where $V_\delta$ is a \textit{homogeneous} polynomial representation of degree $\delta.$ \end{thm} Clearly, then, it suffices to understand the \textit{homogeneous degree $r$} polynomial representations of $\Gamma(R)$ if we are looking to understand the larger structure. We begin with a useful lemma extracted from a proof in \cite{schur-thesis} echoing the general theory of orthogonal decomposition of Artinian algebras. \begin{lem}\label{lem:orth-decomp} Let $C_0,\dots,C_m\in M_n(R)$ be mutually orthogonal idempotent matrices that sum to the identity. That is, \[I_n=\sum_i C_i\quad\text{and}\quad C_iC_j=\delta_{ij}C_i\] for all $0\le i,j\le m$. Then there exists an invertible matrix $P$ such that for some positive integers $d_0,\dots,d_m$ with $\sum_k d_k=n$ and for all $i$, \[P^{-1}C_iP=\begin{pmatrix} \mathbf{0}_{N_i} & &\\ & I_{d_i} & \\ & & \mathbf{0}_{M_i} \end{pmatrix}\] where $N_i=\sum_{0\le j<i}d_j$ and $M_i=n-d_i-N_i$ \end{lem} \begin{prf}[of lem~\ref{lem:orth-decomp}] We set $S_k=\{C_0,C_1,\dots,C_k\}$ and we proceed by induction on $k$. When $k=0$, $S_k=\{C_0\}$. Now since $C_0^2=C_0$, we get that 1 and 0 are the only eigenvalues of $C_0$, so there is an invertible $n\times n$ matrix $P_0$ and a positive integer $d_0$ such that \[P_0^{-1}C_0P_0=\begin{pmatrix} I_{d_0} & \\ & \mathbf{0}_{n-d_0} \end{pmatrix}\] which establishes the base case. Now assume that we have a matrix $P_{k-1}$ such that this property holds for all elements of $S_{k-1}$.
Define, for each $0\le i\le k$, \[C_i'\eqdef P^{-1}_{k-1}C_iP_{k-1}\] and since $C_k$ is assumed to be orthogonal to all other $C_i$, \[C_k'=\begin{pmatrix} \mathbf{0}_{N_k} & \\ & D_k \end{pmatrix}\] for some $D_k$. Now by properties of block diagonal matrices, we have \[D_k^2=D_k\] so the eigenvalues of $D_k$ are again one and zero. Thus there is an invertible $Q\in \GL_{n-N_k}$ such that \[Q^{-1}D_kQ=\begin{pmatrix}I_{d_k} &\\ & \mathbf{0}_{M_k}\end{pmatrix}\] and so by setting \[P_k\eqdef P_{k-1}\begin{pmatrix}I_{N_k} &\\ & Q\end{pmatrix}\] we can define \[C''_i\eqdef P_k^{-1}C_i P_k=\begin{pmatrix}I_{N_k} &\\ & Q\end{pmatrix}^{-1}C_i'\begin{pmatrix}I_{N_k} &\\ & Q\end{pmatrix}\] for $0\le i\le k$. We see immediately that $C_i'=C_i''$ for $0\le i<k$ and furthermore \[C_k''=\begin{pmatrix} \mathbf{0}_{N_k} & \\ & Q^{-1}D_kQ \end{pmatrix}=\begin{pmatrix} \mathbf{0}_{N_k} & &\\ & I_{d_k} & \\ & & \mathbf{0}_{M_k} \end{pmatrix}\] completing the inductive step. Thus the result holds for all $S_i$ and in particular for $S_m$, so the result is proven. \end{prf} We will also need another result on a special class of commuting block diagonal matrices: \begin{lem}\label{lem:block-diag} Let $R\in\Alg_k$ ($k$ an infinite field) and let $A$ be a block diagonal matrix over $k$ of the form \[A=\operatorname{diag}(x^mI_{d_m},x^{m-1}I_{d_{m-1}},\dots,I_{d_0})\] where $d_i$ is (clearly) the dimension of the $(m-i)^{th}$ block and let $B$ be any matrix that commutes with $A$ for every choice of $x\in k$. Then $B$ is block diagonal of the same shape as $A$. \end{lem} \begin{prf}[of lem~\ref{lem:block-diag}] We proceed by comparing the entries in $AB$ and $BA$: notice that \[(AB)_{ij}=\sum_k A_{ik}B_{kj}=A_{ii}B_{ij}=x^aB_{ij}\] and \[(BA)_{ij}=\sum_k B_{ik}A_{kj}=B_{ij}A_{jj}=x^bB_{ij}.\] We will show that if the $(i,j)^{th}$ position is not in one of the blocks of $A$, then $B_{ij}$ is zero. But if $(i,j)$ is not in one of the blocks of $A$, then the nonzero element in the $i^{th}$ row and the nonzero element in the $j^{th}$ column ($x^a$ and $x^b$ in the above equations) are not the same! Since $x$ is arbitrary, this forces $B_{ij}=0$, so $B$ is block diagonal with blocks the same as $A$. \end{prf} \begin{rmk} Notice that in the above proof we used implicitly that there is an $x\in k$ such that for all $a,b$ \[x^a=x^b\quad\Rightarrow\quad a=b\] which is true since $k$ is infinite. This can cause a problem for finite fields since, for instance, every element in $\bbF_p$ satisfies $x^p=x$. \end{rmk} And finally using these two lemmas allows us to prove our main result: \begin{prf}[of thm~\ref{thm:decomp}] We recreate the argument in Schur's thesis, translated from German and reinterpreted in more modern parlance. Let $(\rho,V)$ be a polynomial representation of $\GL_n(R)$ with $\dim_k V=r$. Then let $x\in R^\times$ be arbitrary (thought of as an indeterminate) and consider the matrix $A=xI_n\in\Gamma(R)$. The image of this matrix under $\rho$ is a matrix \[\rho(A)=\begin{pmatrix} p_{11}(x) & \cdots & p_{1r}(x)\\ \vdots & \ddots & \vdots\\ p_{r1}(x) & \cdots & p_{rr}(x) \end{pmatrix}\] where each $p_{ij}$ is a polynomial in $x$. Let $m=\max_{i,j}\deg p_{ij}$, and this gives us a decomposition \[\rho(A)=x^m C_0+x^{m-1}C_1+\cdots+ xC_{m-1}+C_m\] where each $C_i$ is an $r\times r$ matrix. Let $y$ be another indeterminate and $B=yI_n$.
By virtue of being a representation of $\GL_n(R)$, we get \[\rho(A)\rho(B)=\rho(xI_n)\rho(yI_n)=\rho(xyI_n)=\rho(AB)\] and using this setup we prove the following result: \[\text{For all $0\le i,j\le m$, with the $C_l$ as above,}\quad C_iC_j=\delta_{ij}C_i\] That this is true can be established by comparing coefficients in the equation \begin{align*} \rho(AB)&=\rho(A)\rho(B)\\ C_0(xy)^m+\cdots+C_i(xy)^{m-i}+\cdots+C_m&=C_0^2x^my^m+\cdots+C_iC_jx^{m-i}y^{m-j}+\cdots+C_m^2 \end{align*} Indeed, we immediately get that $C_i=C_i^2$ and furthermore the coefficients on $x^iy^j$ when $i\ne j$ give us \[0=C_{m-i}C_{m-j}.\] Thus we have shown that the $C_i$ form a set of orthogonal idempotent matrices and evaluating our original equation at $x=1$, we get (since $\rho$ is a homomorphism) \[I_r=1C_0+\cdots+1C_m=\sum C_i\] so the result from lemma~\ref{lem:orth-decomp} applies: we get a matrix $P$ such that \[P^{-1}\rho(xI_n)P=\begin{pmatrix} x^mI_{d_0} & & & &\\ & x^{m-1}I_{d_1} & & &\\ & & \ddots & &\\ & & & xI_{d_{m-1}} & \\ & & & & I_{d_m} \end{pmatrix}\] Now let $\rho'(g)=P^{-1}\rho (g)P$ for all $g\in\GL_n(R)$. This is a representation of $\Gamma$ since it differs from $\rho$ by an automorphism of $\GL(V)$. Since matrix multiplication is an algebraic operation, $\rho'$ is still a polynomial representation of $\GL_n(R)$. But notice that for all $g\in\GL_n(R)$ \[\rho'(g)\rho'(xI_n)=\rho'(xg)=\rho'(xI_n)\rho'(g)\] Then lemma \ref{lem:block-diag} gives us that $\rho'(g)$ decomposes in the same way for all $g\in \GL_n(R)$, so we know that $\rho'$ decomposes as a direct sum of representations \[\rho'=\sum_{i=0}^m \rho'_i\] where for each $i$ and $\lambda\in k$, \[\rho'_i(\lambda g)=\rho_i'(\lambda I_n)\rho_i'(g)=\lambda^{m-i}\rho'_i(g)\] so each $\rho_i'$ is a homogeneous polynomial representation of $\Gamma$ (of degree $m-i$). But of course the decomposition of a representation is independent of choice of basis, so we get a decomposition of $\rho$ into homogeneous pieces, as desired. \end{prf} This is wonderful if one is just interested in the representation theory of $\GL_n(A)$ for a specific $A$, but here we are interested in group \textit{scheme} representations. The result above effectively tells us how the scheme splits up on points, but how about the global structure? What we need is that this splitting is functorial. That is, if $\rho:\Gamma\to\GL(V)$ is a polynomial representation, there is a subscheme $\rho_r$ such that, for all $A$, $(\rho_r)_A$ is the homogeneous degree $r$ part of $\rho_A$. Let $\rho_r$ be such a map and let $\varphi:A\to B$ be an algebra morphism. Then we want that this induces a map \[\hat\varphi:(\rho_r)_A\to (\rho_r)_B.\] Fix a basis $v_1,\dots,v_n$ for $V$ and let $g\in \Gamma(A)$. Then since $(\rho_r)_A$ is homogeneous degree $r$, for all $v_i$, \[g\cdot_A v_i=\sum_j f_{ij}(g)v_j\] where each $f_{ij}$ is a homogeneous degree $r$ polynomial in the $c_{ij}$. Now if we write $(\rho_r)_A(g)=M_{g,A}=(m_{ij})_{i,j}$, this means that if $\lambda\in k$, \[M_{\lambda g, A}=(\lambda^rm_{ij})_{i,j}=\lambda^rM_{g,A}\] The map $\hat \varphi$ is such that \[\hat\varphi(M_{g,A})=(\varphi(m_{ij}))_{i,j}=M_{g,B}\] and so \[M_{\lambda g,B}=\hat\varphi(M_{\lambda g,A})=(\varphi(\lambda^r m_{ij}))_{i,j}=(\lambda^r \varphi(m_{ij}))_{i,j}=\lambda^rM_{g,B}\] which tells us that the map $\rho_r$ is indeed a functor $\Gamma\to\GL(V)$, so it defines a subrepresentation of $\rho$. Thus the pointwise splitting shown above lifts to a splitting of the entire representation $\rho$.
\subsubsection{Monomials and multi-indices}\label{subsubsec:indices} All of the discussion up to this point has revolved around polynomials in $n^2$ variables, which quickly gets unwieldy unless one uses some better notation. To that end, \begin{defn} An $(n,r)$-\textbf{multi-index} $i$ is an $r$-tuple $(i_1,\dots,i_r)$ where each $i_j\in\underline n\eqdef\{1,\dots,n\}$. The collection of all $(n,r)$-multi-indices is denoted $I(n,r)$. \end{defn} \begin{rmk} One can also think of an element $i\in I(n,r)$ as a (set) map \[i:\underline r\to\underline n.\] \end{rmk} The idea here is to associate to each monomial in a polynomial ring in many variables a tuple indicating its multidegree. That is, we think of \[(i_1,\dots,i_r)\quad\leftrightsquigarrow\quad x_{i_1}\cdots x_{i_r}\] as corresponding to the same object. This is wonderful except for one small flaw: polynomials are commutative and multi-indices (as we have defined them) aren't! For example, in $I(3,4)$, \[(2,2,1,3)\quad\leftrightsquigarrow\quad x_1x_2^2x_3\quad\leftrightsquigarrow\quad (3,2,1,2).\] To handle this disparity, we define an equivalence relation on $I(n,r)$ where we say that $i\sim j$ if they are in the same orbit under the natural $\frakS_r$ action. That is, if there exists $\sigma\in\frakS_r$ such that \[(i_1,\dots,i_r)=(j_{\sigma(1)},\dots,j_{\sigma(r)})\] In the context of polynomial representations of $\Gamma$, we want to consider polynomials in the coordinate functions $c_{ij}$, so as a matter of notation if $i,j\in I(n,r)$, let $c_{i,j}$ denote the monomial \[c_{i,j}=c_{i_1j_1}\cdots c_{i_rj_r}.\] Again, we want to take into account that we can permute the order on the right hand side, but now we need that $i_k$ and $j_k$ remain linked to the same function. To deal with this, we define an equivalence relation $\sim$ on $I(n,r)\times I(n,r)$ such that \[(a,c)\sim (b,d)\] if there exists a $\sigma\in\frakS_r$ such that \[(a_1,\dots,a_r)=(b_{\sigma(1)},\dots,b_{\sigma(r)})\quad\text{and}\quad(c_1,\dots,c_r)=(d_{\sigma(1)},\dots,d_{\sigma(r)}).\] The upshot of this work is that it gives us a bijection between (total) degree $r$ monomials in the $c_{ij}$ and the set \[I(n,r)\times I(n,r)/\sim\] \subsubsection{\texorpdfstring{$A_k(n,r)$}{Ak(n,r)}} Notice that if $V\in \Pol(n,r)$, each of its structure maps is a homogeneous degree $r$ polynomial. As the first object of study, consider \begin{defn} Let $A_k(n,r)=A(n,r)$ denote the collection of all homogeneous degree $r$ polynomials in the coordinate functions $c_{ij}:\Gamma\to k$. \end{defn} It is not too hard to see that \begin{prop} $A_k(n,r)$ is spanned by the elements \[\{c_{i,j}|(i,j)\in I(n,r)\times I(n,r)\}\] \end{prop} however it takes a short argument to see \begin{lem} The dimension of $A_k(n,r)$ over $k$ is $\binom{n^2+r-1}{n^2-1}=\binom{n^2+r-1}{r}$. \end{lem} \begin{prf} The following is a ``stars and bars'' argument that is pervasive in combinatorics. See for example \cite{stanley} if unfamiliar with these techniques. Fix an ordering of the $c_{ij}$ (say the dictionary order) and relabel them $\{\gamma_1,\dots,\gamma_{m}\}$ (here $m=n^2$) according to this order. Then the degree $r$ monomials are in bijection with $m$-tuples $(a_1,\dots,a_{m})\in\bbN^m$ such that $\sum_i a_i=r$ via the map which sends \[(a_1,\dots,a_{m})\mapsto \gamma_1^{a_1}\cdots\gamma_{m}^{a_{m}}.\] But choosing such an element is the same as inserting $m-1$ bars into a line of $r$ stars (that is, an ordered partition of $r$ into $m$ parts, where parts are allowed to be zero).
But this is equivalent to choosing $m-1$ bars in a field of $m+r-1$ symbols. This is just \[\binom{m+r-1}{m-1}\] and a well-known identity for binomial coefficients gets us the final equality. \end{prf} \begin{ex} In case the reader is unfamiliar with this kind of reasoning, consider the case when $m=5$ and $r=4$. Then the composition $(1,0,0,2,1)$ corresponding to $\gamma_1\gamma_4^2\gamma_5$ corresponds to the stars-and-bars diagram \begin{center} $\ast|||\ast\ast|\ast$ \end{center} where there are $m+r-1=8$ symbols, $r=4$ of which are stars. \end{ex} \subsubsection{Hopf algebras and group schemes} $A(n,r)$ lies within $k^\Gamma=k[\Gamma]$, which has the structure of a Hopf algebra induced from the group structure on $\Gamma$. More precisely, the functor $\Gamma:\Alg_k\to \Grp$ that assigns to every $k$-algebra $A$ the group $\GL_n(A)$ is representable. In other words, \[\GL_n(-)\simeq \Hom_{\Alg_k}(R,-)\] where $R=k[\Gamma]$. The anti-equivalence of the categories of affine group schemes over $k$ and finitely generated commutative $k$-Hopf algebras (of which this is a particular instance) follows from the Yoneda lemma (cf.\ \cite[chp. 1]{waterhouse}). The resulting Hopf algebra will be (as an algebra) $R$, along with a coalgebra structure induced by the group structure on $\Gamma$: we have maps $\mu,\epsilon$, the multiplication and unit maps on $\Gamma$, satisfying the diagrams \begin{center} \begin{tikzcd} \Gamma\times\Gamma\times\Gamma\ar[r,"\mu\times\id"]\ar[d,"\id\times\mu"] & \Gamma\times\Gamma\ar[d,"\mu"]\\ \Gamma\times\Gamma\ar[r,"\mu"] & \Gamma \end{tikzcd} \quad \begin{tikzcd} \ast\times G\ar[r,"\epsilon\times\id"] &G\times G\ar[d,"\mu"]& G\times \ast\ar[l,"\id\times\epsilon",swap]\\ & G\ar[ur,leftrightarrow,"\sim"]\ar[swap,ul,leftrightarrow,"\sim"] & \end{tikzcd} \end{center} (where $\ast$ is the trivial group and initial object in the category of group schemes) giving us associativity and identity. Yoneda tells us that the maps between schemes \[\mu:\Gamma\times\Gamma\to \Gamma\quad\text{and}\quad \epsilon:\ast\to\Gamma\] give rise to maps in $\Alg_k$: \[\Delta\eqdef\mu^\ast:R\to R\otimes_k R\quad\text{and}\quad \varepsilon\eqdef\epsilon^\ast: R\to k\] satisfying diagrams \begin{center} \begin{tikzcd} R\otimes R\otimes R & R\otimes R\ar[l,"\Delta\otimes\id",swap]\\ R\otimes R\ar[u,"\id\otimes \Delta"] & R\ar[u,"\Delta"]\ar[l,"\Delta"] \end{tikzcd} \quad\begin{tikzcd} k\otimes R\ar[dr,"\sim",leftrightarrow,swap] & R\otimes R\ar[l,"\varepsilon\otimes \id",swap]\ar[r,"\id\otimes\varepsilon"] & R\otimes k\ar[dl,"\sim",leftrightarrow]\\ & R\ar[u,"\Delta"] & \end{tikzcd} \end{center} \begin{prop} The maps $\Delta$ and $\varepsilon$ which, in coordinates, for $1\le i,j\le n$, are \[\Delta(c_{ij})=\sum_k c_{ik}\otimes c_{kj}\quad\text{and}\quad \varepsilon(c_{ij})=\delta_{ij}\] give a coalgebra structure on $R$. \end{prop} That these maps satisfy the diagrams above is a straightforward computation. That, furthermore, these maps make $R$ into a bialgebra amounts to checking that $\Delta$ and $\varepsilon$ are algebra morphisms. But what is not immediately obvious is why \textit{these particular maps} are the ones we use on $R$. To see this, one must dig into the Yoneda correspondence a bit to see what happens to the multiplication and unit morphisms. In service of this, let's translate matrix multiplication into a statement about representable functors.
We want to define $m$ as a map \[m:\Hom(R,-)\times\Hom(R,-)\to \Hom(R,-)\] and to see what $m$ should do in this context, we evaluate at a $k$-algebra $A$: \[m_A:\Hom(R,A)\times\Hom(R,A)\to \Hom(R,A)\] where we interpret each map $f:R\to A$ as a matrix with entries in $A$ by saying $f$ corresponds to a matrix $A_f$ such that \[(A_f)_{ij}=f(c_{ij}).\] Then if $(f,g)\in \Hom(R,A)\times\Hom(R,A)$, we want the multiplication to be usual matrix multiplication, so \[m_A(f,g)=A_fA_g\] and by computing the $(i,j)^{th}$ entry everywhere, we get \[m_A(f,g)(c_{ij})=(A_fA_g)_{ij}=\sum_{k=1}^n(A_f)_{ik}(A_g)_{kj}=\sum_k f(c_{ik})g(c_{kj}).\] This gives us the values of our component maps everywhere, so this defines the natural transformation $m$. Then (the proof of) Yoneda tells us that we can compute the corresponding algebra morphism as \[\Delta(c_{ij})=m_{R\otimes R}(\iota_l,\iota_r)(c_{ij})=\sum_k \iota_l(c_{ik})\iota_r(c_{kj})=\sum_k c_{ik}\otimes c_{kj}.\] Above, $\iota_l$ (resp. $\iota_r$) denotes the map $R\to R\otimes R$ which embeds $R$ into the left (resp. right) tensor factor. Notice that, under the identification $\Hom(R,R\otimes R)\times\Hom(R,R\otimes R)\simeq\Hom(R\otimes R,R\otimes R)$, the pair $(\iota_l,\iota_r)$ corresponds to $\id_{R\otimes R}$. Using the same identification between maps and matrices over $A$, let $\ast:k\to A$ be the unique map sending $1_k\mapsto 1_A$. Then we want \[u_A(\ast)=f:R\to A\] corresponding to the identity $(n\times n)$ matrix over $A$. So \[u_A(\ast)(c_{ij})=f(c_{ij})=(I_n)_{ij}=\delta_{ij}\cdot 1_A.\] Again applying Yoneda, we have \[\varepsilon(c_{ij})=u_k(\id_k)(c_{ij})=\delta_{ij}1_k\] and we have our counit map. In fact, as mentioned before, $R$ becomes a bialgebra (a Hopf algebra even, although we won't need the antipode here). This means that $\Delta$ and $\varepsilon$ are algebra morphisms for the natural algebra structure given by multiplication $m$ on $R$. In diagrams: \begin{center} \begin{tikzcd} R^{\otimes 4}\ar[r,"\id\otimes\tau\otimes \id"] & R^{\otimes4}\ar[r,"m\otimes m"] & R\otimes R\\ R\otimes R\ar[u,"\Delta\otimes \Delta"]\ar[rr,"m"] & & R\ar[u,"\Delta"] \end{tikzcd} \quad\begin{tikzcd} R\otimes R\ar[r,"m"]\ar[d,"\varepsilon\otimes\varepsilon"] & R\ar[d,"\varepsilon"]\\ k\otimes k\ar[r,"m"] & k \end{tikzcd} \end{center} where $\tau:R\otimes R\to R\otimes R$ is the twist map $a\otimes b\mapsto b\otimes a$. Chasing an element through the diagram on the left (and writing $\tilde m=(m\otimes m)\circ(\id\otimes\tau\otimes\id)$), we get \[\tilde m\circ (\Delta\otimes \Delta)(c_{ij}\otimes c_{ab})=\sum_{1\le k,l\le n}c_{ik}c_{al}\otimes c_{kj}c_{lb}=\Delta(c_{ij}c_{ab})\] or using our multi-index notation, \[\Delta(c_{(i,a),(j,b)})=\sum_{(k,l)\in I(n,2)}c_{(i,a),(k,l)}\otimes c_{(k,l),(j,b)}.\] Written more simply, the fact that $\Delta$ is an algebra morphism can be written \[\Delta(a\cdot b)=\Delta(a)\ast\Delta(b)\] under suitable definitions of $\cdot$ and $\ast$. In a way that can be made precise, this means in particular that \[\Delta(a\cdot b\cdot c)=\Delta(a)\ast\Delta(b\cdot c)=\Delta(a)\ast\Delta(b)\ast\Delta(c)\] and so on (since multiplication everywhere is associative) and therefore we can define this for arbitrary monomials and extend $k$-linearly: \begin{prop} If $i,j\in I(n,r)$, then \[\Delta(c_{i,j})=\sum_{k\in I(n,r)}c_{i,k}\otimes c_{k,j}\quad\text{and}\quad \varepsilon(c_{i,j})=\delta_{i,j}\] \end{prop} One can easily see that degree is preserved by $\Delta$, meaning that \begin{prop} $\Delta$ and $\varepsilon$ descend to a coalgebra structure on $A(n,r)$. That is, $A(n,r)$ is a ($k$-)coalgebra.
\end{prop} \subsubsection{The structure maps of \texorpdfstring{$\rho$}{rho}} This context empowers us to better understand what is meant by the structure maps of a representation. At the moment, we define a homogeneous polynomial representation by how it looks on points and simply use the fact that it coalesces into a functor. A natural question to ask is how the structure morphisms relate to the entries of the matrices $\rho_A(g)=M_{g,A}$. To understand the answer to this question, we need to uncover how the entries of a matrix come about. We have been thinking of an element of $\Gamma(A)$ as a matrix, but it is, by definition, a morphism \[k[x_{ij}]_{\det}\to A\] and then thinking of $\Aut(V\otimes A)\cong\GL_m(A)$ in the same way, we interpret a representation as a map \[\rho:\Hom(k[x_{ij}]_{\det},-)\to \Hom(k[y_{kl}]_{\det},-)\] where $i$ and $j$ run from 1 to $n$ and $k$ and $l$ run from 1 to $m$. Then the Yoneda lemma tells us that $\rho$ corresponds to an algebra map \[\rho^\ast:k[y_{kl}]_{\det}\to k[x_{ij}]_{\det}.\] This map is the one such that if $f\in\Hom(k[x_{ij}]_{\det},A)$, \[\rho_A(f)=f\circ\rho^\ast:k[y_{kl}]_{\det}\to k[x_{ij}]_{\det}\to A.\] What are the structure maps for this representation? We can compute for any $g\in \GL_n(A)$ (which we, through a mild abuse of notation, think of as a map $g:k[x_{ij}]_{\det}\to A$ where $g(x_{ij})=g_{ij}$) \[\rho_A(g)\cdot v_i=\sum_j\big(\rho_A(g)\big)_{ij}v_j=\sum_j\big((g\circ\rho^\ast)(y_{ij})\big)v_j=\sum_j g\big(\rho^\ast(y_{ij})\big)v_j=\sum_j\rho^\ast(y_{ij})(g)\,v_j\] from which we can see \begin{lem} Let $\rho:\Gamma\to \GL(V)$ be a polynomial representation. Then for any $A\in\Alg_k$, the structure maps $f_{ij}$ of the group representation $\rho_A:\GL_n(A)\to \GL_m(A)$ are precisely the $\rho^\ast(y_{ij})$. \end{lem} This enables us to re-define polynomial representations in the following way (compare with definition \ref{def:poly-rep}): \begin{defn}\label{def:poly-rep-new} A finite dimensional \textbf{polynomial representation of $\Gamma$ of degree $r$} is a finite dimensional vector space $V$ over $k$ along with a scheme map $\rho:\Gamma\to \GL(V)$ such that the associated algebra map $\rho^\ast:k[\GL(V)]\to k[\Gamma]\cong k[x_{ij}]_{\det}$ is homogeneous degree $r$. In other words, the image of $\rho^\ast$ is entirely contained within the degree $r$ graded piece of $k[x_{ij}]\subseteq k[\Gamma]$. \end{defn} \begin{rmk} In the following section, we freely identify $k[x_{ij}]_{\det}$ with the ring of functions on $\Gamma$, and $k[y_{kl}]_{\det}$ with the ring of functions on $\GL(V)$ (where $m=\dim V$). \end{rmk} \subsubsection{Comodules} In this section let $A=A(n,r)$, which we have just established is a coalgebra with $\Delta$ and $\varepsilon$ defined above. \begin{defn} A (left) \textbf{$A$-comodule} is a vector space $V$ over $k$ along with a (left) \textbf{$A$-coaction} given by a ($k$-)morphism \[\phi:V\to A\otimes_k V\] that is both \textbf{coassociative and counital} in the sense that the diagrams in Figure~\ref{fig:comodule} commute. \end{defn} \begin{figure} \centering \begin{tikzcd} V\ar[r,"\phi"]\ar[d,"\phi"] & A\otimes V\ar[d,"\id\otimes\phi"]\\ A\otimes V\ar[r,"\Delta\otimes \id",swap] & A\otimes A\otimes V \end{tikzcd}\qquad \begin{tikzcd} V\ar[r,"\phi"]\ar[rd,"\sim",swap] & A\otimes V\ar[d,"\varepsilon\otimes\id"]\\ & k\otimes V \end{tikzcd} \caption{The coassociative and counital axioms} \label{fig:comodule} \end{figure} Given two $A$-comodules $V$ and $W$, a comodule morphism $\varphi:V\to W$ is one that preserves the coaction.
That is, \[(\id\otimes\varphi)\circ\phi_V(v)=\phi_W(\varphi(v))\] for all $v\in V$. \begin{defn} Let $A$ be any coalgebra. Then $\lcomod A$ denotes the category of (finite-dimensional, left) $A$-comodules along with comodule morphisms. \end{defn} Sometimes a more useful way to think of polynomial representations is as comodules. That idea is made more formal in the following lemma: \begin{lem}\label{lem:comod-map} Every homogeneous degree $r$ polynomial representation of $\GL_n$ gives rise to an $A(n,r)$-comodule in the following way: the underlying vector space is the same and the $A(n,r)$ coaction is given by \[\phi(v_i)=\sum_j\rho^\ast(y_{ij})\otimes v_j\] where in the above we identify the algebra $k[y_{ij}]_{\det}$ with the ring of functions on $\GL(V)$. As a matter of notation, we call this map \[\Psi:\Pol(n,r)\to \lcomod{A(n,r)}.\] \end{lem} \begin{prf} By definition this gives us a map into $\lcomod{k[\Gamma]}$, but we can see that the image is entirely contained within $\lcomod{A(n,r)}$ since, in light of definition \ref{def:poly-rep-new}, $\rho^\ast(y_{ij})$ is homogeneous degree $r$. Then it remains to show that the given map is legitimately a coaction. We can compute (identifying the map $\varepsilon:k[\Gamma]\to k$ as the matrix $I_n$ over $k$) \[(\varepsilon\otimes\id)\circ\phi(v_i)=\sum_j \varepsilon(\rho^\ast(y_{ij}))\otimes v_j=\sum_j(\varepsilon\circ\rho^\ast)(y_{ij})\otimes v_j=\sum_j\rho_k(I_n)_{ij}\otimes v_j=1_k\otimes v_i\] so $\phi$ satisfies the counit identity. For coassociativity, identify the morphism $\Delta:k[\Gamma]\to k[\Gamma]\otimes k[\Gamma]$ with the matrix $D$ whose entries are $\Delta(x_{ij})$. Then \begin{align*} (\Delta\otimes\id)\circ\phi(v_i)&=\sum_j\Delta(\rho^\ast(y_{ij}))\otimes v_j\\ &=\sum_j\rho(D)_{ij}\otimes v_j\\ &=\sum_j \left(\sum_k \rho^\ast(y_{ik})\otimes\rho^\ast(y_{kj})\right)\otimes v_j\\ &=\sum_k \rho^\ast(y_{ik})\otimes\left(\sum_j \rho^\ast(y_{kj})\otimes v_j\right)\\ &=(\id\otimes\phi)\sum_k\rho^\ast(y_{ik})\otimes v_k\\ &=(\id\otimes\phi)\circ\phi(v_i) \end{align*} which shows that $\phi$ gives an $A(n,r)$-comodule structure on the underlying vector space of a representation $\rho$ of $\Gamma.$ \end{prf} \begin{rmk} Above we used the fact that \[\Delta(\rho^\ast(y_{ij}))=\sum_k\rho^\ast(y_{ik})\otimes\rho^\ast(y_{kj}),\] which is true since the map \[\Delta\circ\rho^\ast:k[\GL(V)]\to k[\Gamma]\otimes k[\Gamma]\] corresponds to the map \[\rho\circ m:\Gamma\times\Gamma\to \GL(V)\] where, after evaluating at $A\in\Alg_k$, \[(\rho_A\circ m_A)(M,N)=\rho_A(MN)=\rho_A(M)\rho_A(N)=m_A\circ(\rho_A\times\rho_A)(M, N)\] which implies we have the equation \[\Delta_{k[\Gamma]}\circ \rho^\ast=(\rho^\ast\otimes \rho^\ast)\circ \Delta_{k[\GL(V)]}\] and the equality follows. \end{rmk} The preceding lemma is in service of reframing the problem in terms of the comodules of a nicely-behaved (e.g. finite dimensional!) coalgebra: \begin{lem} The map $\Psi$ defined in lemma~\ref{lem:comod-map} is a functor. \end{lem} \begin{prf} Let $f:(V,\rho)\to (W,\eta)$ be a map of homogeneous degree $r$ polynomial representations of $\Gamma$.
This is a linear map $f:V\to W$ satisfying the usual property that for any $A\in\Alg_k$, $a\in V\otimes A$ and $g_A\in \Gamma(A)$, \[f(\rho_A(g_A)a)=\eta_A(g_A)f(a),\] where we abuse notation and write $f$ for $f\otimes\id_A$. %This corresponds to a diagram %\begin{center} % \begin{tikzcd}[column sep=large] % \Gamma\times V\ar[r,"{(g,v)\mapsto \rho(g)v}"]\ar[d,"\id\times f",swap] & V\ar[d,"f"]\\ % \Gamma\times W\ar[r,"{(g,w)\mapsto \eta(g)w}",swap] & W % \end{tikzcd} % \end{center} % and upon evaluating at $k[\Gamma]$ the action gives us a coaction defined by (for all $v\in V$) % \[\phi_\rho(v)=\rho_{k[\Gamma]}(\id_{k[\Gamma]})(1_{k[\Gamma]}\otimes v)\in k[\Gamma]\otimes V\] We want to show that $f$ is a map of $A(n,r)$-comodules. Let $v_i\in V$ be a basis element as we have used before. Then if $f(v_i)=\sum_k a_{ik}w_{k}$ where $W=\langle w_k\rangle$, \[\phi_W\circ f(v_i)=\phi_W\left(\sum_ka_{ik}w_k\right)=\sum_ka_{ik}\phi_W(w_k)=\sum_ka_{ik}\left(\sum_j \eta^\ast(y_{kj})\otimes w_j\right)\] and on the other hand \[(\id\otimes f)\circ\phi(v_i)=(\id\otimes f)\left(\sum_j\rho^\ast(y_{ij})\otimes v_j\right)=\sum_j\rho^\ast(y_{ij})\otimes f(v_j)=\sum_j\rho^\ast(y_{ij})\otimes\left(\sum _ka_{jk}w_k\right)\] and using the identification of \[k[\Gamma]\otimes W\cong \Hom_k(\Gamma,W)\] we see the first line corresponds to the map \[g\mapsto \sum_ka_{ik}\sum_j\eta(g)_{kj}w_j=\sum_ka_{ik}\eta(g)w_k=\eta(g)f(v_i)\] and the second corresponds to \[g\mapsto \sum_j\rho(g)_{ij}f(v_j)=f\left(\sum_j\rho(g)_{ij}v_j\right)=f(\rho(g)v_i)\] and these two values are equal by virtue of $f$ being a morphism of representations of $\Gamma$. \end{prf} Finally we prove that these are the same category! \begin{thm} The map \[\Psi:\Pol(n,r)\to \lcomod{A(n,r)}\] is an equivalence of categories. \end{thm} \begin{prf} To prove essential surjectivity, let $V$ be an $A(n,r)$-comodule with coaction $\phi:V\to A(n,r)\otimes V$. We define from this an object in $\Pol(n,r)$ via the action \[g\cdot v=(e_g\overline\otimes \id)\circ\phi(v)\] where $e_g:k[\Gamma]\to k$ is evaluation at $g\in \Gamma$ and \[e_g\overline\otimes\id (f\otimes v)=f(g)v.\] This defines the map \[\rho:\Gamma\to \GL(V)\quad\text{via}\quad g\mapsto (e_g\overline\otimes \id)\circ\phi\] which is a representation of $\Gamma$. To check that this is homogeneous degree $r$, we just compute (where here we write $\phi(v)=\sum_i f_i\otimes v_i$) \[\rho(\lambda g)(v)=(e_{\lambda g}\overline\otimes\id)\circ\left(\sum f_i\otimes v_i\right)=\sum f_i(\lambda g)v_i=\lambda^r\sum f_i(g)v_i=\lambda^r\rho(g)(v)\] where we used that the $f_i$ are homogeneous degree $r$ maps. For full faithfulness, note that $\Psi$ acts as the identity on underlying $k$-linear maps, and (reversing the computation in the proof of the previous lemma) a linear map $f:V\to W$ intertwines the coactions if and only if it intertwines the actions of $\Gamma$; hence $\Psi$ is fully faithful. Thus $\Psi$ is an equivalence of categories, as desired. \end{prf} \begin{rmk} Actually, the above proof can be modified slightly to show that $\Psi$ has a functorial inverse--that is, $\Psi$ is an \textit{isomorphism of categories}. Since we are only interested in representations up to isomorphism, however, equivalence is all we need. \end{rmk} \subsubsection{The Schur algebra} Finally we get to the actual object of study: \begin{defn}\label{def:schur-alg} A \textbf{Schur algebra} is an element of the two-parameter family $\{S(n,r)\}=\{S_k(n,r)\}$ where $n$ and $r$ are any positive integers.
As a $k$-vector space, $S(n,r)$ is the linear dual of $A(n,r)$: \[S(n,r)=A(n,r)^\ast=\Hom_k(A(n,r),k)\] Let $\xi_{i,j}$ denote the element dual to $c_{i,j}\in A(n,r)$. In other words: \[\xi_{a,b}(c_{i,j})=\begin{cases} 1, & (a,b)\sim(i,j)\\ 0, & \text{otherwise} \end{cases}\] \end{defn} \begin{lem} The coalgebra structure $(\Delta,\varepsilon)$ on $A(n,r)$ defines an algebra structure on $S(n,r)$. \end{lem} \begin{prf} Define the unit map $u:k\to S(n,r)$ by sending $1_k$ to the unit function $\1$, which is given by \[\1(c_{i,j})=c_{i,j}(I_n)=\delta_{i,j}.\] Define multiplication $(\cdot)$ in $S(n,r)$ as follows: if $f,g\in S(n,r)$ then for any $x\in A(n,r)$ define \[(f\cdot g)(x)=m_k\circ (f\otimes g)\circ \Delta(x)=\sum f(x_{(1)})g(x_{(2)})\] where $m_k:k\otimes k\to k$ denotes multiplication in $k$ and $\Delta(x)=\sum x_{(1)}\otimes x_{(2)}$ in Sweedler notation. Then we must just confirm that these maps satisfy the properties of a $k$-algebra. $(\cdot)$ is $k$-bilinear because (for instance) \begin{align*} ((af+bg)\cdot h)(x)&=\sum (af+bg)(x_{(1)})\,h(x_{(2)})\\ &=\sum \big(a\,f(x_{(1)})h(x_{(2)})+b\,g(x_{(1)})h(x_{(2)})\big)\\ &= a\sum f(x_{(1)})h(x_{(2)})+ b\sum g(x_{(1)})h(x_{(2)})\\ &=(a(f\cdot h)+b(g\cdot h))(x). \end{align*} By $k$-linearity, it suffices to show that the unit $\1$ acts as it should on the spanning set $\{\xi_{i,j}\}$, evaluated on a basis element $c_{a,b}$: \[(\1\cdot \xi_{i,j})(c_{a,b})=\sum_{k\in I(n,r)} \1(c_{a,k})\,\xi_{i,j}(c_{k,b})=\1(c_{a,a})\,\xi_{i,j}(c_{a,b})=\xi_{i,j}(c_{a,b})\] and a similar identity holds on the right. Then it remains to show that this multiplication is associative. Again by linearity it suffices to check that this works on the spanning set $\{c_{i,j}\}$: \begin{align*} ((\alpha\cdot \beta)\cdot\gamma)(c_{i,j})&=\sum_{k\in I(n,r)}(\alpha\cdot\beta)(c_{i,k})\gamma(c_{k,j})\\ &=\sum_k\left(\sum_{l\in I(n,r)}\alpha(c_{i,l})\beta(c_{l,k})\right)\gamma(c_{k,j})\\ &=\sum_l\alpha(c_{i,l})\left(\sum_k \beta(c_{l,k})\gamma(c_{k,j})\right)\\ &=\sum_l\alpha(c_{i,l})(\beta\cdot\gamma)(c_{l,j})\\ &=(\alpha\cdot(\beta\cdot\gamma))(c_{i,j}). \end{align*} Thus since we have $k$-linear maps $\1$ and $m=(\cdot)$ satisfying the usual identity and associativity diagrams, $S(n,r)$ is a $k$-algebra with $\1$ and $m$ as its unit and multiplication. \end{prf} There is a standard result that says \begin{prop} The finite dimensional left comodules of a coalgebra $\Lambda$ are the same as finite dimensional right modules over $\Lambda^\vee=\Hom(\Lambda, k)$. \end{prop} \begin{prf}[sketch] The key idea here is as follows: a right module over $\Lambda^\vee$ is, equivalently, a $k$-linear map \[V\otimes \Lambda^\vee\to V\] satisfying the usual associativity and identity axioms. But notice that \[\Hom(V\otimes \Lambda^\vee,V)\cong\Hom(V,\Hom(\Lambda^\vee,V))\supseteq\Hom(V,\Lambda\otimes V)\] (with equality when $\Lambda$ is finite dimensional, as is the case for $A(n,r)$), and the maps on the right correspond to $\Lambda$-comodule structures. It remains to show that the associativity and unit axioms restrict the collection on the left in the right way to give us a map satisfying the coassociativity and counit axioms on the right. \end{prf} The coalgebra $A(n,r)$ is not cocommutative in general (so $S(n,r)$ is not commutative; for instance $S(n,1)\cong M_n(k)$), but the map $c_{i,j}\mapsto c_{j,i}$ is an algebra automorphism and coalgebra anti-automorphism of $A(n,r)$, hence induces an algebra isomorphism $S(n,r)\cong S(n,r)^{\mathrm{op}}$ under which right and left modules correspond. Combined with the proposition, this tells us that there is an equivalence between $\lcomod{A(n,r)}$ and $\rmod{S(n,r)}\simeq\lmod{S(n,r)}$, and so \begin{cor} The categories $\Pol(n,r)$ and $\lmod{S(n,r)}$ are equivalent. \end{cor} \begin{rmk} Using this equivalence, we identify $\lmod{S(n,r)}$ with $\Pol(n,r)$ whenever it suits us.
\end{rmk} The following result has been proven in many different contexts, but one source is a paper of Doty and Nakano which completely categorized the semisimple Schur algebras. \begin{cor}[{\cite[Thm. 2]{doty-nakano}}]\label{cor:semisimple} If $\ch k=p$, the algebra $S_k(n,r)$ is semisimple if and only if one of the following hold: \begin{itemize} \item $p=0$ \item $p>r$ \item $p=n=2$ and $r=3$ \end{itemize} \end{cor} \subsubsection{Weights and characters} The discussion in section~\ref{subsubsec:indices} highlights an important idea: while we care about the \textit{quantities} in which each $c_{ij}$ occurs in a monomial, we are not particularly interested in the \textit{order}. Sometimes it is easier, then, to simply regard these as weak compositions: \begin{defn} Let $n$ and $r$ be integers as usual. Then denote by $[a_1,\dots,a_n]$ the \textbf{weight} corresponding to $(i_1,\dots,i_r)\in I(n,r)$ where for each $i$, \[a_i=\#\{k\in\underline r| i_k=i\}\] Denote by $\Lambda(n,r)$ the collection of all weights. \end{defn} \begin{rmk} Another way to realize $\Lambda(n,r)$ is in the presentation \[\Lambda(n,r)=\left\{[a_1,\dots,a_n]\left|\sum_i a_i=r\right.\right\},\] or as the set of compositions of $r$ into $n$ parts (allowing zeros). Yet another is to think of $\Lambda(n,r)$ as the set of $\frakS_r$ orbits in $I(n,r)$ (where now two objects are distinguished only if their ``contents'' vary). \end{rmk} Recall (c.f. \ref{def:schur-alg}) that we had that $\xi_{i,j}(c_{a,b})=1$ if and only if $(i,j)\sim(a,b)$. Because of this, it makes sense (if $\alpha$ is the weight of $i$) to write \[\xi_{\alpha}\eqdef \xi_{\alpha,\alpha}\eqdef \xi_{i,i}\] since the action is the same irrespective of the choice of representative $i$ of $\alpha.$ Notice that the weights admit a $\frakS_n$ action \[\sigma\cdot [a_1,\dots,a_n]=[a_{\sigma(1)},\dots,a_{\sigma(n)}]\] then \begin{defn} $\Lambda_+(n,r)$ is the orbit space of $\Lambda(n,r)$ under the above $\frakS(n)$ action. \end{defn} \begin{rmk} The above are called the \textbf{dominant weights} in $\Pol(n,r)$. Since each orbit $\alpha$ contains an element $[a_1,\dots,a_n]\in\alpha$ such that \[a_1\ge a_2\ge\cdots\ge a_n\] we will often identify weights with their weakly-decreasing representative. Sometimes we will refer to the dominant weight representing the orbit of $i\in I(n,r)$ as the \textbf{shape of $i$.} \end{rmk} The theory of weights in representations of $\Gamma$ closely mirrors similar decompositions in other Artinian algebras: first we identify a family of (mutually orthogonal) idempotents: \begin{lem} For $\alpha\in\Lambda(n,r)$ and $i,j\in I(n,r)$, \[\xi_\alpha\xi_{i,j}=\begin{cases} \xi_{i,j}, & i\in\alpha\\ 0, &\text{otherwise} \end{cases}\quad\text{and}\quad\xi_{i,j}\xi_\alpha=\begin{cases} \xi_{i,j}, & j\in\alpha\\ 0, &\text{otherwise} \end{cases}\] \end{lem} \begin{prf} We can compute the image of these on the $c_{a,b}\in A(n,r)$: \begin{align*} \xi_\alpha\cdot \xi_{i,j}(c_{a,b})&=\sum_k \xi_\alpha(c_{a,k})\xi_{i,j}(c_{k,b})\\ &= \xi_\alpha(c_{a,a})\xi_{i,j}(c_{a,b}) \end{align*} where above we used that $\xi_{\alpha}(c_{i,j})=0$ unless $i=j$. But \[\xi_\alpha(c_{a,a})=\begin{cases} 1, & a\in\alpha\\ 0, & \text{otherwise} \end{cases}\] so \[\xi_\alpha\cdot \xi_{i,j}(c_{a,b})=\begin{cases} \xi_{i,j}(c_{a,b}),& a\in\alpha\\ 0,& \text{otherwise} \end{cases}\] but in the case where $a\in\alpha$ and $\xi_{i,j}(c_{a,b})\ne 0$, this implies that $i\sim a$, so $i\in \alpha$. 
So finally, \[\xi_\alpha\cdot \xi_{i,j}(c_{a,b})=\begin{cases} \xi_{i,j}(c_{a,b}),& i\in\alpha\\ 0,& \text{otherwise} \end{cases}\] and since this holds for any $c_{a,b}$, the first identity is proven. A symmetric argument gives the second. \end{prf} For the next step, we decompose the identity into a sum of these idempotents: \begin{lem}\label{lem:decomp-one} We have the decomposition \[\1 = \sum_{\alpha\in\Lambda(n,r)} \xi_\alpha.\] \end{lem} \begin{prf} On the one hand, for any $c_{a,b}\in A(n,r)$, $\1(c_{a,b})=\delta_{a,b}$. On the other hand, for any $\alpha$, \[\xi_\alpha(c_{a,b})=0\] when $a\ne b$ \textit{or when $a\notin \alpha$}. Therefore when $a=b$, there is precisely one $\alpha$ (the orbit of $a=b$) such that $\xi_\alpha(c_{a,b})=1$, so putting this all together, \[\sum_{\alpha\in\Lambda(n,r)}\xi_\alpha(c_{a,b})=\delta_{a,b}\] whence these two functions are equal. \end{prf} \begin{rmk}\label{rmk-weight-spaces} Using lemma~\ref{lem:decomp-one}, we can then decompose any $V\in \Pol(n,r)$ into weight spaces: \[V=\1\cdot V=\sum_{\alpha\in\Lambda(n,r)}\xi_\alpha V\] which we will denote \[\xi_\alpha V=V^\alpha.\] \end{rmk} \begin{defn}\label{defn:character} The \textbf{formal character} of a representation $V\in \Pol(n,r)$ is a polynomial \[\Phi_V(X_1,\dots,X_n)=\sum_{\alpha\in\Lambda(n,r)}(\dim V^\alpha)X_1^{\alpha_1}\cdots X_n^{\alpha_n}=\sum_{\alpha\in\Lambda_+(n,r)}(\dim V^\alpha)m_\alpha(X_1,\dots,X_n)\] where $m_\alpha$ is the \textit{monomial symmetric polynomial} \[m_\alpha(X_1,\dots,X_n)=\sum_{\beta\in\frakS_n\cdot\alpha}X_1^{\beta_1}\cdots X_n^{\beta_n},\] the sum running over the distinct rearrangements $\beta=[\beta_1,\dots,\beta_n]$ of $\alpha$. \end{defn} \subsubsection{Irreducible representations} The irreducible representations in $\Pol(n,r)$ are given by a couple of results by some of the big names in representation theory: the case $k=\bbC$ was first proven in \cite[p.37]{schur-thesis} and then generalized in a later paper by Weyl \cite{weyl} and in work by Chevalley\footnote{Green \cite{green} mentions a paper by Serre: \textit{Groupes de Grothendieck des Sch\'emas en Groupes R\'eductifs D\'eploy\'es} \cite{serre-chevalley}, which makes mention of Chevalley's contributions in proving the existence of modules with prescribed characters. This author was unable to find Chevalley's work.}: \begin{thm}\label{thm:irreps} Fix the usual lexicographical ordering on monomials in $k[X_1,\dots,X_n]$. Let $n$ and $r$ be given integers with $n\ge 1$ and $r\ge 0$. Let $k$ be an infinite field. Then \begin{enumerate} \item For each $\lambda\in\Lambda_+(n,r)$, there exists an (absolutely) irreducible module $F_{\lambda,k}$ in $\Pol(n,r)$ whose character $\Phi_{\lambda,k}$ has leading term $X_1^{\lambda_1}\cdots X_n^{\lambda_n}$. \item Every irreducible $V\in \Pol(n,r)$ is isomorphic to $F_{\lambda,k}$ for exactly one $\lambda\in \Lambda_+(n,r)$. \end{enumerate} \end{thm} So then the problem of classifying the simple modules (the ``basic building blocks'' in the semisimple case) is completely solved for infinite fields. It remains to demonstrate a way to construct $F_{\lambda,k}$. \begin{defn} Fix some $\lambda\in \Lambda_+(n,r)$. Notice that this corresponds to a Young diagram with $r$ boxes. Fix any labeling $1,\dots,r$ of the boxes in the Young diagram corresponding to $\lambda$. Let $T$ denote the diagram for $\lambda$ along with this labeling. Let $i:\underline r\to\underline n$ be any map. Then denote by $T_i$ the \textbf{$\lambda$-tableau}, which is $T$ with the box labeled $k$ filled with the entry $i(k)\in\underline n$.
\end{defn} \begin{rmk} This notation varies slightly (but not in spirit) from the notation in Green's book. He denotes the Young diagram by $[\lambda]$ and lets $T^\lambda$ be the labelling of the boxes in $[\lambda]$--a bijection $[\lambda]\to\underline r$. \end{rmk} \begin{ex} Let $\lambda=(3,1,1)\in\Lambda_+(3,5)$. Thus $T$ is of shape \[\ydiagram{3,1,1}\] Then if we fix the left-to-right/top-to-bottom ordering of the boxes in $T$ and let $i:\{1,2,3,4,5\}\to\{1,2,3\}$ be given by $(2,1,3,3,2)$, we get the $\lambda$-tableau \[T_i=\ytableaushort{2 1 3, 3, 2}\] \end{ex} The core tool in constructing (a basis for) the irreducible modules is in the following definition: \begin{defn} Let $\lambda\in\Lambda_+(n,r)$ be some shape with a fixed labeling and let $i,j:\underline r\to\underline n$. Then the \textbf{bideterminant of $T_i$ and $T_j$} is \[(T_i:T_j)=\sum_{\sigma\in C(T)}\operatorname{sgn}(\sigma)c_{i,j\sigma}\in A_k(n,r)\] where $C(T)$ is the column stabilizer of $T$. \end{defn} This definition can be a bit difficult to unpack, so we give some examples: \begin{ex} \begin{enumerate} \item $\lambda=(2,1,0)\in \Lambda_+(3,3)$\[\ytableausetup{nosmalltableaux}\left(\ytableaushort{1 2,3}:\ytableaushort{3 1,2}\right)=\left|\begin{array}{cc} c_{13} & c_{12}\\ c_{33} & c_{32} \end{array}\right|c_{21}=(c_{13}c_{32}-c_{12}c_{33})c_{21}=c_{(1,2,3),(3,1,2)}-c_{(1,2,3),(2,1,3)}\] \item $\lambda=(n,0,\dots,0)\in\Lambda_+(m,n)$\[\left(\begin{ytableau} a_1& a_2& a_3&\none[\dots] &a_n\end{ytableau}: \begin{ytableau} b_1& b_2& b_3&\none[\dots]&b_n\end{ytableau}\right)=c_{a_1b_1}\cdots c_{a_nb_n}\] \item $\lambda=(1,\dots,1,0,\dots)\in\Lambda_+(m,n)$ where $n\le m$ \[\left(\begin{ytableau}a_1\\ a_2\\\none[\vdots]\\a_n\end{ytableau}:\begin{ytableau}b_1\\ b_2\\\none[\vdots]\\b_n\end{ytableau}\right)= \left|\begin{array}{ccc}c_{a_1b_1} & \cdots & c_{a_1b_n}\\ \vdots & \ddots & \vdots\\ c_{a_nb_1} & \cdots & c_{a_nb_n} \end{array}\right|\] \end{enumerate} \end{ex} In the following, let $l:\underline r\to\underline n$ be $(1,\dots,1,2,\dots,2,3,\dots)$ such that for any shape $\lambda$ the $\lambda$-tableau $T_l$ is \[\begin{ytableau} 1 & 1 &\none[\dots] &\none[\dots] &1\\ 2 & 2 &\none[\dots] &2\\ \none[\vdots]\\ k \end{ytableau}\ytableausetup{smalltableaux}\] with $i$ in every box on the $i^{th}$ row from the top. \begin{defn} Define, for every shape $\lambda\in\Lambda_+(n,r)$, the module \[D_{\lambda,k}=\langle(T_l:T_i)\rangle_{i\in I(n,r)}\] where $l$ is the filling defined above. \end{defn} According to \cite{green}, these modules were originally called ``Weyl modules'', while he (and we) reserve this name for the contravariant dual of these objects. To construct them, define the map \begin{equation}\label{eqn:pimap} \pi:E^{\otimes r}\to D_{\lambda, k},\qquad e_{i_1}\otimes\cdots\otimes e_{i_r}\mapsto (T_l:T_i), \end{equation} and we get objects originally defined in Carter and Lusztig's treatment of modular representations of $\GL_n$ \cite{carter-lusztig} and tweaked by Green in \cite{green}: \begin{defn} Given a shape $\lambda$, the \textbf{Weyl module of shape $\lambda$ over $k$} is $V_{\lambda, k}\eqdef N^\perp$ where \[N\eqdef\ker\pi\hookrightarrow E^{\otimes r}\to D_{\lambda,k}\] and the orthogonal complement of $N$ is taken with respect to the canonical contravariant form on $E^{\otimes r}$ that has the property $\langle e_i,e_j\rangle=\delta_{ij}$.
\end{defn} In their original paper \cite[p.218]{carter-lusztig}, Carter and Lusztig showed that these modules are, in fact, generated as $S(n,r)$-modules by a single element: \begin{thm}\label{thm:weyl-basis} Let $\lambda\in\Lambda_+(n,r)$ and $T$ the Young diagram corresponding to $\lambda$. Let $l$ be the labelling above. Then the element \[f_l=e_l\cdot\sum_{\sigma\in C(T)\subset\frakS_r}\operatorname{sgn}(\sigma)\,\sigma\] generates $V_{\lambda,k}$ as an $S(n,r)$-module. \end{thm} \begin{prf}[sketch.] We refer the reader to Green's \cite[p.46]{green} proof for the details, but the idea is as follows: he relies on an earlier result that the modules $D_{\lambda,k}$ have a basis consisting of the bideterminants \[(T_l:T_i)\] such that $T_i$ is in ``standard form'' (meaning, roughly, that its entries weakly increase along rows and strictly increase down columns). One can define a nondegenerate contravariant form \[(\cdot,\cdot):V_{\lambda,k}\times D_{\lambda,k}\to k\] by pulling back any element in $D_{\lambda,k}$ to a representative in $E^{\otimes r}$ under the map $\pi:E^{\otimes r}\to D_{\lambda,k}$. Recall that $V_{\lambda,k}$ is defined as the orthogonal complement (under the canonical form $\langle\cdot,\cdot\rangle$ on $E^{\otimes r}$) of $\ker\pi.$ This gives us that $(\cdot,\cdot)$ is indeed well-defined. From there, Green does some computation to show that one can bootstrap the independence of the $(T_l:T_i)$ to prove that the set \[\{\xi_{j,l}f_l\,|\,j\in I(n,r),\ T_j\text{ standard}\}\] forms a ($k$-)basis for $V_{\lambda,k}$, and therefore $f_l$ generates the entire module under the $S(n,r)$-action. \end{prf} \begin{lem}\label{lem:unique-maximal-submod} The modules $V_{\lambda,k}$ have a unique maximal submodule $V_{\lambda,k}^{\text{max}}$. \end{lem} \begin{prf}[{\cite[p.47]{green}}] Begin by noticing that the weight space $V_{\lambda,k}^\lambda$ is spanned by the single element $f_l$. This is because \[\xi_\lambda\cdot\xi_{j,l}f_l=\begin{cases}\xi_{j,l}f_l, & j\in\lambda\\ 0, & \text{otherwise,}\end{cases}\] and the only standard $T_j$ with $j$ of weight $\lambda$ is $T_l$ itself, so the only basis vector from the proof of thm.~\ref{thm:weyl-basis} lying in $V_{\lambda,k}^\lambda$ is $\xi_{l,l}f_l=f_l$. Since $f_l$ generates all of $V_{\lambda,k}$ as an $S(n,r)$-module, however, any proper submodule $M$ of $V_{\lambda,k}$ must be contained in $\bigoplus_{\alpha\ne\lambda}V_{\lambda,k}^\alpha$ (if not, applying $\xi_\lambda$ would put $f_l$ inside $M$). Thus the sum of all proper submodules is contained in the complement of this weight space, and is therefore proper! This sum is our $V_{\lambda,k}^\text{max}$. \end{prf} We are finally in good shape to compute the irreducible modules promised to us in thm.~\ref{thm:irreps}. We define \[F_{\lambda, k}=V_{\lambda,k}/V_{\lambda,k}^\text{max}\] where $V_{\lambda,k}^\text{max}$ is the unique maximal submodule guaranteed to us by lemma~\ref{lem:unique-maximal-submod}. It remains to show that the characters of the $F_{\lambda,k}$ have the requisite leading terms. But notice that $V^\lambda_{\lambda,k}$ is one-dimensional, so the character (cf. definition \ref{defn:character}) of $V_{\lambda, k}$ is of the form \[m_\lambda(X_1,\dots,X_n)+\sum_{\lambda\ne\alpha\in\Lambda_+(n,r)} (\dim V_{\lambda,k}^\alpha)\, m_\alpha(X_1,\dots,X_n),\] and since $V_{\lambda,k}^\text{max}\subseteq\bigoplus_{\alpha\ne\lambda}V_{\lambda,k}^\alpha$, the weight space $F_{\lambda,k}^\lambda$ of the quotient is still one-dimensional.
Therefore the character of $F_{\lambda,k}=V_{\lambda,k}/V_{\lambda,k}^{\text{max}}$ has the form \[\Phi_{F_{\lambda,k}}(X_1,\dots,X_n)=X_1^{\lambda_1}\cdots X_n^{\lambda_n}+\sum_{\lambda\ne\alpha\in\Lambda_+(n,r)}(\dim F_{\lambda,k}^\alpha)\,m_\alpha(X_1,\dots,X_n),\] and one checks that every dominant weight $\alpha\ne\lambda$ occurring in $V_{\lambda,k}$ is lexicographically smaller than $\lambda$, so the leading term (under the lexicographic ordering) is precisely what we wanted. \subsection{Explicit examples for comparison} To demonstrate the theory developed above, we carry out a computation (in a simple case) of the isomorphism classes of irreducible representations of both $S_\bbC(2,2)$ and $\frakS_2$. \subsubsection{The symmetric group on two letters} The representation theory (over $k=\bbC$) of $\frakS_2$ is as simple as it comes: of course $\frakS_2\cong \bbZ/2\bbZ$ and we know that there are $|G|$ nonisomorphic irreducible representations of an abelian group $G$ over $\bbC$. Since we are talking about a symmetric group, we can realize these as the trivial and sign representations, represented by the Young diagrams: \[\ydiagram{2}\quad\text{and}\quad\ydiagram{1,1}\] As submodules of the regular representation $k\frakS_2= k e\oplus k(1\,2)$, we can construct these as $\langle e+(1\, 2)\rangle$ (trivial representation) and $\langle e-(1\,2)\rangle$ (sign representation). \subsubsection{The Schur algebra \texorpdfstring{$S_\bbC(2,2)$}{S(2,2)}} Since $\ch\bbC=0$, corollary~\ref{cor:semisimple} implies that $S_\bbC(2,2)$ is semisimple, so it suffices to identify the irreducible submodules therein. The algebra $S=S_\bbC(2,2)$ acts on $E^{\otimes 2}\cong\bbC^2\otimes\bbC^2$, and by the dimension count above, \[\dim_\bbC S=\binom{n^2+r-1}{r}=\binom{5}{2}=10.\] The theory outlined above gives us that isomorphism types of irreducible modules are in bijection with the dominant weights $\Lambda_+(2,2)$, i.e.\ partitions of $2$ into at most two parts, meaning we have two isomorphism types: one corresponding to $\lambda_1=(1,1)$ and one corresponding to $\lambda_2=(2,0)$. Using the construction of $D_{\lambda,\bbC}$ from above, we can compute these two irreducible modules explicitly: \begin{ex}[$\mathbf{\lambda_1=(1,1)}$] In this case our shape is $(1,1)$, corresponding to the Young diagram \[\ydiagram{1,1}\] and then $D_{\lambda_1,\bbC}$ is spanned by the element \[(T_l:T_{(2,1)})=\left(\ytableaushort{1,2}:\ytableaushort{2,1}\right)=c_{12}c_{21}-c_{11}c_{22}=c_{(1,2),(2,1)}-c_{(1,2),(1,2)}\in A_\bbC(2,2)\] since all other bideterminants of this shape are zero or linearly dependent. Thus this is a one-dimensional irreducible representation. \end{ex} \begin{ex}[$\mathbf{\lambda_2=(2,0)}$] Now our shape is $(2,0)$, corresponding to the diagram \[\ydiagram{2}.\] The bideterminants here are \begin{align*} (T_l:T_{(1,1)})=\big(\ytableaushort{1 1}:\ytableaushort{1 1}\big)=c_{11}^2\\ (T_l:T_{(1,2)})=(T_l:T_{(2,1)})=c_{11}c_{12}\\ (T_l:T_{(2,2)})=c_{12}^2 \end{align*} So we have a three-dimensional irreducible representation spanned by $\langle c_{11}^2,c_{11}c_{12},c_{12}^2\rangle$. \end{ex} Since these are the only two Young diagrams of size two, these examples form a complete list of isomorphism classes of irreducible representations of $S_\bbC(2,2)$. If we prefer instead to recognize our irreducibles as submodules of $E^{\otimes 2}=(k e_1\oplus k e_2)^{\otimes 2}$ (giving us a more obvious action by our algebras), we can use the short exact sequence \[0\to N\hookrightarrow E^{\otimes 2}\twoheadrightarrow D_{\lambda,\bbC}\to 0\] to define $N=\ker\pi$, where $\pi$ is the map defined in equation (\ref{eqn:pimap}) above. Then we can compute the orthogonal complement to $N$ to get $V_{\lambda,\bbC}$.
We can compute: \[V_{\lambda_1,\bbC}=\langle e_1\otimes e_2-e_2\otimes e_1\rangle\] and \[V_{\lambda_2,\bbC}=\langle e_1\otimes e_1, \,e_1\otimes e_2+e_2\otimes e_1, \,e_2\otimes e_2\rangle.\] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \section{The Schur-Weyl Functor} From the discussion in the last section it is evident that the combinatorics behind the representation theory of $S(n,r)$ and $\frakS_r$ have some intersections in their use of Young tableaux and this connection is more than superficial. In fact, there is a functor relating the representations of these two objects in the following way: \subsection{Construction of the Schur-Weyl functor \texorpdfstring{$\calF$}{F}} Let $V\in \Pol(n,r)$ be a $S(n,r)$-representation and select any weight $\alpha\in\Lambda(n,r)$. Then the weight space (cf. rmk~\ref{rmk-weight-spaces}) \[V^\alpha=\xi_\alpha V\] becomes a $S(\alpha)\eqdef\xi_\alpha S(n,r)\xi_\alpha$-module using the action from $S(n,r)$. Now if we allow $r\le n$ and let \[\omega=(1,\dots,1,0,\dots,0)\in\Lambda(n,r)\] notice that $S(\omega)$ is spanned by the elements \[\xi_\omega\xi_{i,j}\xi_\omega,\quad i,j\in I(n,r)\] but by the multiplication rules established in the definition of $S(n,r)$, these are nonzero precisely when $i$ and $j$ are both of shape $\omega$. So then since $\xi_{i,j}=\xi_{i\sigma,j\sigma}$ for all $\sigma\in\frakS_r$, we can take as a basis of $S(\omega)$ the set \[\{\xi_{u\pi,u}|\pi\in\frakS_r\}\] where $u=(1,2,\cdots,r)\in I(n,r)$. To prove the next statement we require a computational result. \begin{lem}\label{lem:somega-mult} If $u=(1,2,\dots,r)\in I(n,r)$, then for all $\pi,\sigma\in\frakS_r$, \[\xi_{u\pi,u}\cdot \xi_{u\sigma,u}=\xi_{u\pi\sigma,u}.\] \end{lem} \begin{prf} Using the formulas for multiplication in $S(n,r)$, recall that \begin{equation} \xi_{u\pi,u}\cdot\xi_{u\sigma,u}=\sum Z_{i,j} \xi_{i,j}\label{eq:1} \end{equation} where \[Z_{i,j}=\#\{s\in I(n,r)|(u\pi,u)\sim(i,s)\text{ and }(u\sigma,u)\sim (s,j)\}.\] Then for each $i,j$, since $u=(1,2,\dots,r)$ has no stabilizer in $\frakS_r$, there is a unique $g$ such that $u\pi g=i$, meaning that $s=ug$. But then this fixes (again a unique) $h\in\frakS_r$ such that $u\sigma h=s=u g$ whence $\sigma h= g$. One computes that \[u\pi\sigma h = u\pi g=i\quad\text{and}\quad uh = j\] therefore since in the above computation $s$ was completely determined by $i$, we have \[Z_{i,j}=\left\{\begin{array}{lr} 1, & (i,j)\sim(u\pi\sigma,u)\\ 0, & \text{otherwise} \end{array}\right.\] and the result follows. \end{prf} Using this result, we prove a more obviously useful statement: \begin{lem} $S(\omega)\cong k\frakS(r)$. \end{lem} \begin{prf} Define the map $\varphi:S(\omega)\to k\frakS_r$ on the basis above to be \[\varphi (\xi_{u\pi,u})=\pi\] and extending $k$-linearly. This is a homomorphism since \[\varphi(\xi_{u\pi,u}\xi_{u\sigma,u})=\varphi(\xi_{u\pi\sigma,u})=\pi\sigma=\varphi(\xi_{u\pi,u})\varphi(\xi_{u\sigma,u})\] and it is bijective since it is bijective on the respective bases and is thus bijective as a linear map. \end{prf} The upshot of these lemmas is that one can define the \textbf{Schur-Weyl functor} \[\calF:\Pol(n,r)\to \Rep(\frakS_r)\] via the map that sends any representation $V$ to its $\omega$ weight space $V^\omega\in \lmod {S(\omega)}\simeq \Rep(\frakS_r)$. 
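Before moving on to the general framework, it may help to watch $\calF$ act in the smallest interesting case. The computation below only reuses the explicit $S_\bbC(2,2)$-modules found at the end of the previous section; the only additional input is the standard fact (under the identification $S(\omega)\cong k\frakS_r$ just made) that $\frakS_r$ acts on the $\omega$-weight space of $E^{\otimes r}$ by permuting tensor factors. \begin{ex} Take $n=r=2$ and $k=\bbC$, so that $\omega=(1,1)$ and $u=(1,2)\in I(2,2)$. The $\omega$-weight space of $E^{\otimes 2}$ is spanned by $e_1\otimes e_2$ and $e_2\otimes e_1$, and the nontrivial element of $\frakS_2$ swaps these two vectors. Applying $\calF$ to the two irreducible modules computed above gives \[\calF(V_{\lambda_1,\bbC})=\langle e_1\otimes e_2-e_2\otimes e_1\rangle\quad\text{and}\quad\calF(V_{\lambda_2,\bbC})=\langle e_1\otimes e_2+e_2\otimes e_1\rangle,\] which recover (up to isomorphism) the sign and trivial representations of $\frakS_2$ from the earlier comparison. \end{ex}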
\subsection{The general theory} The idea of the Schur functor fits into a larger context: Let $S$ be a $k$-algebra and let $e\in S$ be a (nonzero) idempotent. Then one can define a functor \[\calF:\lmod S\to\lmod {eSe}\quad\text{via}\quad V\mapsto eV.\] An important property of this functor is \begin{prop}\label{prop:F-irred} The image of an irreducible $S$-module under the functor $\calF$ above is zero or irreducible. \end{prop} \begin{prf} Let $V$ be an irreducible $S$-module with $eV\ne0$ and let $W\subseteq eV$ be any nonzero $eSe$-submodule. Then $SW$ is a nonzero $S$-submodule of $V$, so $SW=V$ by irreducibility. Hence $eV=eSW=(eSe)W\subseteq W$, which forces $W=eV$, so $\calF(V)$ is irreducible. \end{prf} Next, a discussion in Green \cite[p. 56]{green} gives us a natural thought process to follow in constructing a partial inverse to this functor. Let $\tilde\calG:\lmod{eSe}\to\lmod S$ be extension of scalars: specifically, if $M\in\lmod{eSe}$, then \[\tilde\calG(M)=Se\otimes_{eSe}M.\] This is clearly functorial and furthermore satisfies the property that \[\calF\circ\tilde\calG(M)=\calF(Se\otimes_{eSe}M)=e(Se\otimes_{eSe}M)=eSe\otimes_{eSe} M\cong e\otimes_{eSe}M\cong M\] so it is a right inverse (up to isomorphism) to $\calF$---a good candidate for our purposes. \begin{rmk} It is easy to prove the fact, which I glossed over above, that $M\cong e\otimes M$ via the $eSe$-isomorphism $m\mapsto e\otimes m$. \end{rmk} What we are really looking for, however, is a functor that sends irreducible modules to irreducibles. It can be shown that $\tilde\calG$ \textit{does not} satisfy this property, so we define \begin{defn} If $M\in\lmod S$ and $e\in S$ is an idempotent, denote by $M_{(e)}$ the largest $S$-submodule of $M$ contained in $(1-e)M$. \end{defn} \noindent which enables us to define the functor \[\calG:\lmod{eSe}\to \lmod S\quad\text{via}\quad M\mapsto \tilde\calG(M)/\tilde\calG(M)_{(e)}.\] This leads to the result: \begin{prop} If $M\in \lmod{eSe}$ is irreducible, then so is $\calG(M)$. \end{prop} \begin{prf} Let $W$ be an $S$-submodule such that \[\tilde\calG(M)_{(e)}\subseteq W\subseteq \tilde\calG(M).\] Multiplying by $e$ in the above inclusions, we get \[0=e\tilde\calG(M)_{(e)}\subseteq eW\subseteq e\tilde \calG(M)=\calF\circ\tilde\calG(M)\simeq M\] which, by the irreducibility of $M$, forces either $eW=0$ (in which case $W\subseteq\tilde\calG(M)_{(e)}$ and we are done) or else $eW=e\tilde\calG(M)$. In this latter case, we find \[\tilde\calG(M)= Se\otimes_{eSe} M=S\cdot(e\otimes M)=S\cdot\big(e\tilde\calG(M)\big)=S\cdot(eW)\subseteq W.\] Thus either $W=\tilde\calG(M)_{(e)}$ or $W=\tilde\calG(M)$, so $\calG(M)$ has no nontrivial proper submodules, so it is simple. \end{prf} \subsection{Properties of \texorpdfstring{$\calF$ and $\calG$}{F and G}} Returning to the specific case of $S=S(n,r)$ and $eSe=S(\omega)\cong k\frakS_r$, the theory developed in the last part gives us a pair of functors \[\calF:\Pol(n,r)\to \lmod{\frakS_r},\qquad \calG:\lmod{\frakS_r}\to \Pol(n,r),\] each of which preserves irreducibility (in the sense above). We also have that \begin{prop} If $M\in \Pol(n,r)$ is irreducible and if $eM\ne 0$, then $\calG\circ\calF(M)=\calG(eM)\cong M.$ \end{prop} \begin{prf} By prop.~\ref{prop:F-irred}, $eM$ is irreducible and (by assumption) nonzero. Consider the natural $S$-module map \[p:\tilde\calG(eM)=Se\otimes_{eSe}eM\to M,\qquad se\otimes m\mapsto sem.\] Its image is the nonzero submodule $SeM\subseteq M$, so $p$ is surjective by the irreducibility of $M$. Multiplying by $e$, the composite $e\tilde\calG(eM)\cong eM\to eM$ is the identity, so $p$ is injective on $e\tilde\calG(eM)$ and the kernel $K$ of $p$ satisfies $eK=0$, i.e. $K\subseteq\tilde\calG(eM)_{(e)}$. Conversely, $p(\tilde\calG(eM)_{(e)})$ is a submodule of $M$ killed by $e$, hence zero, so $\tilde\calG(eM)_{(e)}\subseteq K$. Therefore \[\calG\circ\calF(M)=\calG(eM)=\tilde\calG(eM)/\tilde\calG(eM)_{(e)}=\tilde\calG(eM)/K\cong M.\]
\end{prf} \subsection{In positive characteristic} Schur's classical work dealt only with the case when $k$ is a field of characteristic zero. In Aquilino and Reischuk's paper \cite{aquilino-reischuk} on the monoidal structure of $\lmod{S(n,d)}$, the authors mention that, in general, \[\lmod{S(n,d)}\not\cong\lmod{\frakS_d}.\] To fix this problem, the authors restrict attention to suitably ``nicely behaved'' subcategories on each side. Let $M^\lambda$ denote the $\lambda\in\Lambda(n,d)$ weight space of $E^{\otimes d}=(k^n)^{\otimes d}$. Then one can define \begin{defn} Let $M=\{M^\lambda|\lambda\in\Lambda(n,d)\}$ and let the category $\mathbf{add}(M)$ be the full subcategory of $\lmod{\frakS_d}$ consisting of modules that are summands of finite direct sums of weight modules $M^\lambda\in M$. \end{defn} One can define an analogous subcategory $\mathbf{add}(S(n,d))$, and the usual Schur-Weyl functor \[\calF(M)=\xi_\omega M=M^\omega\] restricts to an equivalence between the categories $\mathbf{add}(M)$ and $\mathbf{add}(S(n,d)).$ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \section{Strict polynomial functors} The theory of strict polynomial functors has its genesis in the idea of \textit{polynomial maps between vector spaces}, or equivalently morphisms between the affine schemes they represent. The category of vector spaces with these polynomial maps---and more specifically, the representation category associated to it---gives the category $\Rep \Gamma^d_k$ of strict polynomial functors. This category was originally defined by Friedlander and Suslin in \cite{friedlander-suslin}, where the authors showed that the category $\lmod{S(n,r)}$ is equivalent to it, introducing the language of polynomial functors as a way to understand the structure of representations of the Schur algebras. This program was carried further by Krause \cite{krause-strict-poly-func} and his students Aquilino and Reischuk \cite{aquilino-reischuk}. In the former, Krause identifies projective generators $\Gamma^{d,V}$ for $\Rep\Gamma^d_k$ and defines the tensor product by defining it for projectives and taking the appropriate colimits. In the latter paper, the construction is further elucidated and it is proven that the Schur-Weyl functor $\calF$ is monoidal. \subsection{Polynomial maps} Let $V,W$ be vector spaces over a field $k$. There are many equivalent formulations of polynomial maps between such spaces, but one that the author of this paper finds particularly motivating is the scheme-theoretic one: \begin{defn}\label{defn:poly-maps} Let $V,W$ be as above. Then the set of \textbf{polynomial maps from $V$ to $W$} is defined to be \[\Hom_\text{Pol}(V,W)\eqdef \Hom_{\Sch/k}(V,W).\] \end{defn} To make sense of this definition, one recalls that every $V\in\Vectk$ corresponds to an affine $k$-scheme $\Spec S^\ast(V^\vee)$, whose functor of points is $A\mapsto V\otimes_k A$ (and which we, through an abuse of notation, again denote $V$); it is represented by the symmetric algebra of the dual of $V$. Thus the polynomial maps are precisely the (regular) scheme maps between these objects in their algebro-geometric realizations.
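Before passing to the linear-algebra reformulation below, here is a minimal example unpacking definition~\ref{defn:poly-maps}; the variables $x$ and $y$ are simply chosen names for basis vectors of $V^\vee$ and $W^\vee$. \begin{ex} Let $V=W=k$, viewed as the affine line. Then $S^\ast(V^\vee)\cong k[x]$ and $S^\ast(W^\vee)\cong k[y]$, so a scheme map $V\to W$ is a $k$-algebra map $k[y]\to k[x]$, i.e.\ a choice of polynomial $p(x)\in k[x]$; under the identifications used below it corresponds to the element $1\otimes p(x)\in W\otimes S^\ast(V^\vee)$. On points it is simply the function $a\mapsto p(a)$, and it is homogeneous of degree $d$ in the sense of definition~\ref{def:homog-poly-map} below precisely when $p$ is a scalar multiple of $x^d$. \end{ex}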
For reasons that will become apparent shortly, it is easier to identify the polynomial maps with elements of a vector space in the following way: \begin{defn} If $V,W\in\Vectk$ are finite dimensional, a \textbf{polynomial map $f:V\to W$} can be alternatively defined as an element \[f\in W\otimes S^\ast(V^\vee)\cong\Hom_{\Sch/k}(V,W)\] through the identifications above. \end{defn} \begin{rmk} In Friedlander and Suslin's original paper, they take this to be the first definition of a polynomial map. That this agrees with the geometric definition (assuming that $V$ and $W$ are finite dimensional) follows from the following series of isomorphisms: \begin{align*} \Hom_\text{Pol}(V,W)&\eqdef\Hom_{\Sch/k}(V,W)\\ &\simeq\Hom_{\Alg_k}(S^\ast(W^\vee),S^\ast(V^\vee))\\ &\simeq\Hom_{k}(W^\vee,S^\ast(V^\vee))\\ &\simeq W\otimes S^\ast(V^\vee) \end{align*} where we used above properties of affine schemes and standard facts of the linear algebra of finite dimensional vector spaces, as well as the fact that a $k$-algebra map out of $S^\ast(W^\vee)$ is determined uniquely by its restriction to $W^\vee$. \end{rmk} The upshot of this seemingly more \textit{ad hoc} definition is that, while it introduces the restriction of finite dimensionality (which will suffice for our purposes anyway), it makes the following idea much simpler to state: \begin{defn}\label{def:homog-poly-map} Let $V$ and $W$ be vector spaces. Then a map $f\in \Hom_\text{Pol}(V,W)$ is called \textbf{homogeneous degree $d$} if it corresponds (under the isomorphisms above) to an element \[f\in W\otimes S^d(V^\vee).\] \end{defn} This is a tangible and sensible way to define a homogeneous degree $d$ map; it is far less obvious how to phrase the same condition directly in terms of the corresponding maps of schemes. We will see in the next subsection other ways to define this notion that may appeal more to representation theorists. \begin{ex} Here are some examples of polynomial maps: \begin{itemize} \item The identity (scheme) map $\id:V\to V$ is a (homogeneous degree 1) polynomial map. This corresponds to the element \[\sum_{i=1}^n v_i\otimes v_i^\vee\in V\otimes S^\ast(V^\vee)\] where $v_1,\dots,v_n$ is a basis for $V$. \item If $V=\langle v_1,\dots,v_n\rangle$ and $W=\langle w_1,\dots,w_m\rangle$ with $m\le n$, the element \[\sum_{i=1}^m w_i\otimes (v_i^\vee)^2\in W\otimes S^2(V^\vee)\] gives rise to the algebra map $S^\ast(W^\vee)\to S^\ast(V^\vee)$ determined by $w_i^\vee\mapsto (v_i^\vee)^2$, which corresponds to a homogeneous degree 2 polynomial (scheme) map $V\to W$; on points it sends $\sum_i a_iv_i\mapsto \sum_i a_i^2w_i$. \end{itemize} \end{ex} \subsection{The category \texorpdfstring{$\calP_k$}{Pk} of strict polynomial functors} Before we define these categories we should describe the objects in question! \begin{defn} A \textbf{strict polynomial functor} is a functor $T:\Vect_k\to \Vect_k$ such that for any $V,W\in\Vect_k$, the map on $\Hom$s \[T_{V,W}:\Hom_k(V,W)\to \Hom_k(T(V),T(W))\] is a polynomial map. That is, \[T_{V,W}\in\Hom_\text{Pol}\big(\Hom_k(V,W), \Hom_k(T(V),T(W))\big)\] \end{defn} Earlier I promised that we would have a more representation-theoretic interpretation of the homogeneous degree of a strict polynomial functor. I am nothing if I am not true to my word: \begin{lem}[Lem. 2.2 in \cite{friedlander-suslin}] Let $T$ be a strict polynomial functor and let $n\ge 0$ be an integer.
Then the following conditions are equivalent: \begin{enumerate} \item For any $V\in\Vectk$, any field extension $k'/k$ and any $0\ne\lambda\in k'$, the $k'$-linear map $T_{k'}(\lambda\cdot 1_{V_{k'}})\in\End_{k'}(T(V)_{k'})$ coincides with $\lambda^n1_{T(V)_{k'}}$. \item For any $V\in\Vectk$, $n$ is the only weight of the representation of the algebraic group $\Gm$ in $T(V)$ obtained by applying $T$ to the evident representation of $\Gm$ in $V$. \item For any $V,W\in\Vectk$, the polynomial map \[T_{V,W}:\Hom_k(V,W)\to \Hom_k(T(V),T(W))\] is homogeneous of degree $n$ (in the sense of \ref{def:homog-poly-map}). \end{enumerate} \end{lem} %\begin{prf} % \color{red} sketch out ideas here. Maybe just important ones. %\end{prf} \begin{defn} The category $\calP_d$ is the full subcategory \[\calP_d\subset\Func(\Vectk,\Vectk)\] whose objects are the \textbf{strict polynomial functors of degree $d$.} \end{defn} We refer the reader to \cite[Thm. 3.2]{friedlander-suslin} for a proof of the following fact: \begin{thm}\label{thm:FS-equiv} Let $n\ge d$. Then the map \[\Psi:\calP_d\to \lmod {S(n,d)}\] given by evaluation at $k^n$: \[T\mapsto T(k^n)\] is an equivalence of categories with quasi-inverse \[M\mapsto\Gamma^{d,n}\otimes_{S(n,d)}M\] where $\Gamma^{d,n}=\Gamma^d\circ\Hom_k(k^n,-)$ (c.f. \ref{defn:div-powers} below). \end{thm} The important idea in this proof is that, for any polynomial functor $T$ and any finite-dimensional $V,W\in\Vectk$, we get \begin{align*}T_{VW}&\in\Hom(T(V),T(W))\otimes S^d(\Hom(V,W)^\vee)\\ &\cong\Hom(S^d(\Hom(V,W)^\vee)^\vee,\Hom(T(V),T(W)))\\ &\cong\Hom(S^d(\Hom(V,W)^\vee)^\vee\otimes T(V),T(W)) \end{align*} and by using that $\Gamma^d(X)\cong S^d(X^\vee)^\vee\eqdef (S^d)^\sharp(X)$ and letting $V=W=k^n$, we can identify a canonical map \[T_{k^n\,k^n}:\Gamma^d(\End(k^n))\otimes T(k^n)\to T(k^n)\] which gives us an action of $\Gamma^d(\End(k^n))$ on $T(k^n)$ and one can see without too much trouble that \[\Gamma^d(\End(k^n))\cong S(n,d).\] The rest of the proof is showing that these maps do what we want them to do. \subsection{Strict polynomial functors... again} Just when you thought you had enough categories to consider, Krause developed a new category that more succinctly captures the stucture of homogeneous degree $d$ polynomial maps: there the author changes the domain of these functors to encode the desired properties into the functors, rather than take a subcategory of objects satisfying a condition (which is inherently more difficult to work with). \begin{defn}\label{defn:div-powers} When $k$ is any commutative ring, one can define the category $P_k\subseteq \lmod{k}$ as the full subcategory of finitely-generated projective $k$-modules. In this paper, we require that $k$ is an infinite field. In this case, $P_k=\Vect_k$, but we use the former notation so that it aligns more closely with Krause's work. Define $\Gamma^d P_k$ to be the category of \textbf{divided powers}---the objects are the same as those of $P_k$, but such that \[\Hom_{\Gamma^dP_k}(V,W)=\Gamma^d\Hom_{P_k}(V,W)\] where $\Gamma^d X=(X^{\otimes d})^{\frakS_d}$ denotes the \textbf{$d^{\text{th}}$} divided powers of the vector space $X$. 
Finally, as a matter of notation, let \[\Rep\Gamma^d_k=\Rep\Gamma^dP_k=\Func(\Gamma^dP_k,\lmod k)\] which we (suggestively) call the \textbf{category of homogeneous degree $d$ strict polynomial functors.} \end{defn} \begin{rmk}\label{rmk:action} Of course since $P_k=\Vectk$, an element \[T\in\Rep\Gamma^d_k=\Func(\Gamma^d\Vectk,\Vectk),\] is a functor that, on objects, is a map $\Vectk\to \Vectk$ and on morphisms is of the form \[T_{VW}:\Hom_{\Gamma^d\Vectk}(V,W)=\Gamma^d\Hom_k(V,W)\to \Hom_k(T(V),T(W))\] which, leveraging $\otimes$-$\Hom$ adjunction, gives us a map \[\Gamma^d(V,W)\otimes T(V)\to T(W)\] just as we got in the discussion following thm.~\ref{thm:FS-equiv}. \end{rmk} Using the idea in the last remark, Krause proves that there is another equivalence of categories: \begin{thm}[{\cite[Thm. 2.10]{krause-strict-poly-func}}] Let $d,n$ be positive integers. Then evaluation at $k^n$ induces a functor \[\Rep\Gamma^d_k\to \lmod{S(n,d)}\] which is an equivalence of categories when $n\ge d$. \end{thm} The key idea of this proof is to restrict attention to small projective generators of $\Rep\Gamma^d_k$. A class of these are the weight spaces $\Gamma^\lambda$ of the object $\Gamma^{d,k^n}$. Then it just remains to see that \[\End_{\Gamma^d_k}(\Gamma^{d,k^n})\cong S_k(n,d)\] and the result follows. \subsection{The monoidal structure on \texorpdfstring{$\Rep\Gamma^d_k$}{Rep Gdk}} One of the upshots of Krause's reformulation of strict polynomial functors is that it admits a more obvious monoidal structure. His construction of the tensor product on $\Rep\Gamma_k^d$ takes the following tack: notice that the Yoneda embedding is a map \[y:(\Gamma^dP_k)\op\to \Rep\Gamma^d_k\] sending each object $V\mapsto \Hom_{\Gamma^dP_k}(V,-)$. Furthermore, the embedding $y$ is dense! \begin{lem}\label{lem:yoneda-dense} Given a small category $\calC$, let $y$ be the Yoneda embedding \[y:(\calC)\to \Func(\calC\op,\Set)=\PreSh(\calC).\] Then every element in $\PreSh(\calC)$ is (in a canonical way) a colimit of elements in the image of $y$. That is, for some collection of $C_i\in \calC$, \[X=\colim_{\longrightarrow i}y(C_i)\] \end{lem} To prove this lemma, let us remind you of the construction called the \textbf{category of elements} of a functor $\calF:\calC\op\to \Set$. It elements are pairs $(C,x)$ where $C\in\calC$ and $x\in \calF(C)$ is a point. Morphisms between two objects \[f:(C,x)\to (C',y)\] are honest morphisms $f:C\to C'$ in $\calC$ such that the set morphism \[\calF(f):\calF(C')\to \calF(C)\] has the property that \[\calF(f)(y)=x.\] This category gives us a way to work with elements of a category ``locally'' even if the category $\calC$ is not concrete. \begin{prf}[of \ref{lem:yoneda-dense}] The setup (but not the details) for following proof comes from one in \textit{Sheaves in Geometry and Logic} \cite[41-43]{maclane-moerdijk}. Define the functor \[R:\PreSh(\calC)\to \PreSh(\calC)\quad\text{via}\quad E\mapsto \Hom_{\hat\calC}(y(-),E).\] Define also the opposing functor \[L:\PreSh(\calC)\to \PreSh(\calC)\quad\text{via}\quad F\mapsto \colim \calD_F\] where $\calD_F$ is the diagram \[\int_\calC F\xrightarrow{\pi_\calC}\calC\xrightarrow{y}\hat\calC\] (this is makes sense since $\Set$ is cocomplete). Now we claim that $L\ladjointto R$ are a pair of adjoint functors. To prove this, it suffices to show that \[\Hom_{\PreSh(\calC)}(F,R(E))\cong \Hom_{\PreSh(\calC)}(L(F),E)\] for all $F,E\in\PreSh(\calC)$. 
But notice that maps from the colimit of a diagram to $E$ are in bijection with cones under the diagram (i.e. cocones) with nadir $E$ by the universal property of colimits! So \[\Hom(L(F),E)=\Hom(\colim\calD_F,E)=\operatorname{cocone}(\calD_F,E)\] where an element of $\operatorname{cocone}(\calD_F,E)$ is a collection of maps $(\varphi_{(C,x)})_{(C,x)\in\int_\calC F}$ such that for all morphisms $\alpha:(C',x')\to (C,x)$ in $\int_\calC F$, the following diagram commutes \begin{center} \begin{tikzcd} \calD_F(C,x)\ar[rr,"\calD_F(\alpha)"]\ar[swap,dr,"\varphi_{(C,x)}"] & &\calD_F(C',y)\ar[dl,"\varphi_{(C',y)}"]\\ & E & \end{tikzcd} \end{center} Using these maps, we can construct a natural transformation $\eta:F\to \Hom(y(-),E)$ in the following way: for each $C\in\calC$, let \[\eta_C:F(C)\to \Hom(y(C),E)\quad\text{such that}\quad \eta_C(x)=\varphi_{(C,x)}\in\Hom(y(C),E).\] This assembles to an honest natural transformation since for each $C,C'\in\calC$, and morphism $f:C\to C'$, we have a diagram \begin{center} \begin{tikzcd} F(C)\ar[r,"x\mapsto \varphi_{(C,x)}"] & \Hom(y(C),E)\\ F(C')\ar[u,"F(f)"]\ar[r,"x'\mapsto \varphi_{(C',x')}",swap] & \Hom(y(C'),E)\ar[u,"{\Hom(y(f),E)}",swap] \end{tikzcd} \end{center} which commutes since (by the commutativity of the colimit diagram above), \[\varphi_{(C,x)}=\varphi_{(C',x')}\circ\Hom(-,\alpha|_C)\] where $\alpha|_C:C\to C'$ denotes the underlying map in $\calC$ (instead of in $\int_\calC F$). Then fixing $x'\in F(C')$---and therefore $x=F(f)(x')\in F(C)$---we can see that naturality of $\eta$ means that \[\Hom(y(f),E)\circ\eta_{C'}(x')=\Hom(y(f),E)(\varphi_{(C',x')})=\varphi_{(C',x')}\circ y(f)\] and (continuing in the other direction) \[\eta_C\circ F(f)(x')=\varphi_{(C,x)}\] must be the same. But in this case the map $f:C\to C'$ lifts to a map $\hat f:(C',x')\to (C,x)$ in $\int_\calC F$, and $\hat f|_{C}=f$ we get that the equality of these two expressions is precisely the compatibility condition of the structural morphisms of the cocone. Thus we have showed that there is a well-defined map \[\Psi_{E,F}:\operatorname{cocone}(\calD_F,E)\to\Hom(F,R(E))\] since a natural transformation is defined by its structural maps and a cocone by its legs, this map is injective. It is surjective because for every $\eta:F\to R(E)$, we can define legs for a cocone: \[\varphi_{(C,x)}=\eta_C(x)\in\Hom(y(C),E)=\Hom(\calD_F(C,x),E).\] Next, we aim to show is natural in $F$ and $E$. If $\epsilon:E\to E'$ is a morphism, we have the diagram \begin{center} \begin{tikzcd} \operatorname{cocone}(\calD_F,E)\ar[d,swap,"(\varphi_a)\mapsto(\epsilon\circ\varphi_a)"]\ar[r,"\Psi_{E.F}"] &\Hom(F,R(E))\ar[d,"{\Hom(F,R(\epsilon))}"] \\ \operatorname{cocone}(\calD_F,E')\ar[r,swap,"\Psi_{E',F}"] & \Hom(F,R(E')) \end{tikzcd} \end{center} But tracing along the bottom left, a cocone $(\varphi_a)_a$ under $\calD_F$ with nadir $E$ maps to the cocone $(\epsilon\circ\varphi_a)_a$ with nadir $E'$. Under the map $\Psi_{E',F}$ just defined, this has as its image the natural transformation \[\eta:F\to \Hom(-,E')\quad\text{via}\quad \eta_{C}(x)=\varphi_{(C,x)}\circ\epsilon\in\Hom(C,E')\] Proceeding along the top right, the same cocone with nadir $E$ is mapped to the natural transformation \[\hat\eta:F\to \Hom(-,E)\quad\text{via}\quad \hat\eta_{C}(x)=\varphi_{(C,x)}\in\Hom(C,E)\] which is then mapped to $\varphi_{(C,x)}\circ\epsilon\in\Hom(C,E')$. This gives us naturality in $E$. To show naturality in $F$, let $\beta:F\to F'$ be a natural map between presheaves. 
Then this induces a map \[\int_\calC\beta:\int_\calC F\to\int_\calC F'\quad\text{via}\quad (C,x)\mapsto (C,\beta_C(x))\] which, in turn, induces a map between diagrams \[\calD_\beta:\calD_{F'}\to \calD_{F}\] where, on points, this is the map (if $x\in F(C)$) \[\left(\calD_\beta(\calD_{F'})\right)_C(C,x)=(\calD_{F'})_C(C,\beta_C(x))\] which, in turn, induces the map \[\hat\beta:\operatorname{cocone}(\calD_{F'},E)\to \operatorname{cocone}(\calD_{F},E)\] such that, if $(\varphi_{(C,x)})_{\int_\calC F'}$ is a cocone under $\calD_{F'}$ with nadir $E$, the image $\rho=\hat\beta(\varphi_{(C,x)})$ is such that \[\rho_{(C,x)}=\varphi_{(C,\beta_C(x))}.\]
So naturality in $F$ is equivalent to the commutativity of
\begin{center} \begin{tikzcd} \operatorname{cocone}(\calD_F,E)\ar[r,"\Psi_{E,F}"] & \Hom(F, R(E))\\ \operatorname{cocone}(\calD_{F'},E)\ar[u,"\hat\beta"]\ar[swap,r,"\Psi_{E,F'}"] & \Hom(F',R(E))\ar[u,"{\Hom(\beta,R(E))}",swap] \end{tikzcd} \end{center}
which follows from the statement that the two natural transformations below take the same values: \[\left(\Psi_{E,F}\circ\hat\beta(\varphi_{\alpha})\right)_C(x)=\varphi_{(C,\beta_C(x))}\] and \[\left(\Hom(\beta,R(E))\circ\Psi_{E,F'}(\varphi_{\alpha})\right)_C(x)=(\Psi_{E,F'}(\varphi_{\alpha})\circ\beta)_C(x)=(\Psi_{E,F'}(\varphi_{(C,x)}))_C(\beta_C(x))=\varphi_{(C,\beta_C(x))}\] This completes the proof that $L\ladjointto R$.
But by the Yoneda lemma, we know that \[R(E)(C)=\Hom(y(C),E)\cong E(C)\] which implies that $R$ is naturally isomorphic to $\id_{\PreSh(\calC)}$. But adjoints, when they exist, are unique! Therefore $L\simeq\id_{\PreSh(\calC)}$, or in other words for all $F\in\PreSh(\calC)$, \[F=\id_{\PreSh(\calC)}(F)\simeq L(F)=\colim \calD_F=\colim_i \Hom(-,C_i)\] proving the result. \end{prf}
Notice that since \[\Hom_\calC(-,X)=\Hom_{\calC\op}(X,-)\] we can replace $\calC$ with $\calC\op$ in the above argument and get that any functor in $\Func(\calC,\Set)$ is a colimit of (covariant) representable functors of the form $\Hom(C_i,-)$. Thus \textbf{all} the elements in our category $\Rep \Gamma^d_k$ can be written as a colimit of elements of the form \[\Gamma^{d,V}\eqdef\Hom_{\Gamma^dP_k}(V,-).\]
Let $0\to X\to Y\to Z\to 0$ be an exact sequence of elements in $\Rep\Gamma^d_k$. Then, evaluating at $V$ (exactness in $\Rep\Gamma^d_k$ is computed pointwise) and applying the Yoneda isomorphism $\Hom(\Gamma^{d,V},X)\simeq X(V)$, we get that \[0\to X(V)\to Y(V)\to Z(V)\to 0\] is exact, whence \[0\to \Hom(\Gamma^{d,V},X)\to \Hom(\Gamma^{d,V},Y)\to\Hom(\Gamma^{d,V},Z)\to 0\] is as well. This proves the following fact: \begin{lem} For all $V\in\Gamma^dP_k$, $\Gamma^{d,V}$ is a projective object.
\end{lem}
From these reasonably simple objects, one defines (letting $\Gamma^d_k=\Rep\Gamma^d_k$ in what follows) \[\Gamma^{d,V}\otimes_{\Gamma^d_k}\Gamma^{d,W}\eqdef\Gamma^{d,V\otimes W}\] and leveraging the facts above, for each $Y\in\Rep\Gamma^d_k$, \[\Gamma^{d,V}\otimes_{\Gamma^d_k} Y\eqdef\colim_{\Gamma^{d,W}\to Y}\Gamma^{d,V\otimes W}\] and finally for each $X\in \Rep\Gamma^d_k$, \[X\otimes_{\Gamma^d_k} Y\eqdef\colim_{\Gamma^{d,V}\to X}\Gamma^{d,V}\otimes Y.\] One can similarly define internal hom: \[\iHom_{\Gamma^d_k}(X,Y)\eqdef \lim_{\Gamma^{d,V}\to X}\colim_{\Gamma^{d,W}\to Y}\Gamma^{d,\Hom(V,W)}\] which in \cite[prop 2.4]{krause-strict-poly-func} is shown to satisfy the usual adjunction:
\begin{prop}[Krause] If $X,Y,Z\in\Rep\Gamma^d_k$, \[\Hom_{\Gamma^d_k}(X\otimes_{\Gamma^d_k} Y,Z)\cong\Hom_{\Gamma^d_k}(X,\iHom_{\Gamma^d_k}(Y,Z))\] \end{prop}
\subsection{Monoidicity of the Schur-Weyl functor \texorpdfstring{$\calF$}{F}}
In \cite{aquilino-reischuk}, the authors show that this is the ``correct'' monoidal structure. This is summed up in the primary result of their paper:
\begin{thm}[{\cite[thm. 4.4]{aquilino-reischuk}}] The functor \[\calF=\Hom(\Gamma^\omega,-):\Rep\Gamma^d_k\to \lmod{k\frakS_d}\] preserves the monoidal structure defined on strict polynomial functors, i.e. \[\calF(X\otimes_{\Gamma_k^d}Y)\cong\calF(X)\otimes_k\calF(Y)\] for all $X$ and $Y$ and if $\1$ is the tensor unit, \[\calF(\1_{\Rep\Gamma^d_k})=\1_{k\frakS_d}.\] \end{thm}
The key observation in their proof of this result is that strict polynomial functors can be computed as limits of representable presheaves where the representing objects are \textit{free}. This is believable enough if we (as they do) allow $k$ to be any commutative ring. Since we are only interested in the case when $k$ is a field, however, we need make no such reduction. Then a combinatorial argument connects weights in $\Lambda(mn,d)$ to the collection of all matrices $A^\lambda_\mu$ with $\lambda\in\Lambda(n,d)$ and $\mu\in\Lambda(m,d)$ such that the $i^{th}$ column sums to $\lambda_i$ and the $j^{th}$ row sums to $\mu_j$. Then we observe that \[\Hom(\Gamma^\omega,\Gamma^{d,n}\otimes\Gamma^{d,m})\cong\bigoplus_{\lambda\in\Lambda(n,d),\,\mu\in\Lambda(m,d)}\bigoplus_{A\in A^\lambda_\mu}\Hom(\Gamma^\omega,\Gamma^A)=\bigoplus_{\lambda,\mu}\bigoplus_A{^\lambda M}\] and by a decomposition result (their lemma 3.1), we then have that \[\bigoplus_A {^\lambda M}\cong{^\lambda M}\otimes_k{^\mu M}\] which is the crucial step in separating this into a tensor product of $\frakS_d$ modules.
%I like this idea. I will try to do it, if time allows.
%\subsection{A dictionary}
%{\color{red} Spell out how one can translate between the three different categories: irreducibles and tensor structure.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Tensor products in the derived category \texorpdfstring{$\Db(S(n,r))$}{DbS(n,r)}}
(Co)homology is a powerful tool in analyzing the composition of objects and their actions. This is evidenced by the sheer number of cohomology theories that are in use across many different fields. Homological computations are, in their nature, lossy---one is reducing the object to its signature and then we play the game of gleaning what we can from the structure that remains.
It is a well-known fact of homological algebra that the cohomology of an $R$-module is independent of the choice of resolution by projective objects. Because of this fact, if we are interested in the homological properties of modules over a ring $R$, it is useful to look not at the (abelian) category $\lmod R$ itself, but rather at its ``homologically-distilled'' analog, $\D(R)$. Throughout this section we will be relying on Weibel \cite{weibel} and his discussion on chain, homotopy, and derived categories.
\subsection{Derived categories}
In what follows, let $\calA$ denote any abelian category. If it helps, the reader can relatively safely assume that $\calA$ is $\lmod R$, the category of (left) $R$-modules.\footnote{That one can do this is the subject of the \textit{Freyd-Mitchell embedding theorem}, which tells us that any small abelian category can be embedded faithfully in $\lmod R$ for some ring $R$. Even if $\calA$ isn't small (a set), one can study it via this embedding by restricting attention to small abelian subcategories.} Denote by $\Ch(\calA)$ (or $\Ch(R)$ when $\calA=\lmod R$) the category of chain complexes $(C_\bullet,\partial)$ such that each $C_i\in\calA$ and $\partial\circ\partial=0$. Let $\Chb(\calA)$ denote the full subcategory of $\Ch(\calA)$ consisting of the complexes that are bounded---that is, $C_i=0$ for all $i>N$ and $i<M$ for some $N,M$.
Recall that a chain complex morphism\footnote{A morphism that commutes with the differential.} $f_\bullet:C_\bullet\to D_\bullet$ is \textbf{chain nullhomotopic} in $\Ch(\calA)$ if there exist maps $\sigma_i:C_i\to D_{i+1}$ such that we have the following (non-commuting) diagram:
\begin{figure}[h] \centering \begin{tikzcd} \cdots\ar[r,"\partial"] &C_{n+1}\ar[d,"f_{n+1}",swap]\ar[r,"\partial"] & C_n\ar[dl,"\sigma_n"]\ar[r,"\partial"]\ar[d,"f_n"] & C_{n-1}\ar[dl,"\sigma_{n-1}"]\ar[d,"f_{n-1}"]\ar[r,"\partial"] & \cdots\\ \cdots\ar[r,"\partial",swap] &D_{n+1}\ar[r,"\partial",swap] & D_n\ar[r,"\partial",swap] & D_{n-1}\ar[r,"\partial",swap] & \cdots \end{tikzcd} \end{figure}
\noindent with the condition that (for all $n$) \[f_n=\partial\circ\sigma_n+\sigma_{n-1}\circ \partial.\]
\begin{defn} Two chain maps $f,g:C_\bullet\to D_\bullet$ in $\Ch(\calA)$ are said to be \textbf{chain homotopic} if their difference is chain nullhomotopic. That is, if there exist maps $\sigma_i:C_i\to D_{i+1}$ such that \[f_n-g_n=\partial\circ\sigma_n+\sigma_{n-1}\circ\partial.\] \end{defn}
A well-known lemma is the following:
\begin{lem} If $f$ and $g$ are chain homotopic maps, then they induce the same maps on (co)homology. \end{lem}
Chain homotopies play the role of \textbf{homotopy equivalences} (keeping in mind the example of topological spaces with simplicial homology for intuition) and the fact that we have nontrivial homotopy equivalences is the first indication that we aren't in the right category to study homology. A natural thing to do, then, is to attempt to pass to a category where we identify equivalent morphisms.
\begin{defn} Given the category $\Ch(\calA)$, we define the \textbf{homotopy category} $\K(\calA)$ to be the category whose objects are the same as those in $\Ch(\calA)$ and whose morphisms between any two chains $C_\bullet$ and $D_\bullet$ are \[\Hom_{\K(\calA)}(C_\bullet,D_\bullet)\eqdef \Hom_{\Ch(\calA)}(C_\bullet,D_\bullet)/H\] where $H$ consists of all chain nullhomotopic maps from $C_\bullet$ to $D_\bullet$.
\end{defn}
\begin{rmk} We can analogously define the category $\K^\text{b}(\calA)$ that is formed through the same process after first restricting to the subcategory $\Chb(\calA)$ of bounded chain complexes. \end{rmk}
The upshot here is that we are now closer to our (until now only implicit) goal: to find a category that captures the information in $\Ch(\calA)$ \textit{up to quasi-isomorphism.} One can show, however, that in general there are quasi-isomorphisms that are not homotopic to the identity map! So our job is only partially complete.
A result of great importance to reaching our goal is that $\K(\calA)$ is \textit{triangulated} with distinguished triangles given by the mapping cones \[A\xrightarrow{u} B\to \cone(u)\to A[1]\] and all triangles equivalent to them\footnote{We say a triangle $X\to Y\to Z\to X[1]$ is equivalent to a mapping cone if $X,Y,Z\in\K(\calA)$ and there exist isomorphisms (equivalently, homotopy equivalences when considered as maps in $\Ch(\calA)$) $f,g,h$ such that the diagram in fig.~\ref{fig:tri-equiv} commutes (for some $A,B$ and $u$): }
\begin{figure} \centering \begin{tikzcd} X\ar[r]\ar[d,"f"] & Y\ar[r]\ar[d,"g"] & Z\ar[r]\ar[d,"h"] & X[1]\ar[d,"{f[1]}"]\\ A\ar[r,"u"] & B\ar[r] & \cone(u)\ar[r] & A[1] \end{tikzcd} \caption{Equivalence of triangles in $\K(\calA)$} \label{fig:tri-equiv} \end{figure}
The importance of triangulated categories cannot be overstated (it is critical, e.g. in the construction of the Balmer spectrum in sec.~\ref{sec:ttc}). Many people, including Verdier (\cite{verdier-thesis}) and Neeman (\cite{neeman-duality}, \cite{neeman-book}), have put considerable time and effort into developing a framework within the context of triangulated categories to enable examination and manipulation. One of the tools that we will now use is \textit{Verdier localization}. It closely mirrors the idea of localization of a ring at a multiplicative subset (a parallel that will be extended further in the following section).
\begin{defn} Given a triangulated category $\calT$, a \textbf{multiplicative system} $S$ in $\calT$ is a collection of morphisms in $\calT$ satisfying the following properties:
\begin{itemize} \item If $s,s'\in S$, so are $s\circ s'$ and $s'\circ s$ (whenever either of these makes sense). \item $\id_X\in S$ for all $X\in\calT$ \item (\textbf{Ore condition}) If $t\in S$ with $t:Z\to Y$ then for every $g:X\to Y$ there are maps $f$ and $s$ (with $s\in S$) such that the diagram in figure \ref{fig:fractions} commutes. The symmetric statement also holds. \item (\textbf{Cancellation}) If $f,g:X\to Y$ are two morphisms, then there is an $s\in S$ with $sf=sg$ if and only if there is a $t\in S$ with $ft=gt$. \end{itemize} \end{defn}
\begin{figure} \centering \begin{tikzcd} W\ar[d,"s"]\ar[r,"f"] & Z\ar[d,"t"]\\ X\ar[r,"g"] & Y \end{tikzcd} \caption{Ore condition in a multiplicative system} \label{fig:fractions} \end{figure}
\begin{rmk} With the foresight that we will eventually be inverting the elements in $S$, the Ore condition translates into the following idea: for all $g:X\to Y$ and $t:Z\to Y$ in $S$, \[t^{-1}g=fs^{-1}\] for some maps $s\in S$ and $f$. This fixes the inherent noncommutativity of function composition. \end{rmk}
\subsubsection{The calculus of fractions}
We can finally construct the Verdier localization of $\K(\calA)$ using a generalization of the calculus of fractions in localization of a ring.
We will call a diagram of the form \[fs^{-1}:X\xleftarrow{s} X_1\xrightarrow{f} Y\] where $s\in S$ a \textbf{fraction} and say that two fractions $fs^{-1}$ and $gt^{-1}$ are equivalent if there exists an element $X_3$ fitting into the commutative diagram below:
\begin{center} \begin{tikzcd} & X_1\ar[dl,swap,"s"]\ar[dr,"f"] &\\ X & X_3\ar[u]\ar[l]\ar[r]\ar[d] & Y\\ & X_2\ar[ul,"t"]\ar[ur,"g",swap] & \end{tikzcd} \end{center}
Then from this we can define
\begin{defn} Let $\calT$ be a triangulated category and $S$ be a multiplicative system for $\calT$. Then the \textbf{Verdier localization of $\calT$ at $S$}, $\calT[S^{-1}]$, is the category whose objects are the same as those of $\calT$ and whose morphisms are equivalence classes of fractions of maps, as defined above. \end{defn}
From this more general framework, we can very simply define the \textbf{derived category of an abelian category $\calA$} to be \[\D(\calA)=\K(\calA)[W^{-1}]\] where $W$ is the collection of weak homotopy equivalences (quasi-isomorphisms). For our purposes, it will suffice to restrict to the full triangulated subcategory $\K^\text{b}(\calA)$, giving us the \textbf{bounded derived category} \[\Db(\calA)=\K^\text{b}(\calA)[W^{-1}].\]
\subsubsection{Tensor products in \texorpdfstring{$\Db(R)$}{Db(R)}}
In the context of modules over a $k$-algebra $R$, there is a tensor bifunctor \[-\otimes_R-:\rmod R\times\lmod R\to \Vectk\] and since it is right exact, but not exact, we can take the left derived functor \[-\otimes_R^\mathbf{L}-\eqdef \L(-\otimes_R-):\D(\rmod R)\times\D(\lmod R)\to\D(\Vectk)\] which we call \textbf{the derived tensor product}. This can be defined via a Kan extension:
\begin{defn} Let $\calF:\calA\to \calB$ be an additive functor between abelian categories. Then since $\calF$ preserves chain homotopies, it descends to a functor $\K\calF:\K(\calA)\to \K(\calB)$. We define the \textbf{right derived functor} (if it exists) to be a functor $\R\calF:\D(\calA)\to\D(\calB)$ along with a natural transformation $\xi:q\circ\K\calF\Rightarrow \R\calF\circ q$ such that for any $\calG:\D(\calA)\to \D(\calB)$ and $\zeta:q\circ\K\calF\Rightarrow \calG\circ q$ fitting into the diagram
\begin{center} \begin{tikzcd}[row sep=large] \K(\calA)\ar[r,"\K\calF"]\ar[dr,"q",swap] & \K(\calB)\ar[r,"q"]\ar[d,"\zeta",Rightarrow] & \D(\calB)\\ & \D(\calA)\ar[ur,bend right=45,"\R\calF",swap]\ar[phantom,bend right=45,ur,""{name=RF}]\ar[ur,"\calG"]\ar[phantom,ur,""{name=G,below}] & \arrow[from=RF,to=G,Rightarrow,"\eta"] \end{tikzcd} \end{center}
there exists a unique $\eta:\R\calF\Rightarrow \calG$ such that $\eta q\circ \xi=\zeta$. In other words, \begin{center} $\R\calF$ is the \textit{right Kan extension of $q\circ\K\calF$ along the localization map $q$.} \end{center} Similarly, the left derived functor $\L\calF$ can be defined as the left Kan extension of $q\circ\K\calF$ along $q$, satisfying the same universal property with the natural transformations reversed. \end{defn}
This gives us a property characterizing the functor, but in practice one usually computes this via resolutions. In the simplest case, let $M,N\in\calA$ for some abelian monoidal category $\calA$. Then \[M[0]\otimes^\mathbf{L}_R N[0]=F_\bullet\otimes G_\bullet\] where $F_\bullet$ and $G_\bullet$ are chain complexes quasi-isomorphic to $M[0]$ and $N[0]$, respectively (e.g. flat resolutions).
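As a quick, standard sanity check of this recipe (this example is not taken from the discussion above, but is a classical computation), take $\calA=\lmod{\mathbb{Z}}$ and $M=N=\mathbb{Z}/2$. Replacing the first factor by the free (hence flat) resolution \[0\to\mathbb{Z}\xrightarrow{\cdot 2}\mathbb{Z}\to\mathbb{Z}/2\to 0,\] the complex computing the derived tensor product is \[\mathbb{Z}/2\xrightarrow{\cdot 2\,=\,0}\mathbb{Z}/2\] concentrated in degrees $1$ and $0$, so that \[H_0\big(\mathbb{Z}/2\otimes^{\mathbf{L}}_{\mathbb{Z}}\mathbb{Z}/2\big)\cong\mathbb{Z}/2 \quad\text{and}\quad H_1\big(\mathbb{Z}/2\otimes^{\mathbf{L}}_{\mathbb{Z}}\mathbb{Z}/2\big)\cong\operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/2,\mathbb{Z}/2)\cong\mathbb{Z}/2.\] In particular the derived tensor product is not quasi-isomorphic to the ordinary tensor product placed in degree $0$, which is exactly the extra homological information the derived category is built to remember.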
%\subsection{Compatibility of monoidal structures}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{The (Balmer) spectrum of a tensor triangulated category}\label{sec:ttc}
In his 2005 paper \cite{balmer-spc}, Paul Balmer developed a general framework for understanding the structure of certain kinds of categories that arose from the original constructions in algebraic geometry. Serving as a source of inspiration for Balmer, in \cite{friedlander-pevtsova-pi} Friedlander and Pevtsova proved that the projective geometry of the cohomology ring of a finite group scheme can be recovered by looking at ``ideals'' in the category $\stmod G$ of stable $G$-modules. Using this as a springboard, Balmer ported the definitions of ideals and prime ideals to tensor-triangulated categories (see below) and proved a broader result that gives some tools for better analyzing families of representations of finite groups (among other things).
\subsection{Some motivation and a definition}
Let $\calC$ be a symmetric monoidal (i.e. tensor) category with tensor product $\otimes$ and unit object $\1$. After giving some thought to the matter, one realizes that a ring is given by putting a ``compatible'' monoidal structure on top of an abelian group, and to that end, one may consider the case when $\calC$ is also additive. This perspective gives us an interesting analogy between (unital, commutative) rings in algebra and category theory. Since every triangulated category is also additive, we can further specify that $\calC$ be triangulated:
\begin{defn} A \textbf{tensor-triangulated} category $\calC$ is both a symmetric monoidal category and a triangulated category such that the monoidal structure preserves the triangulated structure. As a reminder, such a category is equipped with a tensor product $-\otimes -:\calC\times\calC\to \calC$ and unit object $\1$, along with a collection of distinguished triangles $\calT$ comprised of objects in $\calC$ and a shift functor (an auto-equivalence) $(-)[1]:\calC\to \calC$ such that: $-\otimes-$ is a triangulated (or exact) functor in each entry (it takes $\calT$ to itself). \end{defn}
\subsubsection{Aside: Why triangulation?}
In the construction of the spectrum, we will see that the triangulated structure isn't explicitly necessary. It appears that one only needs a symmetric (or not!) monoidal category with all sums (at least if we are just relying on analogy to rings). A question, which may not have an answer yet (fully or in part), is whether changing these requirements significantly changes things. For instance, what happens when one tries to compute the spectrum of the abelian (symmetric monoidal) category $\Rep G$?
\subsection{Construction of the spectrum}
Once the appropriate context is identified (which is the real ingenuity of Balmer's paper), the construction very closely mirrors the construction seen in elementary algebraic geometry:
\begin{defn} Let $\calC$ be a tensor-triangulated category (TTC). Then a \textbf{(thick tensor) ideal} $I\subseteq \calC$ is a full triangulated subcategory with the following conditions:
\begin{itemize} \item \textit{(2-of-3 rule/Triangulation)} If $A,B,$ and $C\in\calC$ are objects that fit into a distinguished triangle \[A\to B\to C\to A[1]\] in $\calC$, and if any two of the three are objects in $I$, then so is the third.
\item \textit{(Thickness)} If $A\in I$ is an object that splits as $A\cong B\oplus C$ in $\calC$, then both $B$ and $C$ belong to $I$. \item \textit{(Tensor Ideal)} If $A\in I$ and $B\in \calC$ then $A\otimes B=B\otimes A\in I$. \end{itemize} \end{defn} \begin{rmk} The first condition just ensures that our ideals respect the triangulated structure (and thus stability) in the parent category $\calC$. The final condition is the most direct analog of an ideal and is central in the analogy between this theory and classical AG. \end{rmk} From here the rest of the picture is relatively straightforward: \begin{defn} Let $\calC$ be a TTC as before. Then an ideal $I\subseteq\calC$ is called a \textbf{prime ideal} if, whenever $A\otimes B\in I$ for some $A,B\in \calC$, either $A$ or $B$ is in $I$. We call the collection of all primes the \textbf{spectrum} of $\calC$ and write $\operatorname{Spc}(\calC)$. \end{defn} Here the construction varies slightly from the traditional construction of $\Spec$: we define \[Z(S)\eqdef\{\calP\in\Spc(\calC)|S\cap\calP=\varnothing\}\] and define sets (for any $S\subseteq\calC$ and $A\in \calC$): \[U(S)\eqdef \Spc(\calC)\setminus Z(S)=\{\calP\in\Spc(\calC)|S\cap \calP\ne\varnothing\}\] and \[\supp(A)\eqdef Z(\{A\})=\{\calP\in\Spc\calC|A\notin \calP\}\] A routine check of the axioms shows us \begin{lem}[2.6 of \cite{balmer-spc}] The sets $U(S)$ for all $S\subseteq\calC$ form a basis for a topology on $\Spc\calC$. \end{lem} which we call the \textbf{Zariski topology}, giving $\Spc\calC$ the structure of a topological space. \subsection{As a locally-ringed space} The above discussion mentions how we can construct a topological space from the set of prime thick tensor ideals in a TTC, but there is even more we can get: the structure of a locally-ringed space. To get this, we need to define the structure sheaf: \begin{defn} Let $\calC$ be a tensor-triangulated category and let $\Spc\calC$ be the construction discussed above. Then the structure sheaf on $\Spc\calC$ is given by the sheafification $\O_\calC$ of the presheaf \[\tilde\O_\calC:\operatorname{Open}(\Spc\calC)\op\to \Ring\] given by \[\tilde\O_\calC(U)\eqdef \End_{\calC/\calC_Z}(\1_U)\] where $U\subseteq\Spc\calC$ is an open set and $\calC_Z$ is the thick tensor ideal in $\calC$ supported on $Z=\Spc\calC\setminus U$. The ringed space $(\Spc \calC,\O_\calC)$ is denoted $\Spec_\text{Bal} \calC$. \end{defn} \begin{rmk} That $\calC_Z$ is a thick tensor ideal requires some work, but it follows from work that Balmer does to define a support data $(X,\sigma)$ on a tensor-triangulated category and showing that for any subset $Y\subset X$ of its associated topological space, the following set \[\{A\in\calC|\sigma(A)\subseteq Y\}\] is a thick tensor ideal of $\calC$ (c.f. lem.~3.4). \end{rmk} Balmer emphasizes that this is the ``correct'' ringed space structure to put on $\Spc\calC$. To do so, one defines an abstract support datum: \begin{defn} A \textbf{support datum} for a TTC $\calC$ is a pair \[(X,\sigma)\] where $X$ is a topological space and $\sigma:\calC\to \operatorname{closed}(X)$ is a map sending $a\mapsto\sigma_a$ such that \begin{enumerate} \item $\sigma(0)=\varnothing$ and $\sigma(1)=X$, \item $\sigma(a\oplus b)=\sigma(a)\cup\sigma(b)$, \item $\sigma (a[1])=\sigma(a)$, \item $\sigma(a)\subseteq \sigma(b)\cup\sigma(c)$ for any triangle $a\to b\to c\to a[1]$, \item $\sigma(a\otimes b)=\sigma(a)\cap\sigma(b).$ \end{enumerate} \end{defn} Using this definition, Balmer shows \begin{thm}[{\cite[thm. 
3.2]{balmer-spc}}] $(\Spc\calC,\supp)$ is a support datum for $\calC$ and furthermore this support datum is terminal in the category of support data for $\calC$. That is, for any other $(X,\sigma)$, there exists a unique continuous map $f:X\to \Spc\calC$ such that \[\sigma(a)=f^{-1}(\supp(a)).\] \end{thm}
To finish up the discussion of tensor-triangulated geometry, we state a couple of results originally proven in different contexts but used by Balmer to motivate the utility of this construction. In \cite{thomason}, the author classifies the triangulated tensor subcategories of $\Dperf(X)$, thereby defining the set $\Spc\Dperf(X)$. Recast in Balmer's language and structure, this result becomes
\begin{thm}[{\cite[thm. 6.3(a)]{balmer-spc}}] If $X$ is a topologically Noetherian scheme, then (as ringed spaces) \[\Spec_\text{Bal}\Dperf(X)\simeq X.\] \end{thm}
Furthermore, another result from Friedlander and Pevtsova \cite{friedlander-pevtsova-pi} showed (again using the language of $\Spec_\text{Bal}$):
\begin{thm}[{\cite[thm. 3.6]{friedlander-pevtsova-pi},\cite[thm. 6.3(b)]{balmer-spc}}] Let $G$ be a finite group scheme over a field $k$. Then \[\Spec_\text{Bal}(\stmod(kG))\simeq\Proj(H^\bullet(G,k))\] where $\stmod(kG)$ is the full subcategory of the stable module category consisting of the finitely generated modules and $H^\bullet(G,k)=\Ext_G^\bullet(k,k)$ is the cohomology ring of $G$. \end{thm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Questions and extensions}
The following are some rough outlines of research programs that we can look into moving forward. They vary in depth and difficulty and the questions asked herein may not end up being the ones that are most interesting in these different areas. These do, however, provide a good starting place as we transition into tackling new problems.
\subsection{Computing the spectrum of \texorpdfstring{$\Db(S(n,r))$}{DbS(n,r)}}
When one is interested in understanding the representation theory of an object, one often runs into the problem of algebras of ``wild'' representation type. These are the algebras whose isomorphism types of indecomposables are in bijection with those of $k\langle x,y\rangle$. It has been shown (according to \cite{bensonI}) that the representation theory of algebras $\Lambda$ of wild type is \textit{undecidable} in that there exists no algorithm for a Turing machine that can decide the truth or falsehood of a statement about $\Lambda$-modules.
While that may seem like a dismal prospect, the (Balmer) spectrum of the derived category of Schur algebras gives us a little more hope. Recall that the spectrum is comprised of prime thick tensor ideals in $\Db(S(n,r))$, which are triangulated subcategories that, among other properties, are closed under summands and extensions. This gives us a coarser grouping of chains of $\Lambda$-modules to work with, and (hopefully!) gives us a better chance at understanding things. A first step would be to compute the spectrum of $\Db(S(n,r))$ in the ``nice'' case where our field has characteristic zero. From there, there are interesting computations to be done for Schur algebras over fields of characteristic $p>0$ which could yield interesting results.
\subsection{The representation theory of \texorpdfstring{$S(n,r)$}{S(n,r)} in positive characteristic}
When we are working over an infinite field of characteristic zero, the theory of Schur algebra representations affords a relatively nice, clean description. However, as is pointed out in \cite{erdmann}, the representation theory of Schur algebras in positive characteristic can be fraught with troubles. For instance, $S(3,10)$ over a field of characteristic 5 has wild representation type. That such troublesome algebras exist and are so readily accessible (the above example is spanned by 66 elements) indicates that there is an endless supply of computational examples that one could try to understand and that could eventually lead to questions and conjectures concerning the nature of the representations of algebras of wild type.
\subsection{Representation theory of the \texorpdfstring{$q$}{q}-Schur algebra}
Recall a motivating example (c.f. \cite{majid}) of a quantum group: $\operatorname{SL}_q(2)$, so named because it is a ``$q$-analog'' of the algebra $\operatorname{SL}_2$. Fix some $q\in k^\times$. Then it is defined (as an algebra) as a quotient \[k\langle a,b,c,d\rangle/R\] where $R$ is the ideal generated by the following relations: \[\begin{array}{ccc} ca=qac & ba=qab & db=qbd\\ dc=qcd & bc=cb & da-ad=(q-q^{-1})bc \end{array}\] along with the ``$q$-determinant relation'' \[ad-q^{-1}bc=1.\] Notice that setting $q=1$ makes $a,b,c,$ and $d$ commute, so we are left with the usual special linear group.
Quantum groups and, more generally, quantum deformations of objects in commutative algebra, give mathematicians a way to carefully perturb objects to open up areas of research in noncommutative algebra to the same (or similar) techniques used by commutative algebraists and algebraic geometers.\footnote{See, for instance, Taft and Towber's \textit{Quantum deformation of flag schemes and Grassmann schemes. I. A q-deformation of the shape-algebra for GL(n)} or the second half of my notes on the Grassmannian at \href{https://github.com/NicoCourts/Grassmannian-Notes/}{https://github.com/NicoCourts/Grassmannian-Notes/} where I summarize this paper.}
The $q$-Schur algebras were developed by Dipper and James and eventually summarized very nicely in \cite{donkin-q-schur} in a manner that reflects the character of \cite{green} and re-derives the classical results as a degenerate case of a more complex and interesting interplay between quantum $\GL_n$ and Iwahori-Hecke algebras. These algebras (and even further generalizations) are still an area of active research. The question of identifying representation types of $q$-Schur algebras has already been answered by Erdmann and Nakano in \cite{erdmann-nakano}, but the other questions persist. In particular, one can ask questions like:
\begin{itemize} \item What are explicit indecomposable representations and (in the finite and tame cases) how can we classify the families of indecomposable representations of these algebras? \item How can we generalize the idea of Schur duality to even broader families of noncommutative quasihereditary algebras?
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\section*{Acknowledgements} \label{sec:ack} \addcontentsline{toc}{section}{\nameref{sec:ack}}
I extend my most heartfelt thanks to my advisor, Julia Pevtsova, who not only helped me immensely in setting a target for this project, but also introduced me to many of the classical ideas found in this paper (sometimes more than once). Her knowledge and understanding while I learned this subject have been absolutely invaluable to me.
My thanks also to my loving partner Allison, who stands beside me in good times and in bad and always patiently humors me when I need someone to listen to my inane ramblings.
Finally, thank you to my friends and colleagues in the University of Washington math department for many fruitful conversations and inspiration for ideas to investigate along the way. In particular, I am indebted to (in no particular order) Thomas Carr, Sean Griffin, Sam Roven, and Cody Tipton for all their help and support.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% Bibliography %%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\printbibliography
\addcontentsline{toc}{section}{References}
\end{document}
\documentclass{article} \usepackage{graphicx} \usepackage{listings} \lstset{ basicstyle=\ttfamily, inputpath=../src } \begin{document} \title{Problem 1: System calls, error checking, and reporting} \author{Caleb Zulawski} \maketitle \section{Implementation} \begin{lstlisting} $ ./copycat -h Usage: copycat [OPTION]... [FILE]... Concatenate FILE(s), or standard input, to standard output. Similar to GNU cat. -v print diagnostic messages to standard error -b SIZE size of internal copy buffer, in bytes -m MODE file mode, in octal -o FILE output to FILE instead of standard output -h display this help and exit \end{lstlisting} This software was developed and tested on Linux 3.10.18 on an x86-64 netbook. The buffer size dependent performance is shown in Figure \ref{performancefig}. The higher performance for larger buffers indicates that reading into and writing from the buffer is very fast compared to the process of setting up the read() or write(). The larger buffers seem to have similar performance because there is virtually no overhead compared to the amount being written. \section{Examples} \subsection{Basic use} \begin{lstlisting} $ ./copycat Hello World Hello World EOF $ \end{lstlisting} \subsection{Concatenating from standard input and a file} \begin{lstlisting} $ echo And again... | ./copycat - copycat.c And again... /* copycat.c * Caleb Zulawski * * Entrance point of the program. */ #include "copycat.h" int main(int argc, char* argv[]) { Options options; cc_parse_args(argc, argv, &options); cc_log(&options); cc_copy(&options); return 0; } $ \end{lstlisting} \subsection{Writing to a file, with error handling} \begin{lstlisting} $ touch outfile $ chmod 000 outfile $ echo nope | ./copycat -o outfile - Error opening file outfile: Permission denied $ sudo chmod 644 outfile $ echo yup | ./copycat -o outfile - $ cat outfile yup $ \end{lstlisting} \subsection{Bad input file behavior} \begin{lstlisting} $ echo Uh oh | ./copycat - nofile Uh oh Error opening file nofile: No such file or directory $ echo Uh oh | ./copycat nofile - Error opening file nofile: No such file or directory $ \end{lstlisting} \begin{figure} \centering \includegraphics[width=\linewidth]{../test/results} \caption{Performance} \label{performancefig} \end{figure} \end{document}
\section{Concurrency Control in CDF} \label{sec:concurrency}
Logging has multiple purposes in CDF use. It is used to allow incremental checkpointing, which is faster and takes less space than continual saving of an entire CDF. Logging is also used to support concurrent use of a CDF.
Logging can be turned on and/or off. When it is on, every update to the CDF is ""logged"" in an in-memory predicate. This log can then be saved to disk in a ""checkpoint"" file, and later used to recreate the CDF state as it was at the time of the file-saving. The checkpoint file contains the locations of the versions of the components needed to reconstruct the state.
When used to support concurrent use of a CDF by multiple users, first logging is turned on, and CDF components are loaded from a stored (shared) CDF, and their versions are noted. Subsequent updates to the in-memory CDF are logged as they are done. Then when the in-memory CDF is to be written back to disk to create new versions of the updated components, using update\_all\_components(in\_shared\_place), the following is done for each component. If the current most-recent-version on disk is the same as the one originally loaded to memory, then update\_all\_components works normally (in\_place), incrementing the version number and writing out the current in-memory component as that new version. Otherwise, there is a more recent version of the CDF on disk (written by a ""concurrent user"".) The most recent version is loaded into memory, and the log is used to apply all the updates to that new version. (If conflicts are detected, they must be resolved. At the moment, no conflict detection is done.) Then update\_all\_components(in\_place) is used to store that updated CDF. After update\_all\_components is run, the log is emptied, and the process can start again.
\begin{description}
\ourpredmoditem{cdf\_log\_component\_dirty/1} This is a dynamic predicate. After restoring a checkpoint file and applying the updates, {\tt cdf\_log\_component\_dirty/1} is true of all components that differ from their stored versions. It is a ""local version"" of cdf\_flags(dirty,\_), and should be ""OR-ed"" with it to find the components that have been updated from the last stored version.
\ourpredmoditem{cdf\_set\_log\_on/2} {\tt cdf\_set\_log\_on(+LogFile,+Freq)} This predicate creates a new log and ensures that logging will be performed for further updates until logging is turned off.
\ourpredmoditem{cdf\_set\_log\_suspend/0} {\tt cdf\_set\_log\_suspend/0} temporarily turns logging off, if it is on. It is restarted by {\tt cdf\_set\_log\_unsuspend/0}.
\ourpredmoditem{cdf\_reset\_log/0} If logging is on, this predicate deletes the current log, and creates a new empty one. If logging is off, no action is taken.
\ourpredmoditem{cdf\_log/1} {\tt cdf\_log(ExtTermUpdate)} takes a term of the form assert(ExtTerm) or retractall(ExtTerm) and adds it to the log, if logging is on. ExtTerm must be a legal extensional fact in the CDF.
\ourpredmoditem{cdf\_apply\_log/0} {\tt cdf\_apply\_log} applies the log to the current in-memory CDF. For example, if the in-memory CDF has been loaded from a saved CDF version, and the log represents the updates made to that CDF saved in a checkpoint file, then this will restore the CDF to its state at the time the checkpoint file was written. When applying the updates, it does NOT update the CDF dirty flags. However, it does add the name of any modified component to the predicate {\tt cdf\_log\_component\_dirty/1}.
This allows a user to determine both whether a change has been made since the last checkpoint was saved and whether one has been made since the last saved component version. (See also {\tt cdf\_log\_OR\_dirty\_flags/0}.)
\ourpredmoditem{cdf\_apply\_merge\_log/0} {\tt cdf\_apply\_merge\_log} applies the current log to the current CDF. The CDF may not be the one that formed the basis for the current log. I.e., it may have been updated by some other process. This function depends on the user-defined predicate {\tt check\_log\_merge\_assert(+Term,-Action)} to provide information on whether each assert action should be taken or not, and which of them constitute a conflict. (Retract actions are always assumed to be acceptable.)
\ourpredmoditem{cdf\_log\_OR\_dirty\_flags/0} {\tt cdf\_log\_OR\_dirty\_flags} makes every component in {\tt cdf\_log\_component\_dirty/1} dirty, i.e., {\tt cdf\_flags(dirty,CompName)} is made true for each such component.
\ourpredmoditem{cdf\_save\_log/1} {\tt cdf\_save\_log(LogFile)} writes a checkpoint file into the file named {\tt LogFile}. The file contains the current in-memory log and the components and their versions from which these updates created the current state.
\ourpredmoditem{cdf\_remove\_log\_file/1} {\tt cdf\_remove\_log\_file(LogFile)} renames the indicated file to the name obtained by appending a \verb|~| to the file (deleting any previous file with this name). This effectively removes the indicated file, but allows for external recovery, if necessary.
\ourpredmoditem{cdf\_restore\_from\_log/1} {\tt cdf\_restore\_from\_log(LogFile)} recreates the CDF state represented by the checkpoint file named {\tt LogFile}. The current CDF is assumed initialized. It loads the versions of the components indicated in the checkpoint file, and then applies the logged updates to that state.
\end{description}
\chapter{Approach}\label{chap:approach}
To semantically segment urban scenes and extract predictions for traversability classes, we proposed to use a deep neural network based on the DeepLab v3+ architecture with a loss function inspired by ``U-Net: Convolutional Networks for Biomedical Image Segmentation'' by Olaf Ronneberger et al.~\cite{unet}. In this chapter, the architecture used and our loss function are discussed.
\section{Architecture Selection} \label{section:approach-architectureselection}
We came to use the DeepLab v3+ architecture after running these experiments, as it outperformed the other networks that were tested. A further discussion of the results can be found in section and the experiments done to select the network can be found in Section \ref{section:experiments-networkevaluation}.
\subsection{Network Architecture}\label{section:approach-networkarchitecture}
We propose to apply the DeepLab v3+ architecture to our stated problem of curb and curb cut segmentation. The architecture itself is given in \figref{fig:approach-network} with further descriptions in Tables \ref{tab:drn}-\ref{tab:decoder}\footnote{The full implementation written using the PyTorch framework can be found at \url{github.com/yvan674/CurbNet}}.
\input{figures/approach/network}
\input{tables/drn}
\input{tables/bottleneck}
\input{tables/aspp}
\input{tables/decoder}
\section{Loss Function}\label{section:approach-lossfunction}
We chose to use a modified weighted cross entropy loss due to the severe class imbalance. Using the assumption that all curbs and curb cuts must be located along the perimeter of roads, we used a loss function inspired by the paper ``U-Net: Convolutional Networks for Biomedical Image Segmentation,'' which we call Masked Cross Entropy (MCE) and is defined in \eqref{eq:mce}~\cite{unet}. The weighted cross entropy loss function was modified to penalize according to the given weights when labeling within a certain border around road classes, which we call the mask $M$. We define road classes $\text{class}_{\text{road}}$ as all classes which can reasonably be expected to be found on roads, including road, road markings, potholes, etc. This mask was calculated by applying a binary dilation with a full $b \times b$ structuring matrix $B$, where $b$ is $0.05 \times \text{image}_{width}$, to the road class, and then subtracting the road class itself. Thus, this can be formalized as follows: \begin{align} B &= \underbrace{ \begin{bmatrix} 1 & \cdots & 1\\ \vdots & \ddots & \vdots\\ 1 & \cdots & 1 \end{bmatrix}}_{b \text{ columns and } b \text{ rows}} \\ M &= \left(\text{class}_{\text{road}} \oplus B\right) - \text{class}_{\text{road}} \end{align}
A visualization of this mask applied to a color street-level image from the Mapillary dataset is shown in Figure \ref{fig:approach-mask}.
\input{figures/approach/mask}
The value 0.05 was chosen empirically after looking at samples in the dataset and measuring what area around roads is typically occupied by curbs. The road class was simply taken from the ground truth data. Any labeling outside of $M$ by the network is then given an increased penalty, incentivizing the network to focus labeling around road edges. We chose to multiply the penalty for areas outside $M$ by a factor of 3.
This can be seen in the following formalization of the loss function we used: \begin{align}\label{eq:mce} \ell_{MCE}(y, \hat{y}) &=\sum_{m}-y_m\log(\hat{y}_m) \cdot d_m\\ \text{with } d_m &= \begin{cases} d_m' & \text{if } m \in M\\ d_m' \cdot 3 & \text{if } m \notin M \end{cases} \end{align} where $d_m'$ are the user-defined weights. This loss function operates on the assumption that curbs and curb cuts must be located adjacent to roads. A visualization of the resulting mask $M$ can be seen in Figure \ref{fig:approach-mask}.
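To make the mask construction and the per-pixel weighting concrete, the short sketch below shows one possible way to compute $M$ and the weights $d_m$ with NumPy and SciPy. It is an illustrative sketch only, not code taken from our implementation; the function name, the argument names, and the representation of the class weights as an array indexed by class id are assumptions made purely for this example.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_dilation

def mce_mask_and_weights(road_mask, labels, class_weights, outside_factor=3.0):
    """Illustrative sketch of the masked weighting described above.

    road_mask:      (H, W) boolean array, True on ground-truth road pixels.
    labels:         (H, W) integer array of ground-truth class ids.
    class_weights:  1-D array of user-defined weights d_m', indexed by class id.
    outside_factor: multiplier applied to pixels outside the mask M.
    """
    height, width = road_mask.shape
    b = max(1, int(0.05 * width))            # border width b = 0.05 * image width
    structure = np.ones((b, b), dtype=bool)  # the full b-by-b matrix B

    # M = (road dilated by B) minus road: a band around the road perimeter.
    mask = binary_dilation(road_mask, structure=structure) & ~road_mask

    # Start from the user-defined weights d_m' and penalize pixels outside M.
    weights = class_weights[labels].astype(np.float64)
    weights[~mask] *= outside_factor
    return mask, weights
\end{verbatim}
The returned per-pixel weights would then simply scale a standard pixel-wise cross entropy term, reproducing the weighting of \eqref{eq:mce}.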
\section{Model Description}
The gravity effector module is responsible for calculating the effects of gravity from a body on a spacecraft. A spherical harmonics model and implementation are developed and described below. The iterative methods used for the software algorithms are also described. Finally, the results of the code unit tests are presented and discussed.
\subsection{Mathematical model}
\subsubsection{Relative Gravitational Dynamics Formulation}
The gravity effector module is critical to the propagation of spacecraft orbits in Basilisk. In order to increase the accuracy of spacecraft trajectories, a relative gravitational acceleration formulation can be used. Relative dynamics keep the acceleration, velocity, and distance magnitudes small, allowing more bits of a double variable to be used for accuracy, rather than magnitude. This additional accuracy is compounded via integration. This relative formulation is enforced when a user sets any planet in a multi-planet environment to have \verb|isCentralBody = True|. If no planets in a simulation are set as the central body, then an absolute formulation of gravitational acceleration is used.
In the absolute formulation, acceleration of a spacecraft due to each massive body is summed directly to calculate the resultant acceleration of the spacecraft. \begin{equation} \ddot{\bm{r}}_{B/N, \mathrm{grav}} = \sum_{i = 1}^{n} \ddot{\bm{r}}_{B/N, i} \end{equation} where each acceleration on the right-hand side is the acceleration due to the $i^{\mathrm{th}}$ planet being modeled as a gravity body. In this absolute mode, spacecraft position and velocity are integrated with respect to the inertial origin, typically the solar system barycenter.
In the relative formulation, the acceleration of the spacecraft is calculated \textit{relative to} the central body. This is done by calculating the acceleration of the central body and subtracting it from the acceleration of the spacecraft. \begin{equation} \ddot{\bm{r}}_{B/C, \mathrm{grav}} =\ddot{\bm{r}}_{B/N, \mathrm{grav}} - \ddot{\bm{r}}_{C/N, \mathrm{grav}} \end{equation} where $C$ is the central body. In this case, other accelerations of the central body (due to solar radiation pressure, for instance) are ignored. For relative dynamics, the Basilisk dynamics integrator uses only \textit{relative} acceleration to calculate \textit{relative} position and velocity. The gravity module then accounts for this and modifies the spacecraft position and velocity by the central body's position and velocity after each timestep.
The above relative formulation leads to some questions regarding the accuracy of the dynamics integration. First, if acceleration due to gravity is being handled in a relative form, but accelerations due to external forces are handled absolutely, does Basilisk always produce the correct absolute position and velocity? Second, if dynamic state effectors such as hinged rigid bodies are using the gravitational acceleration that the spacecraft receives from the gravity module, are their states being integrated correctly?
Integrating absolute accelerations (i.e. due to thrust) alongside the relative gravitational acceleration is handled easily due to the linearity of integration.
In the absolute dynamics formulation there is: \begin{equation} \ddot{\bm{r}}_{B/N} = \ddot{\bm{r}}_{B/N, \mathrm{grav}} + \ddot{\bm{r}}_{B/N, \mathrm{thrust}} + \ddot{\bm{r}}_{B/N, \mathrm{SRP}} + \dots \label{eq:absGrav} \end{equation} and each term can be integrated separately on the right side so that \begin{equation} \bm{r}_{B/N} = \int \int \ddot{\bm{r}}_{B/N, \mathrm{grav}} \mathrm{dtdt} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{thrust}} \mathrm{dtdt} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{SRP}}\mathrm{dt dt} + \dots \end{equation} In the derivation that follows, the double integral to position is used, but the logic holds for the first integral to velocity as well.
Now, because accelerations also add linearly, \begin{equation} \ddot{\bm{r}}_{B/N} = \ddot{\bm{r}}_{B/C} + \ddot{\bm{r}}_{C/N} = \ddot{\bm{r}}_{B/C, \mathrm{grav}} + \ddot{\bm{r}}_{C/N, \mathrm{grav}} + \ddot{\bm{r}}_{B/N, \mathrm{thrust}} + \ddot{\bm{r}}_{B/N, \mathrm{SRP}} + \dots \end{equation} which differs from Eq.~\ref{eq:absGrav} in that the gravitational acceleration of the spacecraft is split at the acceleration of the central body. Applying the integrals: \begin{equation} \bm{r}_{B/N} = \bm{r}_{B/C} + \bm{r}_{C/N} = \int \int \ddot{\bm{r}}_{B/C, \mathrm{grav}} \mathrm{dt dt} + \bm{r}_{C/N} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{thrust}} \mathrm{dt dt} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{SRP}}\mathrm{dt dt} + \dots \end{equation} where $\ddot{\bm{r}}_{C/N}$ is deliberately double integrated to $\bm{r}_{C/N}$ to show that it can be removed from both sides and $\bm{r}_{B/C}$ can be evaluated using relative gravitational acceleration combined with absolute accelerations due to external forces: \begin{equation} \bm{r}_{B/C}= \int \int \ddot{\bm{r}}_{B/C, \mathrm{grav}} \mathrm{dt dt} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{thrust}} \mathrm{dt dt} + \int \int \ddot{\bm{r}}_{B/N, \mathrm{SRP}}\mathrm{dt dt} + \dots \end{equation} Once that is done, it is clear that the absolute position can be found by simply adding the position of the central body to the relative position just found: \begin{equation} \bm{r}_{B/N} = \bm{r}_{B/C} + \bm{r}_{C/N} \end{equation} This is how absolute position and velocity are found in Basilisk when using a relative dynamics formulation: the relative dynamics are integrated and the position and velocity of the central body are added afterward. The position and velocity of the central body are not integrated by Basilisk, but found from Spice.
Dynamic state effectors connected to the spacecraft hub can use the relative gravitational acceleration in their calculation for much the same reason. Effector positions and velocities are always integrated relative to the spacecraft. In fact, the absolute position and velocity of an effector are rarely, if ever, calculated or used. This explains why a hinged body experiencing a relative acceleration does not quickly fall behind the spacecraft, which is known to be moving along a course experiencing absolute gravitational acceleration. Additionally, because the effector is ``pulled along'' with the spacecraft when the spacecraft position is modified by the central body position, the effector sees the effect of absolute gravitational acceleration as well. For intricacies related to using absolute vs relative dynamics, see the user manual at the end of this document.
\subsubsection{Gravity models}
Gravity models are usually based on solutions of the Laplace equation ($\nabla^2 U(\mathbf{\bar r}) = 0$).
It is very important to state that this equation only models a gravity potential outside a body. For computing a potential inside a body the Poisson equation is used instead.
The spherical harmonic potential is a solution of the Laplace equation using orthogonal spherical harmonics. It can be derived by solving the Laplace equation in spherical coordinates, using the separation of variables technique and solving a Sturm-Liouville problem. In this work, the solution will be found using another technique, which essentially follows Vallado's book\cite{vallado2013}.
For each element of mass $d m_\text{Q}$ the potential can be written as \begin{equation} \D U(\mathbf{\bar r}) = G \frac{\D m_\text{Q}}{\rho_\text{Q}} \end{equation} where $\rho_\text{Q}$ is the distance between the element of mass and the position vector $\mathbf{\bar r}$ where the potential is computed. This position vector is usually given in a body-fixed frame. The relation between the position vector $\mathbf{\bar r}$, the position of the element of mass $\mathbf{\bar r_\text{Q}}$ and $\rho_\text{Q}$ can be given using the cosine theorem and the angle $\alpha$ between the two position vectors, as can be appreciated in Figure \ref{fig:spher_harm}.
\begin{figure} \centering \includegraphics[width=0.3\textwidth]{Figures/spherical_harmonics.png} \caption{Geometry of the Spherical Harmonics Representation.}\label{fig:spher_harm} \end{figure}
\begin{equation} \rho_\text{Q} = \sqrt{r^2 + r_\text{Q}^2 - 2 r r_\text{Q} \cos(\alpha)} = r \sqrt{1 - 2 \frac{r_\text{Q}}{r} \cos(\alpha) + \bigg(\frac{r_\text{Q}}{r}\bigg)^2} = r \sqrt{1 - 2 \gamma \cos(\alpha) + \gamma^2} \end{equation} where $\gamma = r_\text{Q}/r$. The potential can be obtained by integrating $dU$ through the whole body. \begin{equation} U(\mathbf{\bar r}) = G \int_{body} \frac{\D m_\text{Q}}{r \sqrt{1 - 2 \gamma \cos(\alpha) + \gamma^2}} \end{equation}
If the potential is computed outside the body, $\gamma$ will always be less than 1, and the inverse of the square root can be expanded using the Legendre polynomials $P_l[\beta]$\cite{vallado2013}. Even though this derivation does not use the Laplace equation, it still assumes that the potential is computed outside the body. The Legendre polynomials can be written as \begin{equation} P_l[\beta] = \frac{1}{2^l l!} \frac{d^l}{d \beta^l} (\beta^2 - 1)^l \end{equation} The potential is \begin{equation} U(\mathbf{\bar r}) = \frac{G}{r} \int_{body} \sum_{l=0}^\infty \gamma^l P_l[\cos(\alpha)] \D m_\text{Q} \end{equation}
The angle $\alpha$ must be integrated. However, the cosine of the angle $\alpha$ can be decomposed using the geocentric latitude and the longitude associated to vectors $\mathbf{\bar r}$ and $\mathbf{\bar r_\text{Q}}$. These angles will be called $(\phi, \lambda)$ and $(\phi_\text{Q}, \lambda_\text{Q})$ respectively. Using the addition theorem it is possible to write\cite{vallado2013}. \begin{equation} P_l[\cos(\alpha)] = P_l[\sin(\phi_\text{Q})] P_l[\sin(\phi)] + 2 \sum_{m=1}^l \frac{(l-m)!}{(l+m)!} (a_{l,m} a'_{l,m} + b_{l,m} b'_{l,m}) \end{equation} where \begin{align} a_{l,m} &= P_{l,m}[\sin(\phi_\text{Q})] \cos(m \lambda_\text{Q})\\ b_{l,m} &= P_{l,m}[\sin(\phi_\text{Q})] \sin(m \lambda_\text{Q})\\ a'_{l,m} &= P_{l,m}[\sin(\phi)] \cos(m \lambda)\\ b'_{l,m} &= P_{l,m}[\sin(\phi)] \sin(m \lambda) \end{align} where $P_{l,m}[x]$ are the associated Legendre functions. "$l$" is called degree and "$m$", order.
The polynomials can be computed as
\begin{equation}
	P_{l,m}[\beta] = (1 - \beta^2)^\frac{m}{2} \frac{d^m}{d \beta^m} P_l[\beta]\label{eq:legendre}
\end{equation}
As can be seen, $a_{l,m}$ and $b_{l,m}$ must be integrated, but $a'_{l,m}$ and $b'_{l,m}$ can be taken outside the integral. Therefore, it is possible to define
\begin{align}
	C'_{l,m} &= \int_{body} (2 -\delta_m) r_\text{Q}^l \frac{(l-m)!}{(l+m)!} a_{l,m} \D m_\text{Q}\\
	S'_{l,m} &= \int_{body} (2 -\delta_m) r_\text{Q}^l \frac{(l-m)!}{(l+m)!} b_{l,m} \D m_\text{Q}
\end{align}
where $\delta_m$ is the Kronecker delta ($\delta_m = 1$ for $m = 0$ and $\delta_m = 0$ otherwise). Then
\begin{equation}
	U(\mathbf{\bar r}) = \frac{G}{r} \sum_{l=0}^\infty C'_{l,0} \frac{P_l[\sin(\phi)]}{r^l} + \frac{G}{r} \sum_{l=0}^\infty \sum_{m=1}^l \frac{P_{l,m}[\sin(\phi)]}{r^l} \big[C'_{l,m} \cos(m \lambda) + S'_{l,m} \sin(m \lambda)\big]
\end{equation}
Non-dimensional coefficients $C_{l,m}$ and $S_{l,m}$ are usually used
\begin{align}
	C'_{l,m} &= C_{l,m} R_{\text{ref}}^l m_\text{Q}\\
	S'_{l,m} &= S_{l,m} R_{\text{ref}}^l m_\text{Q}
\end{align}
where $m_\text{Q}$ is the total mass of the body and $R_{\text{ref}}$ is a reference radius. If the coefficients $C_{l,m}$ and $S_{l,m}$ are given, the reference radius must be specified. Usually, the reference is chosen as the maximum radius or the mean radius\cite{scheeres2012}. The potential is then
\begin{equation}
	U(\mathbf{\bar r}) = \frac{\mu}{r} \sum_{l=0}^\infty C_{l,0} \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l P_l[\sin(\phi)] + \frac{\mu}{r} \sum_{l=0}^\infty \sum_{m=1}^l \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l P_{l,m}[\sin(\phi)] \big[C_{l,m} \cos(m \lambda) + S_{l,m} \sin(m \lambda)\big]
\end{equation}
Since $P_l[x] = P_{l,0}[x]$, the potential can be written in a more compact way
\begin{equation}
	U(\mathbf{\bar r}) = \frac{\mu}{r} \sum_{l=0}^\infty \sum_{m=0}^l \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l P_{l,m}[\sin(\phi)] \big[C_{l,m} \cos(m \lambda) + S_{l,m} \sin(m \lambda)\big]
\end{equation}
Some coefficients have a very interesting interpretation:
\begin{align}
	C_{0,0} &= 1\\
	S_{l,0} &= 0 \quad \forall l \geq 0\\
	C_{1,0} &= \frac{Z_{\text{CoM}}}{R_{\text{ref}}}\\
	C_{1,1} &= \frac{X_{\text{CoM}}}{R_{\text{ref}}}\\
	S_{1,1} &= \frac{Y_{\text{CoM}}}{R_{\text{ref}}}
\end{align}
where $[X_\text{CoM}, Y_\text{CoM}, Z_\text{CoM}]$ represents the center of mass of the celestial body. Therefore, if the origin of the coordinate system coincides with the center of mass, the three degree-one coefficients are identically zero. Similarly, the second degree coefficients are related to the second order moments (moments of inertia).

Finally, the coefficients and Legendre polynomials are usually normalized to avoid computational issues. The factor $N_{l,m}$ is called the normalization factor
\begin{equation}
	N_{l,m} = \sqrt{\frac{(l-m)! (2 -\delta_m) (2 l +1)}{(l+m)!}}
\end{equation}
The normalized coefficients are
\begin{align}
	\bar C_{l,m} &= \frac{C_{l,m}}{N_{l,m}}\\
	\bar S_{l,m} &= \frac{S_{l,m}}{N_{l,m}}
\end{align}
The normalized associated Legendre functions are
\begin{equation}
	\bar P_{l,m}[x] = P_{l,m}[x] N_{l,m}
\end{equation}
The potential may be written as
\begin{equation}
	U(\mathbf{\bar r}) = \frac{\mu}{r} \sum_{l=0}^\infty \sum_{m=0}^l \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l \bar P_{l,m}[\sin(\phi)] \big[\bar C_{l,m} \cos(m \lambda) + \bar S_{l,m} \sin(m \lambda)\big]
\end{equation}

\subsubsection{Pines' Representation of Spherical Harmonics Gravity}
There are many ways to algorithmically compute the potential and its first and second derivatives.
One such algorithm is the one proposed by Pines\cite{pines1973}. The spherical harmonics representation as presented above has a singularity at the poles for the gravity field. Pines' formulation avoids this problem and is more numerically stable for high degree and high order terms. Unfortunately, this formulation does not contain the normalization factor, which is necessary if the coefficients are normalized. In a paper written by Lundberg and Schutz\cite{lundberg1988}, a normalized representation of Pines' formulation is given, but it contains an approximation. For this work, and in order to code the spherical harmonics formulation, a formulation similar to Pines' using the Lundberg-Schutz paper will be derived, but no approximations will be used. Therefore, the algorithm is developed here without using the exact formulations given in those papers. For the sake of brevity, not every single derivation is carried out, but the results can be obtained by following the expressions in this section.

In Pines' formulation the radius and the direction cosines are used as coordinates. The potential will be given as $U[r, s, t, u]$, where
\begin{align}
	r &= \sqrt{x^2+y^2+z^2}\\
	s &= \frac{x}{r}\\
	t &= \frac{y}{r}\\
	u &= \frac{z}{r}
\end{align}
For a function of these coordinates, the dependence will be indicated using square brackets (e.g., $f[r,s,t,u]$). Since $u = \sin(\phi) = \cos(90^\circ - \phi)$, it is possible to write
\begin{equation}
	P_{l,m}[\sin(\phi)] = P_{l,m}[u]
\end{equation}
The derived Legendre functions $A_{l,m}[u]$ are defined such that
\begin{equation}
	P_{l,m}[u] = (1 - u^2)^\frac{m}{2} A_{l,m}[u]
\end{equation}
From the definition of $P_{l,m}$ (Equation \eqref{eq:legendre}), it is possible to write
\begin{equation}
	A_{l,m}[u] = \frac{d^m}{d u^m} P_l[u] = \frac{1}{2^l l!} \frac{d^{l+m}}{d u^{l+m}} (u^2 - 1)^l\label{eq:der_leg}
\end{equation}
The term $(1 - u^2)^\frac{m}{2}$ can be written as $(1 - \sin^2(\phi))^\frac{m}{2} = |\cos(\phi)|^m = \cos^m(\phi)$. If the complex number $\xi$ is defined such that ($j$ is the imaginary unit)
\begin{equation}
	\xi = \cos(\phi) \cos(\lambda) + j \cos(\phi) \sin(\lambda) = \frac{x}{r} + j \frac{y}{r} = s + j t
\end{equation}
it is possible to write
\begin{equation}
	\xi^m = \cos^m(\phi) e^{j m \lambda} = (s + j t)^m
\end{equation}
The following sequences may be defined
\begin{align}
	R_m[s,t] &= Re\{\xi^m\}\\
	I_m[s,t] &= Im\{\xi^m\}
\end{align}
Putting it all together, it is possible to write
\begin{equation}
	U(\mathbf{\bar r}) = \frac{\mu}{r} \sum_{l=0}^\infty \sum_{m=0}^l \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l A_{l,m}[u] \{C_{l,m} R_m[s,t] + S_{l,m} I_m[s,t]\}
\end{equation}
In order to use the normalized coefficients ($\bar C_{l,m}$ and $\bar S_{l,m}$) and the normalized derived Legendre functions ($\bar A_{l,m} = N_{l,m} A_{l,m}$), each term is divided and multiplied by the normalization factor $N_{l,m}$.
Then
\begin{equation}
	U(\mathbf{\bar r}) = \frac{\mu}{r} \sum_{l=0}^\infty \sum_{m=0}^l \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l \bar A_{l,m}[u] \{\bar C_{l,m} R_m[s,t] + \bar S_{l,m} I_m[s,t]\}
\end{equation}
The sets $D_{l,m}[s,t]$, $E_{l,m}[s,t]$, and $F_{l,m}[s,t]$ are defined as
\begin{align}
	D_{l,m}[s,t] &= \bar C_{l,m} R_m[s,t] + \bar S_{l,m} I_m[s,t]\\
	E_{l,m}[s,t] &= \bar C_{l,m} R_{m-1}[s,t] + \bar S_{l,m} I_{m-1}[s,t]\\
	F_{l,m}[s,t] &= \bar S_{l,m} R_{m-1}[s,t] - \bar C_{l,m} I_{m-1}[s,t]
\end{align}
The value $\rho_l[r]$ is also defined as
\begin{equation}
	\rho_l[r] = \frac{\mu}{r} \bigg(\frac{R_{\text{ref}}}{r}\bigg)^l
\end{equation}
The gravity potential may finally be computed as
\begin{equation}
	U(\mathbf{\bar r}) = \sum_{l=0}^\infty \sum_{m=0}^l \rho_l[r] \bar A_{l,m}[u] D_{l,m}[s,t]
\end{equation}
This is the final expression that will be used to compute the gravity potential.

\subsubsection{Recursion Formulas}
Several recursion formulas are needed in order to algorithmically implement Pines' formulation. They are given without proof, but they are easily derived using the definitions above.
\begin{itemize}
	\item{Recursion formula for $\rho_l[r]$}

	Initial condition: $\rho_0[r] = \frac{\mu}{r}$
	\begin{equation}
		\rho_l[r] = \rho \cdot \rho_{l-1}[r]
	\end{equation}
	where $\rho = R_{\text{ref}}/r$.
	\item{Recursion formula for $R_m[s,t]$}

	Initial condition: $R_0[s,t] = 1$
	\begin{equation}
		R_m[s,t] = s R_{m-1}[s,t] - t I_{m-1}[s,t]
	\end{equation}
	\item{Recursion formula for $I_m[s,t]$}

	Initial condition: $I_0[s,t] = 0$
	\begin{equation}
		I_m[s,t] = s I_{m-1}[s,t] + t R_{m-1}[s,t]
	\end{equation}
	\item{Recursion formula for $\bar A_{l,m}[u]$}

	From Equation \eqref{eq:der_leg}, it is possible to see that
	\begin{align}
		A_{l,l}[u] &= (2 l -1) A_{l-1,l-1}[u]\label{eq:All}\\
		A_{l,l-1}[u] &= u A_{l,l}[u]\label{eq:All_1}
	\end{align}
\end{itemize}
There are several recursion formulas for computing the derived Legendre functions $A_{l,m}[u]$ for $m < l-1$. The following formula, which is stable for high degrees\cite{lundberg1988}, will be used:
\begin{equation}
	A_{l,m}[u] = \frac{1}{l-m} ((2 l -1) u A_{l-1,m}[u] - (l+m-1) A_{l-2,m}[u])\label{eq:Alm}
\end{equation}
Using Equations \eqref{eq:All}, \eqref{eq:All_1}, and \eqref{eq:Alm}, and the definition $\bar A_{l,m}[u] = N_{l,m} A_{l,m}[u]$, the following recursion formulas can be derived.

Initial condition: $\bar A_{0,0}[u] = 1$

The diagonal terms are computed as
\begin{equation}
	\bar A_{l,l}[u] = \sqrt{\frac{(2 l + 1) (2 - \delta_l)}{(2 l) (2 - \delta_{l-1})}} \bar A_{l-1,l-1}[u]
\end{equation}
The first sub-diagonal terms are then calculated as
\begin{equation}
	\bar A_{l,l-1}[u] = u \sqrt{\frac{(2 l) (2 - \delta_{l-1})}{2 - \delta_l}} \bar A_{l,l}[u]
\end{equation}
Finally, for $l \geq (m+2)$, $N1_{l,m}$ and $N2_{l,m}$ are defined such that
\begin{align}
	N1_{l,m} &= \sqrt{\frac{(2 l + 1) (2 l - 1)}{(l - m) (l + m)}}\\
	N2_{l,m} &= \sqrt{\frac{(l + m - 1) (2 l + 1) (l - m -1)}{(l - m) (l + m) (2 l - 3)}}
\end{align}
and $\bar A_{l,m}[u]$ is computed using
\begin{equation}
	\bar A_{l,m}[u] = u N1_{l,m} \bar A_{l-1,m}[u] - N2_{l,m} \bar A_{l-2,m}[u]
\end{equation}

\subsubsection{Derivatives}
The first-order derivatives of many of the quantities defined above are necessary to compute the gravity field (second-order derivatives are needed if the Hessian is to be computed).
It is easy to show that
\begin{align}
	\frac{\partial D_{l,m}}{\partial s}[s,t] &= m E_{l,m}[s,t]\\
	\frac{\partial D_{l,m}}{\partial t}[s,t] &= m F_{l,m}[s,t]
\end{align}
\begin{equation}
	\frac{d \rho_l}{d r}[r] = -\frac{(l+1)}{R_{\text{ref}}} \rho_{l+1}[r]
\end{equation}
\begin{align}
	\frac{\partial R_m}{\partial s}[s,t] &= m R_{m-1}[s,t]\\
	\frac{\partial R_m}{\partial t}[s,t] &= -m I_{m-1}[s,t]\\
	\frac{\partial I_m}{\partial s}[s,t] &= m I_{m-1}[s,t]\\
	\frac{\partial I_m}{\partial t}[s,t] &= m R_{m-1}[s,t]
\end{align}
\begin{equation}
	\frac{d \bar A_{l,m}}{d u}[u] = \frac{N_{l,m}}{N_{l,m+1}} \bar A_{l,m+1}[u]
\end{equation}
The gravity acceleration is the gradient of the potential. Since a change of variables was performed, the chain rule must be applied. In order to avoid filling up pages with math derivations, only the results are given; with patience, they can be obtained by applying the chain rule and using all the derivatives above. The gravity field can be computed as
\begin{equation}
	\mathbf{\bar g} = (a_1[r,s,t,u] + s \cdot a_4[r,s,t,u]) \mathbf{\hat i} + (a_2[r,s,t,u] + t \cdot a_4[r,s,t,u]) \mathbf{\hat j} + (a_3[r,s,t,u] + u \cdot a_4[r,s,t,u]) \mathbf{\hat k}
\end{equation}
where
\begin{align}
	a_1[r,s,t,u] &= \sum_{l=0}^\infty \sum_{m=0}^l \frac{\rho_{l+1}[r]}{R_{\text{ref}}} m \bar A_{l,m}[u] E_{l,m}[s,t]\\
	a_2[r,s,t,u] &= \sum_{l=0}^\infty \sum_{m=0}^l \frac{\rho_{l+1}[r]}{R_{\text{ref}}} m \bar A_{l,m}[u] F_{l,m}[s,t]\\
	a_3[r,s,t,u] &= \sum_{l=0}^\infty \sum_{m=0}^l \frac{\rho_{l+1}[r]}{R_{\text{ref}}} \frac{N_{l,m}}{N_{l,m+1}} \bar A_{l,m+1}[u] D_{l,m}[s,t]\\
	a_4[r,s,t,u] &= -\sum_{l=0}^\infty \sum_{m=0}^l \frac{\rho_{l+1}[r]}{R_{\text{ref}}} \frac{N_{l,m}}{N_{l+1,m+1}} \bar A_{l+1,m+1}[u] D_{l,m}[s,t]
\end{align}
In order to avoid computing factorials, it is easy to see that
\begin{align}
	\frac{N_{l,m}}{N_{l,m+1}} &= \sqrt{\frac{(l-m) (2-\delta_m)(l+m+1)}{2- \delta_{m+1}}}\\
	\frac{N_{l,m}}{N_{l+1,m+1}} &= \sqrt{\frac{(l+m+2)(l+m+1)(2l+1)(2-\delta_m)}{(2l+3)(2-\delta_{m+1})}}
\end{align}
Using all these expressions, the potential and the gravity field can be computed.

\subsubsection{Simple Gravity}
``Simple gravity'', i.e.\ the gravitational potential and acceleration without taking spherical harmonics into account, is equivalent to using only the $0\sup{th}$ term of the spherical harmonics equations. This is the equation used in basic physics courses and is the one most often used in Basilisk simulations. It assumes the gravitational body to be a point mass:
\begin{equation}
	U(\mathbf{\bar r}) = G \frac{m_\text{Q}}{r} = \frac{\mu}{r}
\end{equation}
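To make the recursions above concrete, the following Python sketch (illustrative only, not Basilisk source code; the function name, argument order, and coefficient array layout are assumptions made for this example) evaluates the normalized potential $U = \sum_l \sum_m \rho_l \bar A_{l,m}[u] D_{l,m}[s,t]$ for given coefficient arrays $\bar C_{l,m}$ and $\bar S_{l,m}$:
\begin{verbatim}
import math

def kron(m):
    # Kronecker delta delta_{m,0}, as used in the normalization factors
    return 1.0 if m == 0 else 0.0

def potential(mu, r_ref, cbar, sbar, x, y, z, l_max):
    # cbar[l][m], sbar[l][m] hold the normalized coefficients Cbar_{l,m}, Sbar_{l,m}
    r = math.sqrt(x*x + y*y + z*z)
    s, t, u = x/r, y/r, z/r

    # rho_l recursion: rho_0 = mu/r, rho_l = (R_ref/r) rho_{l-1}
    rho = [mu/r]
    for l in range(1, l_max + 1):
        rho.append((r_ref/r)*rho[-1])

    # R_m, I_m recursions: R_0 = 1, I_0 = 0
    R, I = [1.0], [0.0]
    for m in range(1, l_max + 1):
        R.append(s*R[m-1] - t*I[m-1])
        I.append(s*I[m-1] + t*R[m-1])

    # Normalized derived Legendre functions Abar_{l,m}
    A = [[0.0]*(l_max + 1) for _ in range(l_max + 1)]
    A[0][0] = 1.0
    for l in range(1, l_max + 1):
        # diagonal and first sub-diagonal terms
        A[l][l] = math.sqrt((2*l+1)*(2-kron(l))/((2*l)*(2-kron(l-1)))) * A[l-1][l-1]
        A[l][l-1] = u*math.sqrt((2*l)*(2-kron(l-1))/(2-kron(l))) * A[l][l]
    for m in range(0, l_max + 1):
        for l in range(m + 2, l_max + 1):
            n1 = math.sqrt((2*l+1)*(2*l-1)/((l-m)*(l+m)))
            n2 = math.sqrt((l+m-1)*(2*l+1)*(l-m-1)/((l-m)*(l+m)*(2*l-3)))
            A[l][m] = u*n1*A[l-1][m] - n2*A[l-2][m]

    # U = sum_l sum_m rho_l * Abar_{l,m}(u) * D_{l,m}(s,t)
    U = 0.0
    for l in range(l_max + 1):
        for m in range(l + 1):
            D = cbar[l][m]*R[m] + sbar[l][m]*I[m]
            U += rho[l]*A[l][m]*D
    return U
\end{verbatim}
For $l_{\max} = 0$ with $\bar C_{0,0} = 1$ this reduces to $\mu/r$, the simple gravity case above; the acceleration follows the same pattern by accumulating $a_1,\dots,a_4$ with $\rho_{l+1}[r]/R_{\text{ref}}$ in place of $\rho_l[r]$.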
\documentclass[11pt]{article} \setlength{\textwidth}{\paperwidth} \addtolength{\textwidth}{-2in} \setlength{\textheight}{\paperheight} \addtolength{\textheight}{-2.5in} \setlength{\evensidemargin}{0in} \setlength{\oddsidemargin}{\evensidemargin} \setlength{\headsep}{0.5in} \addtolength{\headsep}{-\headheight} \setlength{\topmargin}{.25in} \addtolength{\topmargin}{-\headheight} \addtolength{\topmargin}{-\headsep} \usepackage{amsmath} \usepackage{txfonts} %\normalfont %\usepackage[T1]{fontenc} %\usepackage{textcomp} \let\orgnonumber=\nonumber\usepackage{mathenv}\let\nonumb=\nonumber\let\nonumber=\orgnonumber \allowdisplaybreaks \newcommand{\bs}{\symbol{'134}} \def\Ent#1{\csname #1\endcsname & \texttt{\bs #1}} \def\EEnt#1#2{\csname #1\endcsname & \texttt{\bs #1},\,\texttt{\bs #2}} \makeatletter \newcount\curchar \newcount\currow \newcount\curcol \newdimen\indexwd \newdimen\tempcellwd \setbox0\hbox{\ttfamily0\kern.2em} \indexwd=\wd0 \def\ident#1{#1} \def\hexnumber#1{\ifcase\expandafter\ident\expandafter{\number#1} 0\or 1\or 2\or 3\or 4\or 5\or 6\or 7\or 8\or 9\or A\or B\or C\or D\or E\or F\else ?\fi} \def\rownumber{\ttfamily\hexnumber\currow} \def\colnumber{\ttfamily\hexnumber\curcol \global\advance\curcol 1 } \def\charnumber{\setbox0=\hbox{\char\curchar}% \ifdim\ht0>7.5pt\reposition \else\ifdim\dp0>2.5pt\reposition\fi\fi \box0 \global\advance\curchar1 } \def\reposition{\setbox0=\hbox{$\vcenter{\kern1.5pt\box0\kern1.5pt}$}} \def\dochart#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\dochartA#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrowX\setrow\setrowX % % \hline % \setrow\setrowX\setrow\setrowX % \hline % \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\dochartB#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow%\setrow\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% 
\fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\dochartC#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\dochartD#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\dochartE#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrowX\setrow\setrowX\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% \fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\colnumbers{\hbox to\hsize{\global\curcol 0 \def\1{\hbox to\cellwd{\curcol}{\hfil\colnumber\hfil}}% \kern\indexwd\hfil\hfil \1\1\1\1\hfil\hfil \1\1\1\1\hfil\hfil \1\1\1\1\hfil\hfil \1\1\1\1\hfil\hfil \kern\indexwd}% } \def\dochartF#1{% \begingroup \global\curchar=0 \global\currow=0 \global\curcol=0 \def\hline{\kern2pt\hrule\kern3pt }% \setbox0\vbox{#1% \def\0{\hbox to\cellwd{\curcol}{\hss\charnumber\hss}}% \colnumbers \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow\setrow \hline \setrow\setrow\setrow \hline \colnumbers }% \vbox{% \hbox to\hsize{\kern\indexwd \def\fullrule{\hfil\vrule height\ht0 depth\dp0\hfil}% \fullrule\kern\cellwd{0}\kern\cellwd{1}\kern\cellwd{2}\kern\cellwd{3}% \fullrule\kern\cellwd{4}\kern\cellwd{5}\kern\cellwd{6}\kern\cellwd{7}% 
\fullrule\kern\cellwd{8}\kern\cellwd{9}\kern\cellwd{10}\kern\cellwd{11}% \fullrule\kern\cellwd{12}\kern\cellwd{13}\kern\cellwd{14}\kern\cellwd{15}% \fullrule\kern\indexwd}% \kern-\ht0 \kern-\dp0 \unvbox0}% \endgroup } \def\setrow{\hbox to\hsize{% \hbox to\indexwd{\hfil\rownumber\kern.2em}\hfil\hfil \0\0\0\0\hfil\hfil \0\0\0\0\hfil\hfil \0\0\0\0\hfil\hfil \0\0\0\0\hfil\hfil \hbox to\indexwd{\ttfamily\kern.2em \rownumber\hfil}}% \global\advance\currow 1 }% \def\setrowX{\global\advance\curchar16\global\advance\currow 1\relax} \def\cellwd#1{20pt}% initialize \def\measurecolwidths#1{% \tempcellwd\hsize \advance\tempcellwd-2\indexwd \advance\tempcellwd -12pt \divide\tempcellwd 16 \xdef\cellwd##1{\the\tempcellwd}% } \def \table #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochart{#1}% }} \def \tableA #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartA{#1}% }} \def \tableB #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartB{#1}% }} \def \tableC #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartC{#1}% }} \def \tableD #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartD{#1}% }} \def \tableE #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartE{#1}% }} \def \tableF #1#2#3{\par\penalty-200 \bigskip \font #1=#2 \relax \vbox{\hsize=29pc \measurecolwidths{#1}% \centerline{#3 -- {\tt#2}}% \medskip \dochartF{#1}% }} \makeatother \begin{document} \title{The \texttt{TX} Fonts% \thanks{Special thanks to those who reported problems of \texttt{TX} fonts and provided suggestions!}} \author{Young Ryu} \date{December 15, 2000} \maketitle \tableofcontents \clearpage \section{Introduction} The \texttt{TX} fonts consist of \begin{enumerate}\itemsep=0pt \item virtual text roman fonts using Adobe Times (or URW NimbusRomNo9L) with some modified and additional text symbols in OT1, T1, TS1, and LY1 encodings \item \textsf{virtual text sans serif fonts using Adobe Helvetica (or URW NimbusSanL) with additional text symbols in OT1, T1, TS1, and LY1 encodings} \item \texttt{monospaced typewriter fonts in OT1, T1, TS1, and LY1 encodings} \item math alphabets using Adobe Times (or URW NimbusRomNo9L) with modified metrics \item math fonts of all symbols corresponding to those of Computer Modern math fonts (CMSY, CMMI, CMEX, and Greek letters of CMR) \item math fonts of all symbols corresponding to those of AMS fonts (MSAM and MSBM) \item additional math fonts of various symbols \end{enumerate} % All fonts are in the Type 1 format (in \texttt{afm} and \texttt{pfb} files). Necessary \texttt{tfm} and \texttt{vf} files together with \LaTeXe\ package files and font map files for \texttt{dvips} are provided. \begin{bfseries}%\itshape The \texttt{TX} fonts and related files are distributed without any guaranty or warranty. I do not assume responsibility for any actual or possible damages or losses, directly or indirectly caused by the distributed files. \end{bfseries} The \texttt{TX} fonts are distributed under the GNU public license (GPL)\@. 
\section{Changes} \begin{description} \item[1.0] (October 25, 2000) 1st public release \item[2.0] (November 2, 2000) \begin{itemize} \item An encoding error in txi and txbi (`\textdollar' \texttt{"24}) is fixed. \item Mistakes in symbol declarations for `\AA' and `\aa' in \texttt{txfonts.sty} are fixed. \item $\lambda$ (\texttt{"15} of txmi and txbmi), $\lambdaslash$ (\texttt{"6E} of txsyc and txbsyc), and $\lambdabar$ (\texttt{"6F} of txsyc and txbsyc) are updated to be more slanted. \item More symbols added in txexa and txbexa (\texttt{"29}--\texttt{"2E}) and in txsyc and txbsyc (\texttt{"80}--\texttt{"94}). \item Some fine tuning of a few glyphs. \item Math italic font metrics are improved. \item Text font metrics are improved. \item T1 and TS1 encodings are supported. (Not all TS1 encoding glyphs are implemented.) \end{itemize} \item[2.1] (November 18, 2000) \begin{itemize} \item Complete implementation of TS1 encoding fonts. \item Various improvements of font metrics and font encodings. For instance, the bogus entry of char \texttt{'27} in T1 encoding virtual font files are removed. (This bogus entry caused ``warning char 23 replaced by \bs.notdef'' with PDF\TeX/PDF\LaTeX.) \item Helvetica-based TX sans serif fonts in OT1, T1, and TS1 encodings. \item Monospaced TX typewriter fonts, which are thicker than Courier (and thus may look better with Times), in OT1, T1, and TS1 encodings. \end{itemize} \item[2.2] (November 22, 2000) \begin{itemize} \item LY1 encoding support \item Monospaced typewriter fonts redone (Uppercase letters are tall enough to match with Times.) \item Various glyph and metric improvement \end{itemize} \item[2.3] (December 6, 2000) \begin{itemize} \item Math extension fonts (radical symbols) updated \item Alternative blackboard bold letters ($\varmathbb{A}\ldots\varmathbb{Z}$ and $\varBbbk$) introduced. (Enter \verb|$\varmathbb{...}$| and \verb|$\varBbk$| to get them.) \item More large operators symbols \item Now \verb|\lbag| ($\lbag$) and \verb|\rbag| ($\rbag$) are delimiters. \item Alternative math alphabets $\varg$ and $\vary$ added \end{itemize} \item[2.4] (December 12, 2000) \begin{itemize} \item An encoding mistake in text companion typewriter fonts fixed \item Bugs in \LaTeX\ input files fixed \end{itemize} \item[3.0] (December 14, 2000) \begin{itemize} \item Minor problem fixes. \item Manual fine-tuning of Type 1 font files \end{itemize} \item[3.1] (December 15, 2000) \begin{itemize} \item Alternative math alphabets $\varv$ and $\varw$ added \item Hopefully, this is the final release \ldots \end{itemize} \end{description} \section{A Problem: \texttt{DVIPS} Partial Font Downloading} It was reported that when \texttt{TX} fonts are partially downloaded with \texttt{dvips}, some HP Laserprinters (with Postscript) cannot print documents. To resolve this problem, turn the partial font downloading off. See the \texttt{dvips} document for various ways to turn off partial font downloading. \textbf{\itshape Even though one does not observe such a problem, I would like to strongly recommend to turn off \texttt{dvips} partial font downloading.} %I think the \texttt{dvips} partial font downloading %mechanism appears to have some problems. 
For instance, %when Adobe Times fonts are set to be downloaded, e.g., %\begin{verbatim} % ptmr8r Times-Roman "TeXBase1Encoding ReEncodeFont" <8r.enc <tir_____.pfb %\end{verbatim} %\TeX ing \texttt{testfont.tex} on \texttt{ptmr8r} %and \texttt{dvips}ing \texttt{testfont.dvi} %with partial font download on give %\begin{verbatim} % WARNING: Not all chars found %\end{verbatim} %This specific warning seems to be harmless. But, %in my opinion, this should not happen. \section{Installation} Put all files in \texttt{afm}, \texttt{tfm}, \texttt{vf}, and \texttt{pfb} files in proper locations of your \TeX\ system. For Mik\TeX, they may go to \begin{verbatim} \localtexmf\fonts\afm\txr\ \localtexmf\fonts\tfm\txr\ \localtexmf\fonts\vf\txr\ \localtexmf\fonts\type1\txr\ \end{verbatim} All files of the \texttt{input} directory must be placed where \LaTeX\ finds its package files. For Mik\TeX, they may go to \begin{verbatim} \localtexmf\tex\latex\txr\ \end{verbatim} Put the \texttt{txr.map}, \texttt{txr1.map}, \texttt{txr2.map}, and \texttt{tx8r.enc}% \footnote{The \texttt{tx8r.enc} file is identical to \texttt{8r.enc}. I included \texttt{tx8r.enc} because (1)~some \TeX\ installation might not have \texttt{8r.enc} and (2)~including \texttt{8r.enc} would result in multiple copies of \texttt{8r.enc} for \TeX\ systems that already have it. \texttt{xdvi} users may do global search-and-replacement of \texttt{tx8r.enc} by \texttt{8r.enc} in the \texttt{map} files.} files of the \texttt{dvips} directory in a proper place that \texttt{dvips} refers to. For Mik\TeX, they may go to \begin{verbatim} \localtexmf\dvips\config\ \end{verbatim} Also add the reference to \texttt{txr.map} in the \texttt{dvips} configuration file (\texttt{config.ps}) \begin{verbatim} . . . % Configuration of postscript type 1 fonts: p psfonts.map p +txr.map . . . \end{verbatim} and in the PDF\TeX\ configuration file (\texttt{pdftex.cfg}) \begin{verbatim} . . . % pdftex.map is set up by texmf/dvips/config/updmap map pdftex.map map +txr.map . . . \end{verbatim} (The \texttt{txr.map} file has only named references to the Adobe Times fonts; the \texttt{txr1.map} file makes \texttt{dvips} load Adobe Times font files; and the \texttt{txr2.map} file makes \texttt{dvips} load URW NimbusRomNo9L font files.) For \texttt{dvipdfm} users, \texttt{txr3.map} (by Dan Luecking) is included. Read comments in the beginning of the file. \section{Using the \texttt{TX} Fonts with \LaTeX} It is as simple as \begin{verbatim} \documentclass{article} \usepackage{txfonts} %\normalfont % Just in case ... %\usepackage[T1]{fontenc} % To use T1 encoding fonts %\usepackage[LY1]{fontenc} % To use LY1 encoding fonts %\usepackage{textcomp} % To use text companion fonts \begin{document} This is a very short article. \end{document} \end{verbatim} The standard \LaTeX\ distribution does not include files supporting the LY1 encoding. One needs at least \texttt{ly1enc.def}, which is available from both CTAN and Y\&Y (\texttt{www.yandy.com}). At the time this document was written, CTAN had an old version (1997/03/21 v0.3); \texttt{ly1enc.def} available from Y\&Y's downloads site was dated on 1998/04/21 v0.4. \section{Additional Symbols in the \texttt{TX} Math Fonts} \emph{All} CM symbols are included in the \texttt{TX} math fonts. In addition, the \texttt{TX} math fonts provide or modify the following symbols, including all of AMS and most of \LaTeX\ symbols. 
\subsubsection*{Binary Operator Symbols} \begin{eqnarray*}[c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l] \Ent{medcirc}& \Ent{medbullet}& \Ent{invamp}\\ \Ent{circledwedge}& \Ent{circledvee}& \Ent{circledbar}\\ \Ent{circledbslash}& \Ent{nplus}& \Ent{boxast}\\ \Ent{boxbslash}& \Ent{boxbar}& \Ent{boxslash}\\ \Ent{Wr}& \Ent{sqcupplus}& \Ent{sqcapplus}\\ \Ent{rhd}& \Ent{lhd}& \Ent{unrhd}\\ \Ent{unlhd} \end{eqnarray*} \subsubsection*{Binary Relation Symbols} \begin{eqnarray*}[c@{\enskip}l@{\quad}c@{\enskip}l@{\quad}c@{\enskip}l] \Ent{mappedfrom}& \Ent{longmappedfrom}& \Ent{Mapsto}\\ \Ent{Longmapsto}& \Ent{Mappedfrom}& \Ent{Longmappedfrom}\\ \Ent{mmapsto}& \Ent{longmmapsto}& \Ent{mmappedfrom}\\ \Ent{longmmappedfrom}& \Ent{Mmapsto}& \Ent{Longmmapsto}\\ \Ent{Mmappedfrom}& \Ent{Longmmappedfrom}& \Ent{varparallel}\\ \Ent{varparallelinv}& \Ent{nvarparallel}& \Ent{nvarparallelinv}\\ \Ent{colonapprox}& \Ent{colonsim}& \Ent{Colonapprox}\\ \Ent{Colonsim}& \Ent{doteq}& \Ent{multimapinv}\\ \Ent{multimapboth}& \Ent{multimapdot}& \Ent{multimapdotinv}\\ \Ent{multimapdotboth}& \Ent{multimapdotbothA}& \Ent{multimapdotbothB}\\ \Ent{VDash}& \Ent{VvDash}& \Ent{cong}\\ \Ent{preceqq}& \Ent{succeqq}& \Ent{nprecsim}\\ \Ent{nsuccsim}& \Ent{nlesssim}& \Ent{ngtrsim}\\ \Ent{nlessapprox}& \Ent{ngtrapprox}& \Ent{npreccurlyeq}\\ \Ent{nsucccurlyeq}& \Ent{ngtrless}& \Ent{nlessgtr}\\ \Ent{nbumpeq}& \Ent{nBumpeq}& \Ent{nbacksim}\\ \Ent{nbacksimeq}& \EEnt{neq}{ne}& \Ent{nasymp}\\ \Ent{nequiv}& \Ent{nsim}& \Ent{napprox}\\ \Ent{nsubset}& \Ent{nsupset}& \Ent{nll}\\ \Ent{ngg}& \Ent{nthickapprox}& \Ent{napproxeq}\\ \Ent{nprecapprox}& \Ent{nsuccapprox}& \Ent{npreceqq}\\ \Ent{nsucceqq}& \Ent{nsimeq}& \Ent{notin}\\ \EEnt{notni}{notowns}& \Ent{nSubset}& \Ent{nSupset}\\ \Ent{nsqsubseteq}& \Ent{nsqsupseteq}& \Ent{coloneqq}\\ \Ent{eqqcolon}& \Ent{coloneq}& \Ent{eqcolon}\\ \Ent{Coloneqq}& \Ent{Eqqcolon}& \Ent{Coloneq}\\ \Ent{Eqcolon}& \Ent{strictif}& \Ent{strictfi}\\ \Ent{strictiff}& \Ent{circledless}& \Ent{circledgtr}\\ \Ent{lJoin}& \Ent{rJoin}& \EEnt{Join}{lrJoin}\\ \Ent{openJoin}& \Ent{lrtimes}& \Ent{opentimes}\\ \Ent{nsqsubset}& \Ent{nsqsupset}& \Ent{dashleftarrow}\\ %\EEnt{dashrightarrow}{dasharrow}& \Ent{dashrightarrow}& \Ent{dashleftrightarrow}& \Ent{leftsquigarrow}\\ \Ent{ntwoheadrightarrow}& \Ent{ntwoheadleftarrow}& \Ent{Nearrow}\\ \Ent{Searrow}& \Ent{Nwarrow}& \Ent{Swarrow}\\ \Ent{Perp}& \Ent{leadstoext}& \Ent{leadsto}\\ \Ent{boxright}& \Ent{boxleft}& \Ent{boxdotright}\\ \Ent{boxdotleft}& \Ent{Diamondright}& \Ent{Diamondleft}\\ \Ent{Diamonddotright}& \Ent{Diamonddotleft}& \Ent{boxRight}\\ \Ent{boxLeft}& \Ent{boxdotRight}& \Ent{boxdotLeft}\\ \Ent{DiamondRight}& \Ent{DiamondLeft}& \Ent{DiamonddotRight}\\ \Ent{DiamonddotLeft}& \Ent{circleright}& \Ent{circleleft}\\ \Ent{circleddotright}& \Ent{circleddotleft}& \Ent{multimapbothvert}\\ \Ent{multimapdotbothvert}& \Ent{multimapdotbothAvert}& \Ent{multimapdotbothBvert} \end{eqnarray*} \subsubsection*{Ordinary Symbols} \begin{eqnarray*}[c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l] \Ent{alphaup}& \Ent{betaup}& \Ent{gammaup}\\ \Ent{deltaup}& \Ent{epsilonup}& \Ent{varepsilonup}\\ \Ent{zetaup}& \Ent{etaup}& \Ent{thetaup}\\ \Ent{varthetaup}& \Ent{iotaup}& \Ent{kappaup}\\ \Ent{lambdaup}& \Ent{muup}& \Ent{nuup}\\ \Ent{xiup}& \Ent{piup}& \Ent{varpiup}\\ \Ent{rhoup}& \Ent{varrhoup}& \Ent{sigmaup}\\ \Ent{varsigmaup}& \Ent{tauup}& \Ent{upsilonup}\\ \Ent{phiup}& \Ent{varphiup}& \Ent{chiup}\\ \Ent{psiup}& \Ent{omegaup}& \Ent{Diamond}\\ 
\Ent{Diamonddot}& \Ent{Diamondblack}& \Ent{lambdaslash}\\ \Ent{lambdabar}& \Ent{varclubsuit}& \Ent{vardiamondsuit}\\ \Ent{varheartsuit}& \Ent{varspadesuit}& \Ent{Top}\\ \Ent{Bot} \end{eqnarray*} \subsubsection*{Math Alphabets} \begin{eqnarray*}[c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l] \Ent{varg} & \Ent{varv} & \Ent{varw} & \Ent{vary} \end{eqnarray*} In order to replace math alphabets $g$, $v$, $w$, and $y$ by these alternatives, use the \texttt{varg} option with the \texttt{txfonts} package: \begin{verbatim} \usepackage[varg]{txfonts} \end{verbatim} Then, \verb|$g$|, \verb|$v$|, \verb|$w$|, and \verb|$y$| will produce these $\varg$, $\varv$, $\varw$, and~$\vary$ (instead of $g$, $v$, $w$, and~$y$). % Notice that $\varv$ (the alternative \textit{v}) is more clearly distingiushed from $\nu$ (the lowercase Greek nu). However, this is not without cost: it looks similar to $\upsilon$ (the lowercase Greek upsilon). %\footnote{A comment on Times New Roman fonts: %the italic \textit{v} of Times New Roman (both Type 1 and TrueType versions), %but not that of Times, is very badly designed. The starting serif at the left-top %corner of the letter is very different from other letters' corresponding portion. %However, that of Times New Roman bold italic is consistent with others. %Further, in the TrueType version of Times New Roman italic and bold italic, %the lowercase Greek $\nu$ (nu) is exactly same as \textit{v} (i.e., linked to \textit{v}). %For the mathematical typesetting purpose, this is undesirable. %In \texttt{TX} fonts, the lowercase Greek $\nu$ (nu) is not identical to %\textit{v}, but very similar. The alternative $\varv$ is provided to %be more clearly distingiushed from the lowercase Greek $\nu$ (nu). %The alternative $\varw$ is provided to ensure consistency.} \subsubsection*{Large Operator Symbols} \begin{eqnarray*}[c@{\enskip}l@{\quad}c@{\enskip}l@{\quad}c@{\enskip}l] \Ent{bignplus}& \Ent{bigsqcupplus}& \Ent{bigsqcapplus}\\ \Ent{bigsqcap}& \Ent{bigsqcap}& \Ent{varprod}\\ \Ent{oiint}& \Ent{oiiint}& \Ent{ointctrclockwise}\\ \Ent{ointclockwise}& \Ent{varointctrclockwise}& \Ent{varointclockwise}\\ \Ent{sqint}& \Ent{sqiintop}& \Ent{sqiiintop}\\ \Ent{fint}& \Ent{iint}& \Ent{iiint}\\ \Ent{iiiint}& \Ent{idotsint}& \Ent{oiintctrclockwise}\\ \Ent{oiintclockwise}& \Ent{varoiintctrclockwise}& \Ent{varoiintclockwise}\\ \Ent{oiiintctrclockwise}& \Ent{oiiintclockwise}& \Ent{varoiiintctrclockwise}\\ \Ent{varoiiintclockwise}& \end{eqnarray*} \subsubsection*{Delimiters} \begin{eqnarray*}[c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l] \Big\llbracket&\texttt{\bs llbracket}& \Big\rrbracket&\texttt{\bs rrbracket}& \Big\lbag&\texttt{\bs lbag}& \Big\rbag&\texttt{\bs rbag} \end{eqnarray*} %\subsubsection*{Parentheses} %\begin{eqnarray*}[c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l@{\qquad\qquad\qquad}c@{\enskip}l] %\Ent{lbag}& %\Ent{rbag}& %\Ent{Lbag}& %\Ent{Rbag} %\end{eqnarray*} \subsubsection*{Miscellaneous} \verb|$\mathfrak{...}$| produces $\mathfrak{A} \ldots \mathfrak{Z}$ and $\mathfrak{a} \ldots \mathfrak{z}$. \verb|$\varmathbb{...}$| produces $\varmathbb{A} \ldots \varmathbb{Z}$ (lowercase letters only); \verb|\varBbbk| produces $\varBbbk$. Note that the \AmS\ math font command \verb|$\mathbb{...}$| produces $\mathbb{A} \ldots \mathbb{Z}$; \verb|\Bbbk| produces $\Bbbk$. 
If you find the alternative blackboard letters are better, then do
\begin{verbatim}
\let\mathbb=\varmathbb
\let\Bbbk=\varBbbk
\end{verbatim}

\section{Remarks}

\subsection{Some Font Design Issues}
The Adobe Times fonts are thicker than the CM fonts. Designing math fonts for Times based on the rule thickness of Times `$=$', `$-$', `$+$', `/', `$<$', etc.\ would result in too thick math symbols, in my opinion.\footnote{I have designed many math symbols (corresponding to those in CMMI and CMSY) based on the rule thickness of original Times `$=$', etc. At that time, I noticed that the symbols, especially some bold math symbols, are extremely thick. Perhaps, in the future, I will complete all math symbols based on the rule thickness of original Times `$=$', etc.\ and release them in public, so that users will judge whether they are acceptable or not~\ldots.} In the \texttt{TX} fonts, these glyphs are thinner than those of the original Times fonts. That is, the rule thickness of these glyphs is around 85\% of that of the Times fonts, but still thicker than that of the CM fonts.

For negated relation symbols, the CM fonts compose relation symbols with the negation slash (\texttt{"36} in CMSY). Even though the CM fonts were very carefully designed to look reasonable when negated relation symbols are composed (except `$\notin$' \verb|\notin|, which is composed of `$\in$' and the normal slash `$/$'), the AMS font set includes many negated relation symbols, mainly because the vertical placement and height\slash depth of the negation slash are not optimal when composed with certain relation symbols, I guess. The \texttt{TX} fonts include the negation slash symbol (\texttt{"36} in txsy), which could be composed with relation symbols to give reasonable-looking negated relation symbols. I believe, however, that explicitly designed negated relation symbols look better than composed relation symbols. Thus, in addition to negated relation symbols matching those of the AMS fonts, many negated symbols such as `$\neq$' are introduced in the \texttt{TX} fonts. Further, in order to maintain editing compatibility with vanilla \LaTeXe\ typesetting, \verb|\not| is redefined in \texttt{txfonts.sty} so that when \verb|\not\XYZ| is processed, if \verb|\notXYZ| or \verb|\nXYZ| is defined, it will be used in place of \verb|\not\XYZ|; otherwise, \verb|\XYZ| is composed with the negation slash. For instance, `$\nprecsim$' is available as \verb|\nprecsim| in the \texttt{TX} fonts. Thus, if \verb|\not\precsim| is typed in the document, the \verb|\nprecsim| symbol, instead of \verb|\precsim| composed with the negation slash, is printed.

\subsection{Times vs.\ Times New Roman}
The recent version of Acrobat is shipped with Times New Roman instead of Times fonts. Times New Roman fonts' italic letters (e.g., `\textit{A}') are substantially different from those of Times fonts. Thus, when documents with the \texttt{TX} fonts are processed with Acrobat, accents may not be correctly placed. If this is a noticeable problem, use the NimbusRomNo9L fonts (included in the Ghostscript distribution) with the \texttt{TX} fonts through \texttt{txr2.map}.

\subsection{PDF\TeX/PDF\LaTeX\ and Standard Postscript Fonts}
PDF\TeX/PDF\LaTeX\ does not handle slanting of fonts not embedded in the document. Note that, in the standard setup, PDF\TeX/PDF\LaTeX\ does not embed the 14 standard Postscript fonts (Times $\times$~4, Helvetica $\times$~4, Courier $\times$~4, Symbol, and ZapfDingbats).
As a result, PDF\TeX/PDF\LaTeX\ issues a warning (and may try to generate and use bitmapped fonts for these fonts). If this is not desirable, a solution is to use the URW NimbusRomNo9L and NimbusSanL fonts, which are clones of the Adobe Times and Helvetica fonts. That is, in the PDF\TeX/PDF\LaTeX\ configuration file (\texttt{pdftex.cfg}), put \texttt{txr2.map} instead of \texttt{txr.map}
\begin{verbatim}
. . .
% pdftex.map is set up by texmf/dvips/config/updmap
map pdftex.map
map +txr2.map
. . .
\end{verbatim}
Be sure to properly install the URW NimbusRomNo9L and NimbusSanL fonts (which are included in the Ghostscript distribution) in your texmf tree.

If you have the Adobe Times and Helvetica font files, and want to embed them in your PDF document file, do the following trick to fool PDF\TeX/PDF\LaTeX.
\begin{enumerate}\itemsep=0pt%\parskip=0pt
\item Copy \texttt{txr1.map} in the dvips configuration directory to \texttt{txrpdf.map} in the PDF\TeX/PDF\LaTeX\ configuration directory.
\item Edit txrpdf.map and have
\begin{small}
\begin{verbatim}
rtxptmb "TeXBase1Encoding ReEncodeFont" <tx8r.enc <tib_____.pfb
rtxptmbo ".167 SlantFont TeXBase1Encoding ReEncodeFont" <tx8r.enc <tib_____.pfb
rtxptmbi "TeXBase1Encoding ReEncodeFont" <tx8r.enc <tibi____.pfb
rtxptmr "TeXBase1Encoding ReEncodeFont" <tx8r.enc <tir_____.pfb
rtxptmro ".167 SlantFont TeXBase1Encoding ReEncodeFont" <tx8r.enc <tir_____.pfb
rtxptmri "TeXBase1Encoding ReEncodeFont" <tx8r.enc <tii_____.pfb
. . .
\end{verbatim}
\end{small}
instead of
\begin{small}
\begin{verbatim}
rtxptmb Times-Bold "TeXBase1Encoding ReEncodeFont" <tx8r.enc <tib_____.pfb
. . .
. . .
\end{verbatim}
\end{small}
Note that the actual standard Postscript font names such as \texttt{"Times-Bold"} are removed. As a result, PDF\TeX/PDF\LaTeX\ will embed these standard Postscript fonts and there will be no warning for slanting them.
\item Put \texttt{txrpdf.map} in the PDF\TeX/PDF\LaTeX\ configuration file (\texttt{pdftex.cfg}).
\begin{small}
\begin{verbatim}
. . .
% pdftex.map is set up by texmf/dvips/config/updmap
map pdftex.map
map +txrpdf.map
. . .
\end{verbatim}
\end{small}
\end{enumerate}

\subsection{Glyph Hinting}
The hinting of the \texttt{TX} fonts is far from ideal. As a result, when documents with the \texttt{TX} fonts are \emph{viewed} with Gsview (or Ghostview), you might notice some display quality problems. When they are \emph{viewed} with Acrobat, they look much better. However, when they are \emph{printed} on laser printers, there will be no quality problem. (Note that hinting is meant to improve display quality on low resolution devices such as display screens.)

\subsection{Glyphs in Low Positions}
It is known that Acrobat often does not properly handle CM font glyphs placed between \texttt{"00} and \texttt{"1F}. Thus, most Type 1 versions of CM fonts publicly available have these glyphs in higher positions above \texttt{"7F}. When the \texttt{-G} flag is used with \texttt{dvips}, those glyphs in low positions are shifted to higher positions. The \texttt{TX} text fonts have glyphs in the low positions between \texttt{"00} and \texttt{"1F}. As of now, these glyphs are not available in higher positions above \texttt{"7F}. Thus, when running \texttt{dvips}, do not use the \texttt{-G} flag (or remove \texttt{G} in the \texttt{dvips} configuration file). Especially, do not use \texttt{config.pdf}. In my computer systems, Acrobat correctly handles glyphs in low positions.
However, if this known Acrobat problem occurs in other computer systems, I will modify the \texttt{TX} fonts so that glyphs in low positions are also available in higher positions. \section{Font Charts} The original Computer Modern (CM) text fonts (aka \TeX\ text fonts) have the OT1 encoding. The OT1 \texttt{TX} text fonts follow the CM fonts' encoding as much as possible, but have some variations and additions: \begin{itemize}\parskip=0pt\itemsep=0pt \item The position \texttt{"24} of text italic fonts has the dollar symbol (\textit{\textdollar}), not the sterling symbol (\textit{\textsterling}). \item The uppercase and lowercase lslash (\L, \l) and aring (\AA, \aa) letters are added. \item The cent (\ifx\textcentoldstyle\undefined\textcent\else\textcentoldstyle\fi) and sterling (\textsterling) symbols are added. \end{itemize} The original CM text fonts have somewhat different encodings in \textsc{cap \& small cap} and \texttt{typewriter} fonts. \texttt{TX} fonts corresponding to them have the original CM encodings, not the strict OT1 encoding. The T1 encoding text fonts (known as EC fonts) are designed to replace the CM text fonts in the OT1 encoding. The LY1 encoding is another text font encoding, which is based on both \TeX\ and ANSI encodings. Both T1 and LY1 encoding fonts are especially useful to typeset European languages with proper hyphenation. The TS1 encoding text companion fonts (known as TC fonts) have additional text symbols. All corresponding \texttt{TX} fonts are implemented. The Computer Modern (CM) math fonts (aka \TeX\ math fonts) consist of three fonts: math italic (CMMI), math symbols (CMSY), and math extension (CMEX). The American Mathematical Society provided two additional math symbol fonts (MSAM and MSBM). The \texttt{TX} math fonts include those exactly corresponding to them. In addition, the \texttt{TX} math fonts include math italic A, math symbols C, and math extension A fonts. \subsection{OT1 (CM) Encoding Text Fonts} These fonts' encodings are identical to those of corresponding CM fonts, except 6~additional glyphs. \begin{center} \centering \leavevmode\hbox{\tableA \fonttab{txr}{Text Roman Upright}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txi}{\textit{Text Roman Italic}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txsl}{\textsl{Text Roman Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txsc}{\textsc{Text Roman Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txss}{\textsf{Text Sans Serif Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txsssl}{\textsf{\slshape Text Sans Serif Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txsssc}{\textsf{\scshape Text Sans Serif Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txtt}{\texttt{Text Typewriter Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txttsl}{\texttt{\slshape Text Typewriter Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableA \fonttab{txttsc}{\texttt{\scshape Text Typewriter Cap \& Small Cap}}} \end{center} \subsection{T1 (EC) Cork Encoding Text Fonts} These fonts' encodings are identical to those of corresponding EC fonts. 
\begin{center} \centering \leavevmode\hbox{\tableD \fonttab{t1xr}{Text Roman Upright}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xi}{\textit{Text Roman Italic}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xsl}{\textsl{Text Roman Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xsc}{\textsc{Text Roman Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xss}{\textsf{Text Sans Serif Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xsssl}{\textsf{\slshape Text Sans Serif Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xsssc}{\textsf{\scshape Text Sans Serif Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xtt}{\texttt{Text Typewriter Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xttsl}{\texttt{\slshape Text Typewriter Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{t1xttsc}{\texttt{\scshape Text Typewriter Cap \& Small Cap}}} \end{center} \subsection{LY1 \TeX-and-ANSI Encoding Text Fonts} \begin{center} \centering \leavevmode\hbox{\tableD \fonttab{tyxr}{Text Roman Upright}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxi}{\textit{Text Roman Italic}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxsl}{\textsl{Text Roman Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxsc}{\textsc{Text Roman Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxss}{\textsf{Text Sans Serif Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxsssl}{\textsf{\slshape Text Sans Serif Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxsssc}{\textsf{\scshape Text Sans Serif Cap \& Small Cap}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxtt}{\texttt{Text Typewriter Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxttsl}{\texttt{\slshape Text Typewriter Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableD \fonttab{tyxttsc}{\texttt{\scshape Text Typewriter Cap \& Small Cap}}} \end{center} \subsection{TS1 (TC) Encoding Text Companion Fonts} These fonts' encodings are identical to those of corresponding TC fonts. \begin{center} \centering \leavevmode\hbox{\tableE \fonttab{tcxr}{Text Companion Roman Upright}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxi}{\textit{Text Companion Roman Italic}}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxsl}{\textsl{Text Companion Roman Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxss}{\textsf{Text Companion Sans Serif Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxsssl}{\textsf{\slshape Text Companion Sans Serif Slanted}}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxtt}{\texttt{Text Companion Typewriter Upright}}} \bigskip\bigskip \leavevmode\hbox{\tableE \fonttab{tcxttsl}{\texttt{\slshape Text Companion Typewriter Slanted}}} \end{center} \subsection{Math Fonts} These fonts' encodings are identical to those of corresponding CM and AMS Math fonts. Additional math fonts are provided. 
\begin{center} \centering \leavevmode\hbox{\table \fonttab{txmi}{Math Italic (Corresponding to CMMI)}} \bigskip\bigskip \leavevmode\hbox{\table \fonttab{txmi1}{Math Italic (Corresponding to CMMI) used with the \texttt{varg} option}} \bigskip\bigskip \leavevmode\hbox{\tableF \fonttab{txmia}{Math Italic A}} \bigskip\bigskip \leavevmode\hbox{\table \fonttab{txsy}{Math Symbols (Corresponding to CMSY)}} \bigskip\bigskip \leavevmode\hbox{\table \fonttab{txsya}{Math Symbols A (Corresponding to MSAM)}} \bigskip\bigskip \leavevmode\hbox{\table \fonttab{txsyb}{Math Symbols B (Corresponding to MSBM)}} \bigskip\bigskip \leavevmode\hbox{\tableC \fonttab{txsyc}{Math Symbols C}} \bigskip\bigskip \leavevmode\hbox{\table \fonttab{txex}{Math Extension (Corresponding to CMEX)}} \bigskip\bigskip \leavevmode\hbox{\tableB \fonttab{txexa}{Math Extension A}} \end{center} Bold versions of all fonts are available. \end{document}
\xname{dynamic}
\chapter{Dynamic Analysis}
\label{chap:dynamic}

This chapter describes all aspects of dynamic analysis in Petablox. Section \ref{sec:writing-dynamic} describes how to write a dynamic analysis, Section \ref{sec:running-dynamic} describes how to compile and run it, and Section \ref{sec:instr-events} describes the common dynamic analysis events supported in Petablox.

\section{Writing a Dynamic Analysis}
\label{sec:writing-dynamic}

To write your own dynamic analysis, create a class extending \code{petablox.project.analyses.DynamicAnalysis} and override the appropriate methods in it. The only methods that must be overridden are \code{getInstrScheme()}, which must return an instance of the ``instrumentation scheme'' required by your dynamic analysis (i.e., the kind and format of events to be generated during an instrumented program's execution), and each \code{process<event>(<args>)} method that corresponds to an event {\tt <event>} with format {\tt <args>} enabled by the chosen instrumentation scheme. See Section \ref{sec:instr-events} for the kinds of supported events and their formats. A sample such class \code{MyDynamicAnalysis} is shown below:

\texonly{\newpage}

\begin{framed}
{\small
\begin{verbatim}
import petablox.project.Petablox;
import petablox.project.analyses.DynamicAnalysis;
import petablox.instr.InstrScheme;

// ***TODO***: analysis won't be recognized by Petablox without this annotation
@Petablox(name = "<ANALYSIS_NAME>")
public class MyDynamicAnalysis extends DynamicAnalysis {
    InstrScheme scheme;

    @Override
    public InstrScheme getInstrScheme() {
        if (scheme != null)
            return scheme;
        scheme = new InstrScheme();
        // ***TODO***: Choose (<event1>, <args1>), ... (<eventN>, <argsN>)
        // depending upon the kind and format of events required by this
        // dynamic analysis to be generated during an instrumented
        // program's execution.
        scheme.set<event1>(<args1>);
        ...
        scheme.set<eventN>(<argsN>);
        return scheme;
    }

    @Override
    public void initAllPasses() {
        // ***TODO***: User code to be executed once and for all
        // before all instrumented program runs start.
    }

    @Override
    public void doneAllPasses() {
        // ***TODO***: User code to be executed once and for all
        // after all instrumented program runs finish.
    }

    @Override
    public void initPass() {
        // ***TODO***: User code to be executed once before each
        // instrumented program run starts.
    }

    @Override
    public void donePass() {
        // ***TODO***: User code to be executed once after each
        // instrumented program run finishes.
    }

    @Override
    public void process<event1>(<args1>) {
        // ***TODO***: User code for handling events of kind <event1>
        // with format <args1>.
    }
    ...
    @Override
    public void process<eventN>(<argsN>) {
        // ***TODO***: User code for handling events of kind <eventN>
        // with format <argsN>.
    }
}
\end{verbatim}
}
\end{framed}

\section{Compiling and Running a Dynamic Analysis}
\label{sec:running-dynamic}

Compile the analysis by placing the directory containing class \code{MyDynamicAnalysis} created above in the path defined by property \code{petablox.ext.java.analysis.path}.

Provide the IDs of the program runs to be generated (say 1, 2, ..., M) and the command-line arguments to be used for the program in each of those runs (say \code{<args1>}, ..., \code{<argsM>}) via properties \code{petablox.run.ids=1,2,...,M} and \code{petablox.args.1=<args1>}, ..., \code{petablox.args.M=<argsM>}.
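For example, to execute the program twice with different command-line arguments, the relevant properties could be set along the following lines. The property names are the ones described in this chapter, the argument values are placeholders, and the exact way the properties are supplied (for instance, a properties file or \code{-D} system properties) depends on how your Petablox installation is configured:
\begin{verbatim}
# hypothetical run configuration for MyDynamicAnalysis
petablox.run.ids=1,2
petablox.args.1=input1.txt
petablox.args.2=input2.txt
\end{verbatim}
With settings like these, the instrumented program is executed once per run ID, using the corresponding arguments.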
By default, \code{petablox.run.ids=0} and \code{petablox.args.0=""}, that is, the program will be
run only once (using run ID 0) with no command-line arguments.

To run the analysis, set property \code{petablox.run.analyses} to \code{<ANALYSIS_NAME>} (recall
that \code{<ANALYSIS_NAME>} is the name provided in the \code{@Petablox} annotation for the class
\code{MyDynamicAnalysis} created above).

{\bf Note:} The IBM J9 JVM on Linux is highly recommended if you intend to use Petablox for
dynamic program analysis, as it allows you to instrument the entire JDK; using any other platform
will likely require excluding large parts of the JDK from being instrumented. Additionally, if
you intend to use online (load-time) bytecode instrumentation in your dynamic program analysis,
then you will need JDK 6 or higher, since this functionality requires the
\code{java.lang.instrument} API with class retransformation support (the latter is available only
in JDK 6 and higher).

You can change the default values of various properties for configuring your dynamic analysis;
see Section \ref{sec:scope-props} and Section \ref{sec:instr-props} in Chapter
\ref{chap:properties}. For instance:

\begin{itemize}
\item
You can set property \code{petablox.scope.kind} to {\tt dynamic} so that the program scope is
computed dynamically (i.e., by running the program) instead of statically.
\item
You can exclude certain classes (e.g., JDK classes) from being instrumented by setting properties
\code{petablox.std.scope.exclude}, \code{petablox.ext.scope.exclude}, and
\code{petablox.scope.exclude}.
\item
You can choose between online (i.e., load-time) and offline bytecode instrumentation by setting
property \code{petablox.instr.kind} to {\tt online} or {\tt offline}.
\item
You can require the event-generating and event-handling JVMs to be one and the same (by setting
property \code{petablox.trace.kind} to {\tt none}), or to be separate (by setting property
{\tt petablox.trace.kind} to {\tt full} or {\tt pipe}, depending upon whether you want the two
JVMs to exchange events via a regular file or a POSIX pipe, respectively). Using a single JVM can
cause correctness/performance issues if the event-handling Java code itself is instrumented
(e.g., say the event-handling code uses class \code{java.util.ArrayList}, which is not excluded
from program scope). Using separate JVMs prevents such issues since the event-handling JVM runs
uninstrumented bytecode (only the event-generating JVM runs instrumented bytecode). If a regular
file is used to exchange events between the two JVMs, then the JVMs run serially: the
event-generating JVM first runs to completion, dumps the entire dynamic trace to the regular
file, and then the event-handling JVM processes the dynamic trace. If a POSIX pipe is used to
exchange events between the two JVMs, then the JVMs run in lockstep. Obviously, a pipe is more
efficient for very long traces, but it is not portable (e.g., it does not currently work on
Windows/Cygwin), and the traces cannot be reused across Petablox runs (see the following item).
\item
You can reuse dynamic traces from a previous Petablox run by setting property
\code{petablox.reuse.traces} to {\tt true}. In this case, you must also set property
{\tt petablox.trace.kind} to {\tt full}.
\item
You can set property \code{petablox.dynamic.haltonerr} to {\tt false} to prevent Petablox from
terminating even if the program on which dynamic analysis is being performed crashes.
\end{itemize}

Petablox offers much more flexibility in crafting dynamic analyses.
You can define your own instrumentor (by subclassing \code{petablox.instr.CoreInstrumentor}
instead of using the default \code{petablox.instr.Instrumentor}) and your own event handler (by
subclassing \code{petablox.runtime.CoreEventHandler} instead of using the default
\code{petablox.runtime.EventHandler}). You can ask the dynamic analysis to use your custom
instrumentor and/or your custom event handler by overriding methods \code{getInstrumentor()} and
\code{getEventHandler()}, respectively, defined in
\code{petablox.project.analyses.CoreDynamicAnalysis}. Finally, you can define your own dynamic
analysis template by subclassing \code{petablox.project.analyses.CoreDynamicAnalysis} instead of
subclassing the default \code{petablox.project.analyses.DynamicAnalysis}.

\section{Common Dynamic Analysis Events}
\label{sec:instr-events}

Petablox provides support for instrumenting common dynamic analysis events.
The table below describes these events.

\begin{mytable}{|l|p{4.3in}|}
\hline
{\bf Event Kind} & {\bf Description} \\
\hline
EnterMainMethod(\bt) & After thread \bt\ enters the main method of the program. \\
\hline
EnterMethod(\bm, \bt) & After thread \bt\ enters method \bm\ (in domain M). \\
\hline
LeaveMethod(\bm, \bt) & Before thread \bt\ leaves method \bm\ (in domain M). \\
\hline
EnterLoop(\bb, \bt) & Before thread \bt\ begins loop \bb\ (in domain B). \\
\hline
LoopIteration(\bb, \bt) & Before thread \bt\ starts a new iteration of loop \bb\ (in domain B). \\
\hline
LeaveLoop(\bb, \bt) & After thread \bt\ finishes loop \bb\ (in domain B). \\
\hline
BasicBlock(\bb, \bt) & Before thread \bt\ enters basic block \bb\ (in domain B). \\
\hline
Quad(\bp, \bt) & Before thread \bt\ executes the quad at program point \bp\ (in domain P). \\
\hline
BefMethodCall(\bi, \bt, \bo) &
\begin{tabular}{p{4.3in}}
Before thread \bt\ executes the method invocation statement at program point \bi\ (in domain I)
with object \bo\ as the \code{this} argument. \\
{\bf Note:} Not generated before constructor calls; use the BefNew event.
\end{tabular} \\
\hline
AftMethodCall(\bi, \bt, \bo) &
\begin{tabular}{p{4.3in}}
After thread \bt\ executes the method invocation statement at program point \bi\ (in domain I)
with object \bo\ as the \code{this} argument. \\
{\bf Note:} Not generated after constructor calls; use the AftNew event.
\end{tabular} \\
\hline
BefNew(\bh, \bt, \bo) & Before thread \bt\ executes a \code{new} bytecode instruction and allocates fresh object \bo\ at program point \bh\ (in domain H). \\
\hline
AftNew(\bh, \bt, \bo) & After thread \bt\ executes a \code{new} bytecode instruction and allocates fresh object \bo\ at program point \bh\ (in domain H). \\
\hline
NewArray(\bh, \bt, \bo) & After thread \bt\ executes a \code{newarray} bytecode instruction and allocates fresh object \bo\ at program point \bh\ (in domain H). \\
\hline
GetstaticPrimitive(\be, \bt, \bg) & After thread \bt\ reads primitive-typed static field \bg\ (in domain F) at program point \be\ (in domain E). \\
\hline
GetstaticReference(\be, \bt, \bg, \bo) & After thread \bt\ reads object \bo\ from reference-typed static field \bg\ (in domain F) at program point \be\ (in domain E). \\
\hline
PutstaticPrimitive(\be, \bt, \bg) & After thread \bt\ writes primitive-typed static field \bg\ (in domain F) at program point \be\ (in domain E). \\
\hline
PutstaticReference(\be, \bt, \bg, \bo) & After thread \bt\ writes object \bo\ to reference-typed static field \bg\ (in domain F) at program point \be\ (in domain E).
\\ \hline GetfieldPrimitive(\be, \bt, \bb, \bg) & After thread \bt\ reads primitive-typed instance field \bg\ (in domain F) of object \bb\ at program point \be\ (in domain E). \\ \hline GetfieldReference(\be, \bt, \bb, \bg, \bo) & After thread \bt\ reads object \bo\ from reference-typed instance field \bg\ (in domain F) of object \bb\ at program point \be\ (in domain E). \\ \hline PutfieldPrimitive(\be, \bt, \bb, \bg) & After thread \bt\ writes primitive-typed instance field \bg\ (in domain F) of object \bb\ at program point \be\ (in domain E). \\ \hline PutfieldReference(\be, \bt, \bb, \bg, \bo) & After thread \bt\ writes object \bo\ to reference-typed instance field \bg\ (in domain F) of object \bb\ at program point \be\ (in domain E). \\ \hline AloadPrimitive(\be, \bt, \bb, \bi) & After thread \bt\ reads the primitive-typed element at index \bi\ of array object \bb\ at program point \be\ (in domain E). \\ \hline AloadReference(\be, \bt, \bb, \bi, \bo) & After thread \bt\ reads object \bo\ from the reference-typed element at index \bi\ of array object \bb\ at program point \be\ (in domain E). \\ \hline AstorePrimitive(\be, \bt, \bb, \bi) & After thread \bt\ writes the primitive-typed element at index \bi\ of array object \bb\ at program point \be\ (in domain E). \\ \hline AstoreReference(\be, \bt, \bb, \bi, \bo) & After thread \bt\ writes object \bo\ to the reference-typed element at index \bi\ of array object \bb\ at program point \be\ (in domain E). \\ \hline %ReturnPrimitive(\bp, \bt) & Not yet supported. %\\ %ReturnReference(\bp, \bt) & Not yet supported. %\\ %ExplicitThrow(\bp, \bt, \bo) & Not yet supported. %\\ %ImplicitThrow(\bp, \bt, \bo) & Not yet supported. %\\ ThreadStart(\bi, \bt, \bo) & Before thread \bt\ calls the \code{start()} method of \code{java.lang.Thread} at program point \bi\ (in domain I) and spawns a thread \bo. \\ \hline ThreadJoin(\bi, \bt, \bo) & Before thread \bt\ calls the \code{join()} method of \code{java.lang.Thread} at program point \bi\ (in domain I) to join with thread \bo. \\ \hline AcquireLock(\bl, \bt, \bo) & After thread \bt\ executes a statement of the form ``monitorenter \bo'' or enters a method synchronized on \bo\ at program point \bl\ (in domain L). \\ \hline ReleaseLock(\br, \bt, \bo) & Before thread \bt\ executes a statement of the form ``monitorexit \bo'' or leaves a method synchronized on \bo\ at program point \br\ (in domain R). \\ \hline Wait(\bi, \bt, \bo) & Before thread \bt\ calls the \code{wait()} method of \code{java.lang.Object} on object \bo\ at program point \bi\ (in domain I). \\ \hline Notify(\bi, \bt, \bo) & Before thread \bt\ calls the \code{notify()} or \code{notifyAll()} method of \code{java.lang.Object} on object \bo\ at program point \bi\ (in domain I). \T \\ \hline \end{mytable}
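As a concrete illustration of the template from Section \ref{sec:writing-dynamic}, the sketch
below counts how often each method is entered, using the EnterMethod event from the table above.
It is only a sketch: the scheme-setter name (\code{setEnterMethodEvent}) and the handler
signature (\code{processEnterMethod(int m, int t)}) are assumed here by analogy with the
\code{scheme.set<event>(<args>)} and \code{process<event>(<args>)} naming pattern, so check the
actual \code{InstrScheme} and \code{DynamicAnalysis} APIs for the exact names and signatures.

\begin{framed}
{\small
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;

import petablox.project.Petablox;
import petablox.project.analyses.DynamicAnalysis;
import petablox.instr.InstrScheme;

// Hypothetical example: counts method entries per method id (domain M) over all runs.
// The setter/handler names below follow the template's naming pattern and are assumptions.
@Petablox(name = "method-entry-counter")
public class MethodEntryCounter extends DynamicAnalysis {
    private InstrScheme scheme;
    // method id (domain M) -> number of times the method was entered
    private final Map<Integer, Long> counts = new HashMap<Integer, Long>();

    @Override
    public InstrScheme getInstrScheme() {
        if (scheme != null) return scheme;
        scheme = new InstrScheme();
        scheme.setEnterMethodEvent(true, true); // assumed setter enabling EnterMethod(m, t)
        return scheme;
    }

    @Override
    public void processEnterMethod(int m, int t) { // EnterMethod(m, t) from the table above
        Long old = counts.get(m);
        counts.put(m, old == null ? 1L : old + 1L);
    }

    @Override
    public void doneAllPasses() {
        // Report the accumulated counts once all instrumented program runs have finished.
        for (Map.Entry<Integer, Long> e : counts.entrySet())
            System.out.println("method " + e.getKey() + " entered " + e.getValue() + " times");
    }
}
\end{verbatim}
}
\end{framed}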
{ "alphanum_fraction": 0.7160772876, "avg_line_length": 48.8078291815, "ext": "tex", "hexsha": "96d2a595b36cd6e605b5d908eb1be6bfa6491969", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2018-04-22T08:49:47.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-04T08:59:17.000Z", "max_forks_repo_head_hexsha": "860e9441834041392e37282884a51a5603d8c132", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "KihongHeo/petablox", "max_forks_repo_path": "doc/dynamic.tex", "max_issues_count": 24, "max_issues_repo_head_hexsha": "860e9441834041392e37282884a51a5603d8c132", "max_issues_repo_issues_event_max_datetime": "2019-03-14T02:04:46.000Z", "max_issues_repo_issues_event_min_datetime": "2016-01-01T01:29:36.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "KihongHeo/petablox", "max_issues_repo_path": "doc/dynamic.tex", "max_line_length": 187, "max_stars_count": 25, "max_stars_repo_head_hexsha": "860e9441834041392e37282884a51a5603d8c132", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "KihongHeo/petablox", "max_stars_repo_path": "doc/dynamic.tex", "max_stars_repo_stars_event_max_datetime": "2019-06-11T18:30:14.000Z", "max_stars_repo_stars_event_min_datetime": "2016-02-03T19:14:38.000Z", "num_tokens": 3710, "size": 13715 }
\documentclass{article} \usepackage{epsfig} \usepackage{psfrag} \bibliographystyle{plain} %&t&{\tt #}& %&v&\verb|#|& \newcounter{lemma1} \newtheorem{theorem}{Theorem} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[lemma1]{Lemma} % Texts on the EPS figures. % \psfrag{zs}{$s$} \psfrag{zt}{$t$} \psfrag{zt0}{$t_0$} \psfrag{zs-Xpaths}{$s$-$X$ paths} \psfrag{zGs+}{$G_s^+$} \psfrag{zX}{$X$} \psfrag{zX1*}{$X_1^*$} \psfrag{zX1a}{$X_1^a$} \psfrag{zX1n}{$X_1^n$} \psfrag{z(a)}{(a)} \psfrag{z(b)}{(b)} \psfrag{z(c)}{(c)} \psfrag{zNGs+-X1a(X1n)}{$N_{G_s^+ - X_1^a}(X_1^n)$} \psfrag{zGs+-X1a}{$G_s^+ - X_1^a$} \psfrag{zG1-X1a}{$G_1 - X_1^a$} \psfrag{zM1*}{$M_1^*$} \psfrag{zVM1*}{$V_{M_1^*}$} \psfrag{zG1-X1a/M1*}{$(G_1 - X_1^a)/M_1^*$} \psfrag{zx'inX1*}{$x' \in X_1^*$} \psfrag{zxinXr*}{$x \in X_r^*$} \psfrag{zxinGr}{$x \in G_r$} \psfrag{zXr*}{$X_r^*$} \psfrag{zX}{$X$} \psfrag{zY}{$Y$} \psfrag{zX0}{$X_0$} \psfrag{zY0}{$Y_0$} \psfrag{zM0}{$M_0$} \psfrag{zGb}{$G_\beta$} \psfrag{zA0}{$A_0$} \psfrag{zB0}{$B_0$} \psfrag{zC0}{$C_0$} \psfrag{zD0}{$D_0$} \psfrag{zF0}{$F_0$} \psfrag{zI0}{$I_0$} \psfrag{zS0}{$S_0$} \psfrag{zM}{$M$} \psfrag{zEK0}{$E_{K_0}$} \psfrag{zEK'0}{$E_{K'_0}$} \psfrag{zJ-0}{$J_-^0$} \psfrag{zL0}{$L^0$} \psfrag{zQ0}{$Q_0$} \psfrag{zQ'0}{$Q'_0$} \psfrag{zJQ0}{$J_Q^0$} \psfrag{zJR0}{$J_Q^0$} \psfrag{zu0}{$u_0$} \psfrag{zw}{$w$} \psfrag{zR0}{$R_0$} \psfrag{zA1}{$A_1$} \psfrag{zB1}{$B_1$} \psfrag{zC1}{$C_1$} \psfrag{zD1}{$D_1$} \psfrag{zF1}{$F_1$} \psfrag{zI1}{$I_1$} \psfrag{zS1}{$S_1$} \psfrag{zEK1}{$E_{K_1}$} \psfrag{zEK'1}{$E_{K'_1}$} \psfrag{zX1}{$X_1$} \psfrag{zY1}{$Y_1$} \psfrag{zt1}{$t_1$} \psfrag{zL1}{$L^1$} \psfrag{zR1}{$R_1$} \psfrag{zu1}{$u_1$} \psfrag{zJR1}{$J_R^1$} \psfrag{zLR1}{$L_R^1$} \psfrag{zSn}{$S_n$} \psfrag{zLR0-t0}{$L_R^0-t_0$} \psfrag{zLR1-t1}{$L_R^1-t_1$} \psfrag{zLRn-tn}{$L_R^n-t_n$} \psfrag{zW}{$W$} \psfrag{zNG(s)}{$N_G(s)$} \psfrag{zv1}{$v_1$} \psfrag{zG'}{$G'$} \psfrag{zG'-v1}{$G'-v_1$} \psfrag{zv2}{$v_2$} \psfrag{zvl-k}{$v_{l-k}$} \psfrag{zu2}{$u_2$} \psfrag{zul-k}{$u_{l-k}$} \psfrag{zS2}{$S_2$} \psfrag{zSl-k}{$S_{l-k}$} \psfrag{zvl}{$v_l$} \psfrag{zv3}{$v_3$} \psfrag{zvk}{$v_k$} \psfrag{zvk+1}{$v_{k+1}$} \psfrag{zuk-1}{$u_{k-1}$} \psfrag{zuk}{$u_k$} \psfrag{zleft}{left} \psfrag{zright}{right} \psfrag{zMa}{$M_a$} \psfrag{zP}{$P$} \psfrag{zGs}{$G_s$} \psfrag{zGt}{$G_t$} \psfrag{zs1}{$s_1$} \psfrag{zsk-1}{$s_{k-1}$} \psfrag{zsk}{$s_{k}$} \psfrag{zt}{$t$} \psfrag{za1}{$a_1$} \psfrag{za2}{$a_2$} \psfrag{zab-1}{$a_{\beta-1}$} \psfrag{zab}{$a_{\beta}$} \psfrag{zab+1}{$a_{\beta+1}$} \psfrag{zam}{$a_m$} \psfrag{za'}{$a'$} \psfrag{zw1}{$w_1$} \psfrag{zw2}{$w_2$} \psfrag{zw3}{$w_3$} \psfrag{zwb-1}{$w_{\beta-1}$} \psfrag{zwb}{$w_{\beta}$} \psfrag{zwb+1}{$w_{\beta+1}$} \psfrag{zwb+2}{$w_{\beta+2}$} \psfrag{zwk}{$w_k$} \psfrag{zwbk}{$w_{\beta+k}$} \psfrag{zwkpG}{$w_{\kappa(G)}$} \psfrag{Wst}{$W(s, t)$} \psfrag{D0}{$D_0=s$} \psfrag{D1}{$D_1$} \psfrag{D2}{$D_2$} \psfrag{D3}{$D_3$} \psfrag{Dk-1}{$D_{k-1}$} \psfrag{Dk}{$D_k$} \psfrag{NGS}{$N_G(s)$} \psfrag{NG1S}{$N_{G'}(s_1)$} \psfrag{NGk-1S}{$N_{G^{(k-1)}}(s_{k-1})$} \psfrag{NGkS}{$N_{G^{(k)}}(s_{k})$} \psfrag{zNGkS}{$N_{G^{(k)}}(s_{k})$} \psfrag{zu11}{$u_{1,1}$} \psfrag{zu12}{$u_{1,2}$} \psfrag{zu13}{$u_{1,3}$} \psfrag{zu1k}{$u_{1,\kappa(G)}$} \psfrag{zu21}{$u_{2,1}$} \psfrag{zu22}{$u_{2,2}$} \psfrag{zu23}{$u_{2,3}$} \psfrag{zu2k}{$u_{2,\kappa(G)}$} \psfrag{zu31}{$u_{3,1}$} \psfrag{zu32}{$u_{3,2}$} \psfrag{zu33}{$u_{3,3}$} 
\psfrag{zu3k}{$u_{3,\kappa(G)}$} \psfrag{zum-21}{$u_{m-2,1}$} \psfrag{zum-22}{$u_{m-2,2}$} \psfrag{zum-23}{$u_{m-2,3}$} \psfrag{zum-2k}{$u_{m-2,\kappa(G)}$} \psfrag{zum-11}{$u_{m-1,1}$} \psfrag{zum-12}{$u_{m-1,2}$} \psfrag{zum-13}{$u_{m-1,3}$} \psfrag{zum-1k}{$u_{m-1,\kappa(G)}$} \psfrag{zum1}{$u_{m,1}$} \psfrag{zum2}{$u_{m,2}$} \psfrag{zum3}{$u_{m,3}$} \psfrag{zumk}{$u_{m,\kappa(G)}$} \psfrag{zU1}{$U_1$} \psfrag{zU2}{$U_2$} \psfrag{zU3}{$U_3$} \psfrag{zUm-2}{$U_{m-2}$} \psfrag{zUm-1}{$U_{m-1}$} \psfrag{zUm}{$U_{m}$} \title{A Proof of Vertex-disjoint Menger's Theorem by Bipartite Matching and Contraction} \author{SHOICHIRO YAMANISHI} \begin{document} \maketitle \begin{abstract} A proof of vertex-disjoint Menger's theorem between two distinctive vertices $s$ and $t$ in $G$ is proposed. Starting from a minimum separator $X$ and the component $G_t$ of $G - X$ to which $t$ belongs, $|X|$ vertex-disjoint $s$-$X$ paths are found in $G - V(G_t)$ by recursively applying contraction to bipartite matchings of $X$. Similarly, $|X|$ vertex-disjoint $X$-$t$ paths are found. Concatenating two paths at each vertex in $X$ yields $|X|$ vertex-disjoint $s$-$t$ paths. The contraction of the bipartite matchings must not decrease the connectivity. Existence of such bipartite matchings are proven by induction on $|X|$. \end{abstract} \section{Introduction} Menger's theorem is one of the early fundamental discoveries in graph theory. Since the original theorem was proposed by Menger \cite{menger1}, some variants have been proposed, which are roughly divided into vertex-disjoint ones, e.g. Whitney \cite{whitney1}, and edge-disjoint one by Ford and Fulkerson \cite{fordfulkerson1}. For the vertex-disjoint theorems, several proofs have been proposed by Dirac \cite{dirac1}; B\"{o}hme, G\"{o}ring, and Harant \cite{bohmegoringharant1}; Pym \cite{pym1}; and Gr\"{u}nwald (later Gallai) \cite{grunwald1}. Also, the edge-disjoint theorem is proven by the min-cut/max-flow theorem \cite{fordfulkerson1}. Of all the variants, we prove the following vertex disjoint theorem. \begin{theorem}\label{theorem1} {\bf (Menger's Theorem)}\\Given a graph $G=(V,\,E)$, let $s$ and $t$ be two distinct non-adjacent vertices in $G$. The size of the minimum $s$-$t$ separators is equal to the maximum number of internally vertex-disjoint $s$-$t$ paths. \end{theorem} In this article, we call the minimum number of separating vertices between $s$ and $t$ {\it $s$-$t$ connectivity} of $G$ and denote it by $\kappa_G(s, t)$ hereinafter. The purpose of this article is to propose a new proof by recursively applying contraction of a maximum bipartite matching of a minimum $s$-$t$ separator in a graph. It is easy to see the maximum number of disjoint $s$-$t$ paths does not exceed $\kappa_G(s, t)$. We prove that we can actually construct $\kappa_G(s, t)$ disjoint $s$-$t$ paths in $G$. It is also trivial to prove the case for $\kappa_G(s, t)=0$, so the following discussion assumes $\kappa_G(s, t)>0$. \section{Proof} %Put all the notational conventions here. We follow the notational conventions in Diestel's \cite{diestel1}. We treat a matching $M$ as a set of edges in a graph $G$. We denote the graph obtained from $G$ by contracting all the edges in $M$ by $G/M$, and the set of vertices in $G/M$ into which the edges in $M$ are contracted by $V_M$. Let $k:=\kappa_G(s, t)$. Let $X$ be a minimum $s$-$t$ separator ($X\cap\{s, t\}=\emptyset$). Let $G_s$, $G_t$ be the two components of $G - X$ to which $s$ and $t$ belong respectively. 
Let $G_s^+$, $G_t^+$ be $G[V(G_s)\cup X]$, $G[V(G_t)\cup X]$ respectively. We prove $G_s^+$ has $k$ vertex-disjoint $s$-$X$ paths. Similarly, $G_t^+$ has $k$ vertex-disjoint $X$-$t$ paths by symmetry. Concatenating the two paths at each vertex in $X$ from each of $s$-$X$ paths and $X$-$t$ paths in $G$ yields $k$ vertex-disjoint $s$-$t$ paths (Fig. \ref{label_fig1}). \begin{figure}\begin{center} \epsfig{file=figs/fig1.eps, width = 5cm} \caption[Fig1]{$X$, $G_s^+$, and $s$-$X$ paths.} \label{label_fig1} \end{center}\end{figure} The rest of the proof is for finding $k$ $s$-$X$ paths in $G_s^+$. Our strategy is as follows. Let $G_1 := G$ and $X_1^* := X$. Split $X_1^*$ into two subsets $X_1^a$ and $X_1^n$ such that $X_1^a = \left\{ x\in X_1^* | \left\{s, x\right\}\in E(G_s^+)\right\}$ and $X_1^n = \left\{ x\in X_1^* | \left\{s, x\right\}\not\in E(G_s^+)\right\}$ (Fig. \ref{label_fig2}(a)). We have already found $|X_1^a|$ vertex-disjoint $s$-$X_1^*$ paths between $s$ and $X_1^a$ in $G_s^+$, each of which is merely an edge. \begin{figure}\begin{center} \epsfig{file=figs/fig2.eps, height = 4cm} \caption[Fig2]{$X_1^a$, $X_1^n$, and $G_s^+ - X_a$.} \label{label_fig2} \end{center}\end{figure} Please observe that $X_1^n$ is a minimum separator of $G_1 - X_1^a$ (Fig. \ref{label_fig2}(b)). It is easy to see $X_1^n$ is a separator of $G_1 - X_1^a$. Suppose it were not minimum. Let $X'$ be a minimum separator of $G_1 - X_1^a$ such that $|X'| < |X_1^n|$. Then $X' \cup X_1^a$ would give a minimum separator of $G_1$, which contradicts the minimality of $X_1^*$. We prove there is a bipartite matching $M_1^*$ of $X_1^n$ to $N_{G_s^+ - X_1^a}(X_1^n)$ such that $|M_1^*| = |X_1^n|$, and contracting all the edges in $M_1^*$ does not decrease the $s$-$t$ connectivity of $(G_1 - X_1^a)/M_1^*$, i.e. $\kappa_{G_1 - X_1^a}(s, t) \le \kappa_{(G_1 - X_1^a)/M_1^*}(s, t)$ (Fig. \ref{label_fig3}(a)). We eventually obtain the equality here since $V_{M_1^*} \cup X_1^a$ form an $s$-$t$ separator in $G_1/M_1^*$, which indicates they are again a minimum separator $X_2^*$ in $G_1/M_1^*$ and $\kappa_{G_1}(s, t) = \kappa_{G_1/M_1^*}(s, t)$ (Fig. \ref{label_fig3}(b)). Let $G_2 = G_1/M_1^*$. \begin{figure}\begin{center} \epsfig{file=figs/fig3.eps, height = 6cm} \caption[Fig3]{$G_1 - X_1^a$, and $(G_1 - X_1^a)/M_1^*$.} \label{label_fig3} \end{center}\end{figure} We can recursively apply this process of finding a matching and contracting all the edges in it $r$ times until all the vertices in the minimum $s$-$t$ separator $X_r^*$ in $G_r$ are adjacent to $s$ (Fig. \ref{label_fig4} (a)). This process is guaranteed to terminate as $G$ is finite, and at each iteration at least one edge is contracted. If we ``unfold'' the edge $\{s, x\}$ and the vertex $x$ in $X_r^*$, to which some incident edges have been contracted, we obtain $|X|$ trees in $G_s^+$, which are mutually vertex-disjoint except at $s$. In each tree we can find a unique $s$-$x'$ path for each $x'\in X_1^*$. Those paths form a set of $k$ vertex-disjoint $s$-$X$ paths (Fig. \ref{label_fig4} (b)). \begin{figure}\begin{center} \epsfig{file=figs/fig4.eps, width =10cm} \caption[Fig4]{$G_r, X_r^*, and G_\beta$.} \label{label_fig4} \end{center}\end{figure} The rest of the proof is dedicated to prove existence of a bipartite matching $M$ of $X_1^n$ to $N_{G_s^+ - X_1^a}(X_1^n)$ such that contraction of all the edges in $M$ does not decrease the $s$-$t$ connectivity of in $G - X_1^a$. In the following discussion, we assume $X_1^a = \emptyset$, i.e. $X = X_1^n$. 
If $X_1^a \ne \emptyset$, consider the graph $G - {X_1^a}$ instead of $G$ for its connectivity $|X_1^n|$. We later add the edges (disjoint paths) in $X_1^a$, after we find vertex-disjoint $s$-$X_1^n$ paths. First we prove existence of a bipartite matching of $X$. Let $Y := N_{G_s^+}(X)$. Since we assume $X_1^a$ is empty, $s$ is not in $Y$. Consider the bipartite graph $G_\beta$ with the bipartition $\{X, Y\}$ (Fig. \ref{label_fig4} (c)). Formally $G_\beta = G[X\cup Y] - \left(E\left(G[X]\right)\cup E\left(G[Y]\right)\right)$. We can easily check that $G_\beta$ contains a bipartite matching of $X$. Indeed, for all $S\subseteq X$, $d_{G_\beta}(S)\ge|S|$. Suppose $\exists S \subseteq X$ such that $d_{G_\beta}(S) < |S|$, then we can replace $S$ with $N_{G_\beta}(S)$ in $X$, which would be an $s$-$t$ separator in $G$ whose cardinality is less than $|X|$, which contradicts the minimality of $X$. By the marriage theorem by Hall \cite{hall1}, $G_\beta$ contains a bipartite matching of $X$. Next we prove existence of a bipartite matching $M$ of $X$ which does not reduce the $s$-$t$ connectivity when contraction of $M$ is applied to $G$. We prove this by induction on $\kappa_G(s, t)$, or on $|X|$. If $|X| = 1$, it is trivial to prove contraction of any bipartite matching $M$ does not reduce the $s$-$t$ connectivity. So the induction starts. Let $X_0 := X$, $Y_0 := Y$, and $M_0$ be any bipartite matching of $X_0$ in $N_{G_\beta}$ (Fig. \ref{label_fig5}(a)). If it preserves the $s$-$t$ connectivity in $G/M_0$, we are done. So we assume $\kappa_{G/M_0}(s, t) < \kappa_{G}(s, t)$. $G/{M_0}$ has an $s$-$t$ separator $W_0$ in $G_s^+$ such that $|W_0| < |X_0|$. $W_0$ contains at least one vertex in $V_{M_0}$, otherwise $W_0$ would also be an $s$-$t$ separator in $G$ whose cardinality is less than $|X|$, a contradiction. So $W_0\cap V_{M_0}\ne \emptyset$. Let $K_0 = W_0\cap V_{M_0}$ and $S_0 = W_0 \backslash K_0$. Here we categorize the edges and vertices in $G$ for the following discussion (Fig. \ref{label_fig5}(b)). \begin{itemize} \item Let $E_{K_0}$ be the set of edges in $G$ which corresponds to $K_0$ in $G/M_0$. \item Let $E_{K'_0}$ be $M_0\backslash E_{K_0}$. \item Let $A_0$ and $C_0$ be the sets of vertices in $X_0$ and $Y_0$ respectively, which are incident to the edges in $E_{K_0}$. \item Let $B_0$ and $D_0$ be the sets of vertices in $X_0$ and $Y_0$ respectively, which are incident to the edges in $E_{K'_0}$. \item Let $F_0$ be the set of vertices in $Y_0\backslash (C_0\cup D_0)$ which are not incident to any vertices in $B_0$. \item Let $I_0$ be the set of vertices in $Y_0\backslash (C_0\cup D_0)$ which are incident to any vertices in $B_0$. \end{itemize} Please observe that $A_0 \cup C_0 \cup S_0$ separates $s$ from $t$ in $G$, since $K_0 \cup S_0$ separates $s$ from $t$ in $G/M_0$. Please also note that $S_0$ can be $\emptyset$, but neither $B_0$ nor $D_0$ can be empty. \begin{figure}\begin{center} \epsfig{file=figs/fig5.eps, height=6cm} \caption[Fig5]{$X$, $Y$, and the subsets of $V$.} \label{label_fig5} \end{center}\end{figure} Here we further define six subgraphs of $G$ and one new graph as follows. \begin{itemize} \item Let $H_s^0$ and $H_t^0$ be the two components of $G - (A_0 \cup C_0 \cup S_0)$ to which $s$ and $t$ belong respectively. \item Let $H_{s+}^0$, $H_{t+}^0$ be $G[V(H_s^0)\cup A_0 \cup C_0 \cup S_0]$, $G[V(H_t^0)\cup A_0 \cup C_0 \cup S_0]$ respectively. \item Let $J_-^0$ be a subgraph of $G$ induced by $V(H_t^0) \cap V(G_s^+)$ (Fig. \ref{label_fig6}(a)). 
\item Let $J_+^0$ be a subgraph of $G$ induced by $\left(V(H_t^0) \cap V(G_s^+)\right) \cup S_0$,\,\, i.e. $J_+^0 = G[V(J_-^0)\cup S_0]$. \item Let $L^0$ be a graph based on $G_s^+ - J_-^0$ augmented by a new vertex $t_0$, and new edges between $t_0$ and all the vertices in $A_0 \cup S_0$ (Fig. \ref{label_fig6}(b)). \end{itemize} Please note $H_s^0$ is a subgraph of $G_s^+$, since $W_0 \subseteq V(G_s^+/M_0)$ and $M_0 \subseteq E(G_s^+)$. Also please note that $F_0$ belongs to $L^0$ and $I_0$ belongs to $J_-^0$. \begin{figure}\begin{center} \epsfig{file=figs/fig6.eps, height=5.3cm} \caption[Fig6]{$J_-^0$ and $L^0$.} \label{label_fig6} \end{center}\end{figure} As $A_0 \cup S_0$ does not separate $s$ from $t$ in $G$, there are some vertices in $C_0$ which are incident to edges into $J_-^0$. \begin{itemize} \item Let $Q_0 := \{q\in C_0 | q$ is adjacent to at least one vertex in $J_-^0$ \}. \item Let $Q'_0 := \{q'\in V(J_-^0) | q'$ is adjacent to at least one vertex in $Q_0$ \}. \end{itemize} From the $s$-$t$ connectivity of $G$, we claim $|Q_0|\ge |B_0|-|S_0|$, and $|Q'_0|\ge |B_0|-|S_0|$. Please observe that both $S_0\cup Q_0 \cup A_0$ and $S_0\cup Q'_0 \cup A_0$ form $s$-$t$ separators in $G$ (Fig. \ref{label_fig7}(a)). For the induction process, we consider two graphs $L_R^n$ and $J_R^n$ as follows, where $n$ indicates a number of iterations. First, let $J_Q^0$ be a graph based on the subgraph of $G$ induced by $V(J_+^0)\cup Q_0$, and augmented by two new vertices $u_0$ and $w$, and new edges between $u_0$ and all the vertices in $S_0 \cup Q_0$, and new edges between all the vertices in $B_0$ and $w$ (Fig. \ref{label_fig7}(b)). We claim $\kappa_{J_Q^0}(u_0, w) \ge |B_0|$. Suppose not. Then there would be a minimum $u_0$-$w$ separator $U_0$ such that $|U_0| < |B_0|$. However, $U_0\cup A_0$ would form an $s$-$t$ separator in $G$, which contradicts the minimality of $X$. Please note that there is no edge between $L^0 - (S_0\cup A_0)$ and $J_-^0$ except $E(Q_0, Q'_0)$. \begin{figure}\begin{center} \epsfig{file=figs/fig7.eps, height=7cm} \caption[Fig7]{$Q_0$, $Q'_0$, and $J_Q^0$.} \label{label_fig7} \end{center}\end{figure} By Proposition \ref{prop1}, we can find at least one subset $R_0$ of $Q_0$ such that $|R_0| = |B_0| - |S_0|$ and $\kappa_{J_Q^0 - (Q_0\backslash R_0)}(u_0, w) = |B_0|$. Let the family of such sets be ${\cal R}_0$, and pick any set $R_0$ in ${\cal R}_0$. Let $J_R^0 := J_Q^0 - (Q_0\backslash R_0)$, and let $L_R^0$ be a graph based on $L^0$ augmented by new edges between all the vertices in $R_0$ and $t_0$ (Fig. \ref{label_fig8}). It is easy to see $\kappa_{L_R^0}(s, t_0) \ge |X|$, since existence of an $s$-$t_0$ separator whose cardinality is less than $|X|$ in $L_R^0$ implies that such a separator would also be an $s$-$t$ separator in $G$. Also, that $A_0\cup R_0\cup S_0$ form an $s$-$t_0$ separator implies $\kappa_{L_R^0}(s, t_0) = |X|$, and $A_0\cup R_0\cup S_0$ form a minimum $s$-$t_0$ separator. \begin{figure}\begin{center} \epsfig{file=figs/fig8.eps, height=7cm} \caption[Fig8]{$J_R^0$ and $L_R^0$.} \label{label_fig8} \end{center}\end{figure} Let $X_1 := A_0$ and $Y_1 := F_0\cup (C_0\backslash R_0)$. Let $M_1$ be any bipartite matching of the bipartition $\{Y_1,\,\, X_1\}$. Existence of the matching is proven using the Hall's marriage theorem similar to the proof of existence of $M_1$. If $\kappa_{L_R^0/M_1}(s, t_0) \ge |X|$, we set $L_R^n := L_R^0$, $J_R^n := J_R^0$, and we are done. So we assume $\kappa_{L_R^0/M_1}(s, t_0) < |X|$. 
$L_R^0/{M_1}$ has an $s$-$t_0$ separator $W_1$ such that $|W_1| < |X|$. $W_1$ contains at least one vertex in $V_{M_1}$, otherwise $W_1$ would also be an $s$-$t$ separator in $G$ whose cardinality is less than $|X|$, a contradiction. So $W_1\cap V_{M_1}\ne \emptyset$. Let $K_1 = W_1\cap V_{M_1}$ and $S_1 = W_1 \backslash K_1$. Here we categorize the edges and vertices in $L_R^0$ according to $W_1$ similar to the discussion above when we found $W_0$ (Fig \ref{label_fig9}(a)). \begin{itemize} \item Let $E_{K_1}$ be the set of edges in $L_R^0$ which corresponds to $K_1$ in $L_R^0/M_1$. \item Let $E_{K'_1}$ be $M_1\backslash E_{K_1}$. \item Let $A_1$ and $C_1$ be the sets of vertices in $X_1$ and $Y_1$ respectively, which are incident to the edges in $E_{K_1}$. \item Let $B_1$ and $D_1$ be the sets of vertices in $X_1$ and $Y_1$ respectively, which are incident to the edges in $E_{K'_1}$. \item Let $F_1$ be the set of vertices in $Y_1\backslash (C_1\cup D_1)$ which are not incident to any vertices in $B_1$. \item Let $I_1$ be the set of vertices in $Y_1\backslash (C_1\cup D_1)$ which are incident to any vertices in $B_1$. \end{itemize} Please observe that $A_1 \cup C_1 \cup S_1$ separates $s$ from $t_0$ in $L_R^0$, since $K_1 \cup S_1$ separates $s$ from $t_0$ in $L_R^0/M_1$. Please also note that $S_1$ can be $\emptyset$, but neither $B_1$ nor $D_1$ can be empty. \begin{figure}\begin{center} \epsfig{file=figs/fig9.eps, height=5.7cm} \caption[Fig9]{Subsets of $L_R^0$ and $L^1$.} \label{label_fig9} \end{center}\end{figure} Here we further define two subgraphs of $L_R^0$ and one new graph as follows. \begin{itemize} \item Let $J_-^1$ be the component of $L_R^0 - (A_1 \cup C_1 \cup S_1)$ to which $t_0$ belongs. \item Let $J_+^1$ be a subgraph of $L_R^0$ induced by $V(J_-^1)\cup S_1$. \item Let $L^1$ be a graph based on $L_R^0 - J_-^1$ augmented by a new vertex $t_1$, and new edges between $t_1$ and all the vertices in $A_1 \cup S_1$ (Fig \ref{label_fig9}(b)). \end{itemize} Also please note that $F_1$ belongs to $L^1$ and $I_1$ belongs to $J_-^1$. As $A_1 \cup S_1$ does not separate $s$ from $t_0$ in $L_R^0$, there are some vertices in $C_1$ which are incident to edges into $J_-^1$. \begin{itemize} \item Let $Q_1 := \{q\in C_1 | q$ is adjacent to at least one vertex in $J_-^1$ \}. \item Let $Q'_1 := \{q'\in V(J_-^1) | q'$ is adjacent to at least one vertex in $Q_1$ \}. \end{itemize} From the $s$-$t_0$ connectivity of $L_R^0$, we claim $|Q_1|\ge |B_1|+|R_0|+|S_0|-|S_1|$, and $|Q'_1|\ge |B_1|+|R_0|+|S_0|-|S_1|$. Please observe that both $S_1\cup Q_1 \cup A_1$ and $S_1\cup Q'_1 \cup A_1$ form $s$-$t_0$ separators in $L_R^0$. Let $J_Q^1$ be a graph based on the subgraph of $G$ induced by $V(J_+^1)\cup Q_1$, and augmented by two new vertices $u_1$ and $w_1$, and new edges between $u_1$ and all the vertices in $S_1 \cup Q_1$, and new edges between all the vertices in $B_1 \cup R_0 \cup S_0$ and $w_1$. We claim $\kappa_{J_Q^1}(u_1, w_1) \ge |B_0|+|B_1|$ similar to the reasoning for $\kappa_{J_Q^0}(u_0, w) \ge |B_0|$. Please note that $|B_1| = |R_0| + |S_0|$. By Proposition \ref{prop1}, we can find at least one subset $R_1$ of $Q_1$ such that $|R_1| = |B_0|+|B_1| - |S_1|$ and $\kappa_{J_Q^1 - (Q_1\backslash R_1)}(u_1, w_1) = |B_0|+|B_1|$. Let the family of such sets be ${\cal R}_1$ and pick any $R_1$ in ${\cal R}_1$. Let $J_R^1$ be a graph based on $J_R^0 - u_0$ and $J_Q^1 - (Q_1\backslash R_1) - w_1$, pasted along ${S_0 \cup R_0}$, and add new edges between $B_1$ and $w$ (Fig \ref{label_fig10}(a)). 
Please note that $J_R^1 - \{u_1, w\}$ is an induced subgraph of $G$. \begin{figure}\begin{center} \epsfig{file=figs/fig10.eps, height=8cm} \caption[Fig10]{Subsets of $J_R^1$ and $L_R^1$.} \label{label_fig10} \end{center}\end{figure} Let $L_R^1$ be a graph based on $L^1$ augmented by new edges between all the vertices in $R_1$ and $t_1$ (Fig \ref{label_fig10}(b)). It is easy to see $\kappa_{L_R^1}(s, t_1) = |X|$ due to the similar reasoning for $\kappa_{L_R^0}(s, t_0) = |X|$, and $A_1\cup R_1\cup S_1$ form a minimum $s$-$t_1$ separator in $L_R^1$. We repeat this process $n$ times until we obtain $L_R^n$, $J_R^n$, such that there is a bipartite matching $M_n$ for $L_R^n$ such that $\kappa_{L_R^n/M_n}(s, t_n) = |X|$ (Fig \ref{label_fig11}). \begin{figure}\begin{center} \epsfig{file=figs/fig11.eps} \caption[Fig11]{$G_s^+, L_R^0, \ldots, L_R^n$.} \label{label_fig11} \end{center}\end{figure} This process is guaranteed to terminate at the $n$-th iteration due to the following reason. Assume we are at the $i$-th iteration. Let $\alpha_i = \kappa_{L_R^i}(s, t_i) - \kappa_{L_R^i/M_i}(s, t_i)$. The discussion above implies the necessary condition for $\alpha_i$ to be any positive integer is existence of $R_i$ in $F_i \cup C_i$ such that $|R_i| = \alpha_i$, and $|A_i| \le |F_i \cup (C_i \backslash R_i)|$. Thus, taking the contraposition, $\alpha_i$ has to be less than or equal to $|Y_i| - |X_i|$. However, $|Y_{i-1}| - |X_{i-1}| > |Y_{i}| - |X_{i}|$ due to existence of $R_{i-1}$. This implies that $\alpha_i$ is strictly monotone decreasing, and there exists $n \in {\bf N}$ such that at the $n$-th iteration, $\alpha_n = 0$. Also, $V(L_R^n)\cap |X| \ne \emptyset$. So $V(J_R^n)\cap |X|$ is a proper subset of $X$, and $\kappa_{J_R^n}(u_n, w) < |X|$. By the induction hypothesis, $J_R^n$ has a bipartite matching $M'_n$ of $V(J_R^n)\cap |X|$ such that $\kappa_{J_R^n/M'_n}(u_n, w) = \kappa_{J_R^n}(u_n, w)$. Finally we obtain a bipartite matching $M_n \cup M'_n$ of $X$ in $G_s^+$, contraction of which does not decrease the $s$-$t$ connectivity of $G$. {\bf Q.E.D.} \begin{proposition}\label{prop1} Let $G=(V, E)$ be a graph and $s$ and $t$ be two distinct vertices in $G$ such that $\{s, t\}\not\in E \wedge d_G(s)=\kappa_G(s, t)$. Let $m$ and $n$ be any positive integers such that $m \ge n$. We add to $G$ $m$ new vertices $W$, and for each $v \in W$ add an edge $\{s, v\}$. We also add some new edges between $W$ and $V\backslash N_G(s)$, and denote the resultant graph by $G'$. If $\kappa_{G'}(s, t)\ge \kappa_G(s, t) + n$, then we can find a subset $X$ of $W$ such that $|X| = n$, and $\kappa_{G'[V(G)\cup X]}(s, t) = \kappa_G(s, t) + n$ (Fig. \ref{label_fig12}(a)). \end{proposition} \begin{figure}\begin{center} \epsfig{file=figs/fig12.eps, height = 7cm} \caption[Fig12]{$G'$ and $G'-v_1$.} \label{label_fig12} \end{center}\end{figure} {\bf Proof}\\ This is obvious, but we include a proof for formality. Let $\kappa_G(s, t) = k$,\\ $\kappa_{G'}(s, t) = l$. First, we prove we can remove $k + m - l$ vertices in $W$ without decreasing the $s$-$t$ connectivity below $l$. If $k = l$, then we can remove all the vertices in $W$ so we assume $k < l$. Pick any vertex $v_1$ in $W$ whose removal decreases the $s$-$t$ connectivity in $G'$, i.e. $\kappa_{G'}(s, t) - 1 = \kappa_{G' - v_1}(s, t)$. We can find such a vertex in $W$ otherwise $\kappa_G(s, t)$ would be greater than $k$. Let $G^1$ be $G' - v_1$, and $S_1$ be a minimum $s$-$t$ separator in $G^1$. 
Let $G_t^1$ be the component of $G^1 - S_1$ to which $t$ belongs, and $G_s^1$ be $G^1 - G_t^1$. Please note $G_s^1$ contains $S_1$. As $S_1\cup\{v_1\}$ form a minimum $s$-$t$ separator in $G'$, $v_1$ is adjacent to at least one vertex $u_1$ in $G_t^1$ (Fig. \ref{label_fig12}(b)). If $l - k = 1$, removal of any vertex in $W\backslash\{v_1\}$ does not decrease $s$-$t$ connectivity of $G^1$, otherwise $\kappa_G(s, t)$ would be less than $k$. So we can remove all the vertices in $W\backslash\{v_1\}$ from $G'$ and obtain a graph $G^*$ such that $\kappa_{G^*}(s, t) = l$. If $l - k > 1$, let $G_+^1$ be a graph based on $G_s^1$ augmented by a new vertex $t_1$ and new edges between all the vertices in $S_1$ and $t_1$. Please note that $\kappa_{G_+^1}(s, t_1) = l - 1$. Pick any vertex $v_2$ in $W\backslash\{v_1\}$ whose removal decreases the $s$-$t$ connectivity in $G_+^1$, i.e. $\kappa_{G_+^1}(s, t_1) - 1 = \kappa_{G_+^1 - v_2}(s, t_1)$. We can find such a vertex in $W\backslash\{v_1\}$ otherwise $\kappa_G(s, t)$ would be greater than $k$. Let $G^2$ be $G_+^1 - v_2$, and $S_2$ be a minimum $s$-$t_1$ separator in $G^2$. Let $G_t^2$ be the component of $G^2 - S_2$ to which $t$ belongs, and $G_s^2$ be $G^2 - G_t^2$. Please note $G_s^2$ contains $S_2$. As $S_2\cup\{v_2\}$ form a minimum $s$-$t_1$ separator in $G_+^1$, $v_2$ is adjacent to at least one vertex $u_2$ in $G_t^2$. We can continue this process up to $l-k$ times and obtain $v_{l-k}$, $S_{l-k}$, and $G_+^{l-k}$. Now $\kappa_{G_+^{l-k}}(s, t_{l-k}) = l - (l - k) = k$, and removal of any vertex in $W\backslash\{v_1, \ldots, v_{l_k}\}$ does not decrease the $s$-$t_{l-k}$ connectivity of $G_+^{l-k}$, otherwise $\kappa_G(s, t)$ would be less than $k$. So we can remove all the vertices in $W\backslash\{v_1, \ldots, v_{l-k}\}$ from $G'$, and obtain a graph $G^*$ such that $\kappa_{G^*}(s, t) = l$ (Fig. \ref{label_fig13}). Let $W' := \{v_1, \ldots, v_{l-k}\}$. Now $N_G(s) \cup W'$ is a minimum $s$-$t$ separator of $G^*$. We can pick any $n$ vertices in $W'$ and remove the rest from $G^*$. The $s$-$t$ connectivity of the resultant graph is exactly $n + k$. {\bf Q.E.D.} \begin{figure}\begin{center} \epsfig{file=figs/fig13.eps, height = 7cm} \caption[Fig13]{$G^1, \dots, G^{l-k}$.} \label{label_fig13} \end{center}\end{figure} \bibliography{menger} \end{document}
{ "alphanum_fraction": 0.656158872, "avg_line_length": 46.2303754266, "ext": "tex", "hexsha": "8f6bb3570a9ba3acecfec36ef0fb49d449f6e3b6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f614e151d9015c17976b22b8b95e7d44fc8b0535", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "ShoYamanishi/ProofOfMengersTheorem", "max_forks_repo_path": "menger.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f614e151d9015c17976b22b8b95e7d44fc8b0535", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "ShoYamanishi/ProofOfMengersTheorem", "max_issues_repo_path": "menger.tex", "max_line_length": 130, "max_stars_count": null, "max_stars_repo_head_hexsha": "f614e151d9015c17976b22b8b95e7d44fc8b0535", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "ShoYamanishi/ProofOfMengersTheorem", "max_stars_repo_path": "menger.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11280, "size": 27091 }
\section{Ergo Modifiers Processing}

\dnote{The information in this section is outdated and implementation-specific. Consider removing this section, or rewriting it and moving it to~\ref{part-impl}}

This section describes the processing algorithm for Ergo modifiers in all security modes.
Unlike most blockchain systems, Ergo has the following types of modifiers:

\begin{enumerate}
\item In-memory:
\begin{itemize}
\item Transaction - an in-memory modifier.
\item TransactionIdsForHeader - ids of the transactions in a concrete block.
\item UTXOSnapshotManifest - ids of UTXO chunks.
\end{itemize}
\item Persistent:
\begin{itemize}
\item BlockTransactions - the sequence of transactions corresponding to one block.
\item ADProofs - proof of transaction correctness relative to the corresponding UTXO set.
\item Header - contains the data required to verify PoW, a link to the previous block, the state root hash, and the root hash of its payload (BlockTransactions, ADProofs, Interlinks, ...).
\item UTXOSnapshotChunk - a part of the UTXO set.
\item PoPoWProof.
\end{itemize}
\end{enumerate}

Ergo will have the following parameters, which determine the concrete security regime:

\begin{itemize}
\item ADState: Boolean - keep the state root hash only.
\item VerifyTransactions: Boolean - download block transactions and verify them (requires BlocksToKeep == 0 if disabled).
\item PoPoWBootstrap: Boolean - download a PoPoW proof only.
\item BlocksToKeep: Int - the number of last blocks to keep with transactions; for all other blocks only headers are kept. Keep all blocks from genesis if negative.
\item MinimalSuffix: Int - the minimal suffix size for a PoPoW proof (may be a pre-defined constant).
\end{itemize}

\begin{minted}{java}
if(VerifyTransactions == false) require(BlocksToKeep == 0)
\end{minted}

The mode from ``multimode.md'' can be determined as follows:

\begin{minted}{java}
mode = if(ADState == false && VerifyTransactions == true &&
          PoPoWBootstrap == false && BlocksToKeep < 0) "full"
else if(ADState == false && VerifyTransactions == true &&
        PoPoWBootstrap == false && BlocksToKeep >= 0) "pruned-full"
else if(ADState == true && VerifyTransactions == true &&
        PoPoWBootstrap == false) "light-full"
else if(ADState == true && VerifyTransactions == false &&
        PoPoWBootstrap == true && BlocksToKeep == 0) "light-spv"
else if(ADState == true && VerifyTransactions == true &&
        PoPoWBootstrap == true && BlocksToKeep == 0) "light-full-PoPoW"
else //Other combinations are possible
\end{minted}

\subsection{Modifiers processing}

\begin{minted}{java}
def updateHeadersChainToBestInNetwork() = {
  1.2.1. Send ErgoSyncInfo message to connected peers
  1.2.2. Get response with INV message, containing ids of blocks better than our best block
  1.2.3. Request headers for all ids from 1.2.2.
  1.2.4. On receiving header
         if(History.apply(header).isSuccess) {
            if(!(localScore == networkScore)) GOTO 1.2.1
         } else {
            blacklist peer
            GOTO 1.2.1
         }
}
\end{minted}

\subsection{Bootstrap}

\subsubsection{Download headers:}

\begin{minted}{java}
if(PoPoW) {
   1.1.1. Send GetPoPoWProof(suffix = Max(MinimalSuffix, BlocksToKeep)) for all connections
   1.1.2. On receiving PoPoWProof, apply it to History
          /* History should be able to determine whether this PoPoWProof is better
             than its current best header chain */
} else {
   updateHeadersChainToBestInNetwork()
}
\end{minted}

\subsubsection{Download the initial State to start processing transactions:}

\begin{minted}{java}
if(ADState == true) {
   Initialize state with the state root hash from the block header BlocksToKeep ago
} else if(BlocksToKeep < 0 || BlocksToKeep > History.headersHeight) {
   Initialize state with the genesis State
} else {
   /* We need to download the full state from BlocksToKeep back in history.
      TODO what if we can download state only "BlocksToKeep - N" or "BlocksToKeep + N" blocks back? */
   2.1. Request the historical UTXOSnapshotManifest for at least BlocksToKeep back
   2.2. On receiving UTXOSnapshotManifest:
        UTXOSnapshotManifest.chunks.foreach ( chunk => request chunk from sender() /*Or from a random full node*/ )
   2.3. On receiving UTXOSnapshotChunk
        State.applyChunk(UTXOSnapshotChunk) match {
          case Success(Some(newMinimalState)) => GOTO 3
          case Success(None) => stay at 2.3
               /* we need more chunks to construct the state. TODO periodically request missed chunks */
          case Failure(e) => ???
               /* UTXOSnapshotChunk or the constructed state root hash is invalid */
        }
}
\end{minted}

\subsubsection{Update State to the best headers height:}

\begin{minted}{java}
if(State.bestHeader == History.bestHeader) {
   // Do nothing, State is already updated
} else if(VerifyTransactions == false) {
   /* Just update the State root hash to the best header in history */
   State.setBestHeader(History.bestHeader)
} else {
   /* we have a headers chain better than our best full block */
   3.1. assert(history contains header chain from State.bestHeader to History.bestHeaders)
        History.continuation(from = State.bestHeader, size = ???).get.foreach { header =>
          sendToRandomFullNode(GetBlockTransactionsForHeader(header))
          if(ADState == true) sendToRandomFullNode(GetADProofsForHeader(header))
        }
   3.2. On receiving modifiers ADProofs or BlockTransactions
        /* TODO History should return a non-empty ProgressInfo only if it contains both ADProofs
           and BlockTransactions, or it contains BlockTransactions and ADState==false */
        if(History.apply(modifier) == Success(ProgressInfo)) {
           if(State().apply(ProgressInfo) == Success((newState, ADProofs))) {
              if(ADState==false) ADProofs.foreach ( ADProof => History.apply(ADProof))
              if(BlocksToKeep>=0) /* remove BlockTransactions and ADProofs older than BlocksToKeep from history */
           } else {
              /* Drop the Header from history, because its transaction sequence is not valid */
              History.drop(modifier.headerId)
           }
        } else {
           blacklistPeer
        }
        GOTO 3
}
\end{minted}

\subsubsection{GOTO regular mode}

\subsection{Regular}

Two infinite loops in different threads with the following functions inside:

\begin{enumerate}
\item UpdateHeadersChainToBestInNetwork()
\item Download and update full blocks when needed
\end{enumerate}

\begin{minted}{java}
if(State.bestHeader == History.bestHeader) {
   // Do nothing, State is already updated
} else if(VerifyTransactions == false) {
   // Just update the State root hash to the best header in history
   State.setBestHeader(History.bestHeader)
} else {
   // we have a headers chain better than our best full block
   3.1. Request transaction ids from all headers without transactions
        assert(history contains header chain from State.bestHeader to History.bestHeaders)
        History.continuation(from = State.bestHeader, size = ???).get.foreach { header =>
          sendToRandomFullNode(GetTransactionIdsForHeader(header))
          if(ADState == true) sendToRandomFullNode(GetADProofsForHeader(header))
        }
   3.2. On receiving TransactionIdsForHeader:
        Mempool.apply(TransactionIdsForHeader)
        TransactionIdsForHeader.filter(txId => !MemPool.contains(txId)).foreach { txId =>
          request transaction with txId
        }
   3.3. On receiving a transaction:
        if(Mempool.apply(transaction).isSuccess) {
           Broadcast INV for this transaction
           Mempool.getHeadersWithAllTransactions { BlockTransactions =>
             GOTO 3.4 // now we have BlockTransactions
           }
        }
   3.4. (same as 3.2. from bootstrap)
}
\end{minted}
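To summarise how the node settings introduced at the beginning of this section interact, the
following self-contained sketch encodes the four flags, the constraint on BlocksToKeep, and the
mode-derivation rule given above. The class and field names are purely illustrative and do not
correspond to the actual Ergo client code.

\begin{minted}{java}
// Illustrative sketch only; names/types are not the actual Ergo client API.
public final class NodeSettings {
    final boolean adState;            // keep the state root hash only
    final boolean verifyTransactions; // download and verify block transactions
    final boolean popowBootstrap;     // bootstrap from a PoPoW proof only
    final int blocksToKeep;           // number of recent full blocks to keep; < 0 keeps all

    NodeSettings(boolean adState, boolean verifyTransactions,
                 boolean popowBootstrap, int blocksToKeep) {
        // VerifyTransactions == false requires BlocksToKeep == 0 (see above)
        if (!verifyTransactions && blocksToKeep != 0)
            throw new IllegalArgumentException("BlocksToKeep must be 0 when transactions are not verified");
        this.adState = adState;
        this.verifyTransactions = verifyTransactions;
        this.popowBootstrap = popowBootstrap;
        this.blocksToKeep = blocksToKeep;
    }

    // Mode derivation, mirroring the rule from "multimode.md" quoted above.
    String mode() {
        if (!adState && verifyTransactions && !popowBootstrap && blocksToKeep < 0)  return "full";
        if (!adState && verifyTransactions && !popowBootstrap && blocksToKeep >= 0) return "pruned-full";
        if (adState && verifyTransactions && !popowBootstrap)                       return "light-full";
        if (adState && !verifyTransactions && popowBootstrap && blocksToKeep == 0)  return "light-spv";
        if (adState && verifyTransactions && popowBootstrap && blocksToKeep == 0)   return "light-full-PoPoW";
        return "other"; // other combinations are possible
    }
}
\end{minted}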
{ "alphanum_fraction": 0.7333155366, "avg_line_length": 42.3276836158, "ext": "tex", "hexsha": "f7a5eb8383817e1bf38ecb7b718d61cccd83b5f2", "lang": "TeX", "max_forks_count": 131, "max_forks_repo_forks_event_max_datetime": "2022-03-22T01:08:16.000Z", "max_forks_repo_forks_event_min_datetime": "2017-07-19T12:46:49.000Z", "max_forks_repo_head_hexsha": "55af449ace6a7fd605130e8498dc5304f2ccffe4", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "kettlebell/ergo", "max_forks_repo_path": "papers/yellow/modifiersProcessing.tex", "max_issues_count": 886, "max_issues_repo_head_hexsha": "55af449ace6a7fd605130e8498dc5304f2ccffe4", "max_issues_repo_issues_event_max_datetime": "2022-03-31T10:21:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-07-20T21:59:30.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "kettlebell/ergo", "max_issues_repo_path": "papers/yellow/modifiersProcessing.tex", "max_line_length": 175, "max_stars_count": 424, "max_stars_repo_head_hexsha": "55af449ace6a7fd605130e8498dc5304f2ccffe4", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "kettlebell/ergo", "max_stars_repo_path": "papers/yellow/modifiersProcessing.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T13:33:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-17T12:33:06.000Z", "num_tokens": 1902, "size": 7492 }
\documentclass[10pt]{article} % options include 12pt or 11pt or 10pt % classes include article, report, book, letter, thesis \usepackage[margin=0.3in]{geometry} \begin{document} \section{Question 1, Type: rate} \textbf{[1, 2, 3, 7, 8]} \begin{itemize} \item Current admission guidelines for US colleges admissions offices result in the colleges failing to admit many top quality students. \item Cumulative high school grade is widely used in the admissions process but grade point average (GPA) in senior years is more predictive of college success than high GPA in junior years. \item Average ACT scores are widely used in college admissions but scores for only the Math and English sections are more predictive of student success in college. \item There is currently an inherent disconnect between admissions offices and graduation rates (e.g. there is no direct feedback from graduation to the admissions decisions that were made four years prior). \item Admissions officers should be more engaged in the long term goals of the university rather than optimizing for world rankings. \end{itemize} \section{Question 1, Type: sort} \textbf{[1, 2, 3, 6, 7]} \begin{itemize} \item Current admission guidelines for US colleges admissions offices result in the colleges failing to admit many top quality students. \item Cumulative high school grade is widely used in the admissions process but grade point average (GPA) in senior years is more predictive of college success than high GPA in junior years. \item Average ACT scores are widely used in college admissions but scores for only the Math and English sections are more predictive of student success in college. \item Improved admissions, resulting in improved graduation rates, will show in world rankings four years after initial implementation. \item There is currently an inherent disconnect between admissions offices and graduation rates (e.g. there is no direct feedback from graduation to the admissions decisions that were made four years prior). \end{itemize} \section{Question1, Type: Plurality} \textbf{[2, 5, 7, 1, 9]} \begin{itemize} \item Cumulative high school grade is widely used in the admissions process but grade point average (GPA) in senior years is more predictive of college success than high GPA in junior years. \item Schools should be unconcerned about brief fluctuations in college ranking scores (caused by different admissions criteria), if it results in a stronger student body. \item There is currently an inherent disconnect between admissions offices and graduation rates (e.g. there is no direct feedback from graduation to the admissions decisions that were made four years prior). \item Current admission guidelines for US colleges admissions offices result in the colleges failing to admit many top quality students. \item Currently colleges are not taking advantage of advanced statistical methods that can be used to predict student success. \end{itemize} \section{Question 2, Type: rate} \textbf{[1, 4, 7, 8, 10]} \begin{itemize} \item Circumstance may be highly misleading in an interview scenario as an interviewee's demeanor may depend highly on external, hidden circumstances. \item Interviews can actually be harmful to the hiring process, undercutting the impact of other, more valuable information about interviewees. \item When interviewees respond randomly to interview questions, the interviewer has a strong belief that she `got to know' the interviewee (even though the responses have no bearing on the actual beliefs of the interviewee). 
\item Interviewers naturally turn irrelevant information into a coherent narrative, biasing their conclusions. \item Interviews should be used to test job related skills. \end{itemize} \section{Question 2, Type: sort} \textbf{[2, 4, 6, 7, 10]} \begin{itemize} \item An interviewee's conduct may be interpreted differently by different interviewers. \item Interviews can actually be harmful to the hiring process, undercutting the impact of other, more valuable information about interviewees. \item When interviewees respond randomly to interview questions, the interviewer is unable to detect this trend. \item When interviewees respond randomly to interview questions, the interviewer has a strong belief that she `got to know' the interviewee (even though the responses have no bearing on the actual beliefs of the interviewee). \item Interviews should be used to test job related skills. \end{itemize} \section{Question2, Type: Plurality} \textbf{[4, 3, 1, 2, 8]} \begin{itemize} \item Interviews can actually be harmful to the hiring process, undercutting the impact of other, more valuable information about interviewees. \item Unstructured, `get-to-know' interviews are becoming popular in the workspace and in college admissions, yet these form a poor metric for predicting the future job performance of the interviewee. \item Circumstance may be highly misleading in an interview scenario as an interviewee's demeanor may depend highly on external, hidden circumstances. \item An interviewee's conduct may be interpreted differently by different interviewers. \item Interviewers naturally turn irrelevant information into a coherent narrative, biasing their conclusions. \end{itemize} \section{Question 3, Type: rate} \textbf{[3, 4, 5, 6, 8]} \begin{itemize} \item The fact that a pleasurable activity released dopamine is uninformative, as dopamine is released while playing video games, taking drugs and while partaking in any form of pleasurable activity. \item The American Journal of Psychiatry has published a study showing that at most 1 percent of video game players might exhibit characteristics of an addiction. \item The American Journal of Psychiatry has published a study showing that gambling is more addictive than video games. \item The American Journal of Psychiatry has published a study showing that the mental and social heath of the purported video game addicts is no different from individuals who are not addicted to video games. \item We and our children are `addicted' to new technologies because they improve our lives or are plainly enjoyable to use. \end{itemize} \section{Question 3, Type: sort} \textbf{[1, 2, 3, 7, 10]} \begin{itemize} \item Video gaming is not damaging or disruptive to one's life and thus should not be compared to a drug. \item Dopamine levels that are released while playing video games are vastly lower than those released while taking a drug such as methamphetamine. \item The fact that a pleasurable activity released dopamine is uninformative, as dopamine is released while playing video games, taking drugs and while partaking in any form of pleasurable activity. \item Treating the immoderate playing of video games as an addiction is pathologizing relatively normal behavior. \item Using video gaming to relax does not constitute an addiction in much the same way as watching sports is not addictive. \end{itemize} \section{Question3, Type: Plurality} \textbf{[9, 3, 4, 10, 6]} \begin{itemize} \item Evidence for addiction to video games is virtually nonexistent. 
\item The fact that a pleasurable activity released dopamine is uninformative, as dopamine is released while playing video games, taking drugs and while partaking in any form of pleasurable activity. \item The American Journal of Psychiatry has published a study showing that at most 1 percent of video game players might exhibit characteristics of an addiction. \item Using video gaming to relax does not constitute an addiction in much the same way as watching sports is not addictive. \item The American Journal of Psychiatry has published a study showing that the mental and social heath of the purported video game addicts is no different from individuals who are not addicted to video games. \end{itemize} \section{Qualitative Feedback} \begin{itemize} \item Ranking was quite challenging, especially given that several statements were highly similar or conceptually related. I think it would be helpful to have some very stupid arguments thrown in so that you can have greater variability in your measurement. Glad to see you taking an interest in social psychology, little sister! \item ranking is difficult, scoring and/or selecting is easier \item Ticking the 5 most relevant points to support my argument is the easiest form of feedback. Placing the different points in order is the most difficult and requires the most time as you have to read through each point multiple times in order to make comparisons and form the list in the order you want. Ranking the points on a scale of 1-10 is also relatively easy, but may not give the most accurate results as I often just end up choosing a random number in the region of important (6-10) or unimportant (which in this case I just left as 1 as I had 5 points already). \item I liked the first metric (separate scores) most, because I didn't have to do any artificial ranking or make difficult decisions between N things at once. I could. This is interesting though. One factor that will complicate your analysis is that more of the arguments for the first position were compelling than for the second two (though perhaps I'm biased by the format). So direct quantitative comparison between the formats, which doesn't account for the inherent differences in the distribution of argument persuasivenesses, might be misleading. \item I liked the first metric (separate scores) most, because I didn't have to do any artificial ranking or make difficult decisions between N things at once. I could. This is interesting though. One factor that will complicate your analysis is that more of the arguments for the first position were compelling than for the second two (though perhaps I'm biased by the format). So direct quantitative comparison between the formats, which doesn't account for the inherent differences in the distribution of argument persuasivenesses, might be misleading. \item Selection was easiest to complete, but I think the rank-ordering will be the most highly informative. \item Giving a score of 1 to 10 was the most difficult as it was hard to be fair and determine what I thought deserved a certain number. Choosing the 5 best arguments was fairly easy as I didn't really need to rank the statements. Dragging to rank all the choices was somewhat difficult but visually it was simple and easy because I was able to see my choices in order, rather than just attributing them a number 1 through 10. \item Ranking is the most difficult. I prefer the format that lets me choose on a scale from 1-10 how strong I think the argument is. 
\item Assigning values from 1-10 was the most difficult voting format. Simple selection was the easiest, and ranking fell in the middle. I believe simple selection will produce the most coherent points, but it may be a very small difference between ranking and selection. Assigning values, while probably being better to analyze statistically, will probably produce the worst bias. \item I think ranking was the easiest format in this survey \item The checkbox question was the easiest to complete. The ranking question took the longest - I put greater emphasis on the first 5 ranks and cared less about the remaining 5. In the coring question from 1-10 I felt that my numbers were fairly arbitrary and I did not need to use the full 1-10 scale (only used scores 1,5,6,7,8,9). \item 2/3rds of everything is irrelevant. What matters I suppose is being able to justifiably argue your point in a way which will support the evidence that preceded. Furthermore question the obvious and simply do what you can with what you have in the time you have left (desperado) \item Question 1's ranking scheme was the easiest as it's easier to visually arrange points. However, Question 2's ranking scheme might produce a better sub-selection of 5 because the focus was on finding only the most relevant points (minimizing cognitive load a bit). \item Ranking was more difficult than selection. The selection will result in the most coherent sub-selection of 5 points. The constraint of only choosing the 5 most relevant points necessitates the person to do some sort of ranking. I like the simplicity of selecting the 5 points (the third format presented), but my preferred format of scoring each point from 1 to 10. This allowed me to assign 1 to the points I would never use and also put together a list of the points I would use in my debate. Furthermore, the usable points are now ranked in order, which may be handy when selecting points in a debate. \item Ranking was more difficult. Particularly when needing to evaluate several equally poor statements, needing to determine an ordering among those was not as easy as simply assigning a score to each statement. I believe that the scoring mechanism will allow you to see definitive separation between stronger and weaker statements as the gap between their position/score can be demonstrably widened. \item Was confusing that the first two questions went in opposite directions - easy to miss \item I thought ranking the options (Question 3) was the best approach for comparing arguments. It's easier and faster to compare a given argument with two or one adjacent arguments than it is to judge them in absolute terms. I think the formats in Questions 2 and 3 are probably equivalent for selecting the best subset of five arguments. I disliked the format in Question 1 because it requires more work to compare different arguments. \item the drag-and-drop method facilitates focus on comparing two points when I repeatedly ask myself "is this option better than the one above it?" whereas the "select top 5" makes me compare the option in question to up to 5 others (if I've already selected 5) to decide if it deserves to be selected above one of the others; and the "rate each option" voting mechanism forces me to weigh up the relative strength of each option against all the other options in order to balance my scoring. 
\item Several theorems in game theory and social science [Arrow's, Gibbard–Satterthwaite] state that there's no excellent method of taking preferences from the members of a group and building a set of preferences (or single top choice) that the group would agree with. There might be a loophole around "separate into a top half and a bottom half", but it seems unlikely. Whatever mechanism you decide on will be vulnerable to some particular set of participant votes. That said, it may be possible to find a mechanism that works well for common voting patterns. It's also not clear why collecting LESS data (ordinal or top-5) would ever be more useful than collecting the full scores, unless perhaps the task of scoring 1-10 is more noisy for reasons of cognitive load and greater breaking of IIA. But certainly whatever math is run on the top-5 data could instead be run on the top 5 of the ranking data. Unless the aim of this research is to show that less taxing question formats are less noisy, I don't see the point. Overall, I suspect that I'm about to click through and get told I've been lied to and the real purpose of this research is something else altogether. \item I liked the voting format that automatically resorted the choices as they were selected. This made it visually easy to follow the order of 10 items and reorder as needed. I also liked the "choose the best 5" voting format, because I didn't have to select options that I felt were irrelevant, and also because I didn't have to put them in an order and found many choices to be equally as good as another. It was also quick and easy. I disliked the "insert order" voting option because it was cumbersome to go back and re enter numbers and ensure I didn't rank multiple options with the same number. I liked it better when it resorted for me.. \item Drag drop provides the most feedback -- you can immediately see the other items reordered. \item I found that question 1 required a fair amount of previous understanding of the US college application process and of acronyms used (such as ACT). I wasn't clear on what a few of the statements meant, not being familiar with the US system myself. I found selecting 5 relevant points the easiest selection, probably because I didn't need to have organized my thoughts as much as the first two that required specific ranking or scoring. I therefore believe that the relevant selection will probably result in the most whereby subsection of 5 points :) \item I thought tanking was a bit more difficult. Trying to assess the strength of a single sentence set of claims based on a value of 1-10 is arbitrary. I thought it was a fascinating way to ask questions and fruitful to uncover interesting dynamics in the way that questions are answered. \end{itemize} \end{document}
\documentclass[letterpaper,twocolumn,10pt]{article}
\usepackage{usenix,epsfig,endnotes}
\begin{document}

%don't want date printed
\date{}

\title{\Large \bf OpenBSD - pf+rdomains create splendid multi-tenancy firewalls}

\author{
{\rm Philipp B\"uhler}\\
sysfive.com GmbH
}

\maketitle

% Comment it out when you first submit the paper for review.
\thispagestyle{empty}

\subsection*{Abstract}
This paper presents a working OpenBSD environment establishing a multi-tenant firewall with {\tt pf(4)}, {\tt rdomain/pair(4)} and {\tt relayd(8)} as the only workhorses.
The environment shows how to provision, operate, isolate and manage all the components that are needed - and which ones are not.
It reveals how even complex setups can be developed, tested and provisioned in a straightforward way.
Besides detailing the OpenBSD nuts and bolts, there is a quick walkthrough on how to create testing setups easily using Vagrant in preparation for live usage.
For easy re-enacting, all OpenBSD and Vagrant configuration being used will be available online.

\section{Introduction}
Modern networks can grow into quite complex setups even at a small scale, and once security considerations come into play, some sophisticated segregation is a must.
While this might be manageable for a single tenant (customer/project/..), it can easily become overwhelming when multiple tenants are added.
The components segregated in such a 'simple' network are typically:
\begin{itemize}
\item {\tt Management} \-- IPMI, KVM, PXE, monitoring, backup, ..
\item {\tt Services} \-- Proxies, email, ntp, DNS, ..
\item {\tt Application} \-- devel, test/stage, main/live, DR
\item {\tt Data} \-- RDBMS, NoSQL, LDAP, redis, ..
\item {\tt Others} \-- payment services, weather widget, ..
\end{itemize}
In real-world environments this adds up to numbers like around 40 interfaces, 250-300 pf rules and around 30 relayd rules.
Traditionally the approach is to either use a massively detailed firewall configuration to separate the tenants from each other - or even to use multiple physical firewall servers.
In situations where multiple tenants use overlapping TCP/IP addressing schemes, this adds yet another problem layer to deal with.

Routing domains ({\tt rdomains}) can help substantially to keep tenants isolated from each other.
They also help to circumvent routing problems in case of overlapping IP addresses.
If such setups are combined with streamlined testing and provisioning, the installation, rollout and maintenance of multi-tenant firewalls becomes far more feasible.

\section{Traditional Approach}
The common approaches to such configurations follow the 'patterns' below - each posing its own set of problems.

\subsection{handcraft}
A very carefully handcrafted pf.conf is possible, but not feasible in the long term, especially when a shared team has to operate it.
There will be isolated understanding/know-how of certain ``quirks''.
It is also difficult to replicate into a testing environment to evaluate bigger changes before they happen.
With skipped testing, it is common that changes result in failures and can trigger panic situations - e.g. when one tenant can suddenly ``see'' another.

\subsection{templates}
The use of configuration blocks or a similar templating approach might help with speed of deployment, but can just multiply the havoc from above.

\subsection{multiple servers}
Using multiple servers ensures segregation more easily, but leads to new problems.
Which tenant(s) are on which firewall? Are there enough IPv4 addresses left to install yet another new server?
Would you have enough rackspace to host the next one, and how long does it take to get the machine physically into the rack and operable?

\section{Routing Domains}
The use of rdomains addresses several of the problems above.
With rdomains, each tenant gets its own network/routing configuration that is logically fully separated from the other tenants without additional firewall configuration.
Additional configuration is only needed to allow certain cross communication - e.g. to a centralized backup ``tenant''.

\subsection{Setup}
OpenBSD uses the rdomain stack by default.
If no further setup happens, everything is in {\tt rdomain 0}.
This fact is somewhat hidden from the user, since the default rdomain is not shown in e.g. {\tt ifconfig(8)}.
To create additional rdomains, one has to put interfaces into them by flagging the interface with {\tt ifconfig(8)}.
Daemons can then be set to be started within such an rdomain, and pf.conf can make use of this by segregating configuration (includes/anchors) per rdomain.
If need be, the special interface {\tt pair(4)} can be used to route between rdomains directly.

\subsection{Tools}
Several tools for configuration and/or debugging, as well as daemons in the base system, are aware of rdomains in OpenBSD.
Below is a brief list of how to invoke these for a given rdomain, followed by more detail on how to configure the system and daemons to be properly aware of rdomains.
\begin{itemize}
\item {\tt netstat -T <tableid>} \-- show L3 network information
\item {\tt route -T <tableid>} \-- show/modify routing
\item {\tt route -T <tableid> exec somedaemon} \-- start {\tt somedaemon} in this rdomain
\item {\tt arp/ndp -V <tableid> } \-- show L2 network information
\item {\tt ping -V <tableid> } \-- emit ping packets from this rdomain
\item {\tt traceroute -V <tableid> } \-- trace routes from this rdomain
\item {\tt nc -V <tableid> } \-- bind socket in this rdomain
\item {\tt ps -o rtable } \-- adds the ID of the rdomain the process runs within
\item {\tt pkill /pgrep -T <tableid> } \-- limit results to this rdomain
\item {\tt tcpbench -V <tableid> } \-- run benchmark in this rdomain
\item {\tt telnet -V <tableid> } \-- connect using this rdomain
\item {\tt ftp-proxy } \-- via {\tt pf(4)} tagging
\item {\tt bgpd/ospfd/ripd/eigrpd/ldpd } \-- via config options
\item {\tt authpf } \-- via multiple {\tt pf(4)} anchors
\item {\tt relayd, rc.d, rcctl, ntpd, ifconfig, hostname.if } \-- see extra subsections
\end{itemize}

\subsubsection{route(8)}
Daemons or other tools that are not directly aware of rdomains can be started via route(8) with the {\tt exec <daemon>} option. Examples:
\begin{verbatim}
route -T 23 exec iked -ddvvf \
  /etc/iked.conf.23
route -T 42 exec iked -ddvvf \
  /etc/iked.conf.42
\end{verbatim}
Be aware that daemons sharing information in the kernel, such as {\tt ntpd(8)}, can create havoc if invoked multiple times.

\subsubsection{pf.conf(5)}
The pf(4) configuration supports rdomains with three different keywords: 'on rdomain', 'rtable' and 'anchor'. E.g.
\begin{verbatim}
pass in on rdomain 21 from $tenant-app \
   to $tenant-email
#
pass in from $backup to <tenant1> rtable 21
#
anchor "tenant1.21" on rdomain 21 {
   block
   pass out proto tcp from any to any \
      port { 80 443 }
}
anchor "tenant2.41" on rdomain 41 {
   block
   match out to any nat-to $ext-41-ip \
      rtable 0 tag TENANT_41
   pass out tagged TENANT_41
}
\end{verbatim}

\subsubsection{hostname.if(5)}
Creating persistent rdomains is done by assigning {\tt rdomain N} in the interface config file.
Since any {\tt rdomain(4)} configuration removes existing {\tt inet/inet6} settings, it is important to set the addresses after the rdomain keyword.
Examples for physical, vlan(4) and carp(4) interfaces look like this:
\begin{verbatim}
/etc/hostname.em0:
rdomain 0
inet 10.40.40.254/24

/etc/hostname.vlan41:
description "gw-vlan-41"
vlan 41 vlandev em2
rdomain 41
inet 10.40.41.1/24

/etc/hostname.carp1:
description "gw-carp-1"
rdomain 0
vhid 1 pass onetwomany carpdev em0
inet 10.60.5.1/24
\end{verbatim}
To include 'patch' and routing information for the specialized {\tt pair(4)} interfaces, the ordering is amended as follows: rdomain, inet, patch, route; e.g.:
\begin{verbatim}
/etc/hostname.pair0:
description "gw-pair-0"
rdomain 0
inet 10.200.21.1/30

/etc/hostname.pair21:
description "gw-pair-21"
rdomain 21
inet 10.200.21.2/30
patch pair0
!/sbin/route -T 21 -qn add default 10.200.21.1
\end{verbatim}

\subsubsection{rc.d(8)}
For automated startup, {\tt rc.d(8)} has 'daemonname\_rtable=N' support, which defaults to 0.
Consequently this can be configured using {\tt rcctl(8)}:
\begin{verbatim}
$ doas rcctl set httpd status on
$ doas rcctl set httpd rtable 21
$ doas rcctl get httpd
httpd_class=daemon
httpd_flags=
httpd_rtable=21
httpd_timeout=30
httpd_user=root
$ doas rcctl start httpd
httpd(ok)
$ ps auxo rtable | grep http  # last column
www 46042 0.0 0.7 744 1740 ?? Sp \
   4:43PM 0:00.00 httpd: server (h 21
\end{verbatim}
In the above example, httpd(8) was started as if manually invoked with {\tt route -T 21 exec httpd}.

\subsubsection{ntpd(8)}
This is a 'famous' example of a daemon that must not be started multiple times to cover several routing domains: each invoked daemon will try to overwrite the kernel's clock, resulting in skews that can cover 'months' within a few wallclock seconds.
To overcome this, ntpd has been taught to operate in multiple rdomains from one invocation.
The outgoing {\tt server} queries use the rdomain ntpd was invoked in (typically '0'), while {\tt listen} sockets can additionally be bound to other rdomains, e.g. for {\tt /etc/ntpd.conf}:
\begin{verbatim}
server jp.pool.ntp.org
listen 127.0.0.1
listen 127.0.0.1 rtable 69
listen 10.20.41.1 rtable 41
listen 10.20.21.1 rtable 21
\end{verbatim}

\subsubsection{pair(4)}
With pair(4) and route(8) one can interconnect rdomains.
Being virtualized Ethernet, it needs two endpoints that are then patched to each other:
\begin{verbatim}
$ doas ifconfig pair0 rdomain 0 10.200.21.1/30 up
$ doas ifconfig pair21 rdomain 21 10.200.21.2/30 up
$ doas ifconfig pair0 patch pair21
\end{verbatim}
The above ad-hoc setup can be persisted by using hostname.if as below:
\begin{verbatim}
/etc/hostname.pair0:
description "gw-pair-0"
rdomain 0
inet 10.200.21.1/30

/etc/hostname.pair21:
description "gw-pair-21"
rdomain 21
inet 10.200.21.2/30
patch pair0
!/sbin/route -T 21 -qn add default 10.200.21.1
\end{verbatim}
The pair(4) devices can be added to a bridge(4), too, but STP configuration must be added to avoid looping packets.

\subsubsection{pf.conf(5)}
The use of rdomains reduces the burden of being careful about first/last-match-wins semantics or the use of 'quick'.
By using 'anchor X on rdomain N', the rules needed for a tenant are completely isolated and the different match rules won't influence each other.
In the following example the first three 'match' lines will influence each other and the result will not be the desired one: the first 'match' will change the 'to' address and thus the second 'match' will not take effect.
Using anchors on rdomains, this won't happen.
Packets on rdomain 41 will never be processed by the rules of the anchor on rdomain 21, and vice versa.
\begin{verbatim}
match out on $ext inet proto { udp tcp } \
   from <net_tenant1> to !<rfc1918> \
   nat-to $nat_tenant1
match out on $ext inet proto tcp from any \
   to !<rfc1918> port 25 \
   nat-to $nat_tenant1_mail
match out on $ext inet proto { udp tcp } \
   from <net_tenant2> to !<rfc1918> \
   nat-to $nat_tenant2

#rdomains:
anchor "tenant1" on rdomain 21 {
   match out on $ext inet proto { udp tcp } \
      from <net_tenant1> to !<rfc1918> \
      nat-to $nat_tenant1
   match out on $ext inet proto tcp from any \
      to !<rfc1918> port 25 nat-to \
      $nat_tenant1_mail
}
anchor "tenant2" on rdomain 41 {
   match out on $ext inet proto { udp tcp } \
      from <net_tenant2> to !<rfc1918> \
      nat-to $nat_tenant2
}
\end{verbatim}
Using includes, the manageability of pf.conf can improve dramatically in terms of oversight and delegation.
\begin{verbatim}
#all-in-one /etc/pf.conf:
set skip on lo0 enc0 enc1
set optimization aggressive
# important below!
block in from $tenant1 to $tenant2
pass from $tenant1 to any \
   nat-to $tenant1_public
match out from $tenant2 to any nat-to \
   $tenant2_public
#on request call 3am
match out from any to any nat-to (egress)

###

#rdomains with includes /etc/pf.conf:
include "/etc/pf/globals.conf"
include "/etc/pf/management.conf"
anchor "tenant1" on rdomain 21 {
   include "/etc/pf/tenant1.conf"
}
anchor "tenant2" on rdomain 41 {
   include "/etc/pf/tenant2.conf"
}
# EOF
\end{verbatim}

\subsubsection{relayd(8)}
As of now {\tt relayd(8)} is not aware of routing domains, but with the aforementioned 'route exec' some of the benefits can still be had.
Short of full-blown support, a little patch for relayd is available[1] which enables it to use multiple 'anchors' in pf.

\subsection{other quirks}
For certain cases, the defaults or limitations can be overcome by recompiling or by creative network interface stacking.

\subsubsection{limit in number}
A maximum of 256 routing domains is possible with a GENERIC kernel as in the default install.
If need be, this can be changed via the {\tt RT\_TABLEID\_MAX} define in {\tt sys/socket.h}.
This will need a full 'release' build -- or you know what you are doing.

\subsubsection{carp(4)}
Interfaces of this driver must be in the same rdomain as their {\tt carpdev}.
Besides using {\tt vlan(4)} as the underlying device, it is also possible to use {\tt vether(4)} to ``break'' the linkage between carp and its physical counterpart.
To make use of the latter, the physical interface and two vether interfaces are joined in the same {\tt bridge(4)} and the {\tt carp(4)} devices are stacked on top of it:
{\tt https://gist.github.com/double-p/d3a20fded7e8ced30735705e1dfea5c4}

\section{External Tooling}
Testing is essential, and it is more fun when one can do it ``wherever'' - like on a trip, on the laptop.
Using a stack of 'packer', 'Vagrant' and 'ansible' it is feasible to run extensive testing, be it to get more familiar with the concept or to debug a live problem -- on the way home.

\subsection{Testbed Layout}
The following configuration results in a virtualized networking environment that consists of two tenant ``clients'' connected to one firewall, this firewall connected to a second one, and behind the second firewall a mock internet ``server''.

\subsection{Provisioning}
The following setup uses {\tt packer}, {\tt Vagrant} with {\tt Virtualbox} and {\tt ansible} to create and run the testbed.
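As a rough sketch (the template, inventory and playbook file names here are placeholders, not taken from the referenced repository), the whole provisioning round trip boils down to three commands:
\begin{verbatim}
# build the OpenBSD 'base box' image
$ packer build openbsd-vbox.json

# boot the five testbed VMs defined
# in the Vagrantfile
$ vagrant up

# generate/upload hostname.if files
$ ansible-playbook -i hosts testbed.yml
\end{verbatim}
The individual tools and what they contribute are described below.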
\subsubsection{packer}
This tool takes a stock OpenBSD install image and converts it into a {\tt vbox} VM image, which is the base for all five VMs the testbed needs.
It leverages the {\tt autoinstall(8)} feature of OpenBSD to achieve this.
Furthermore it installs {\tt sudo(8)} and {\tt python(1)} to enable Vagrant and ansible.

\subsubsection{Vagrant}
Based on the above vbox 'base box', the referenced configuration sets the parameters needed to launch the testbed.
Most notable are the network settings.
The duo of Vagrant and Virtualbox will put VMs with adjacent (same subnet) networks on the same virtualized cable.
Given the corresponding routing, the VMs' networks won't see each other unless explicitly allowed to by routing, rdomains and pf.

\subsection{Automation}
To lessen the burden of configuring the networking, {\tt ansible} can be used to automatically generate {\tt hostname.if(5)} files and call {\tt netstart(8)} with those.
On top of the packer/Vagrant based VMs, a reproducible testbed network is ensured and available within minutes.

\subsubsection{ansible}
The referenced {\tt ansible} code reads global configuration data from {\tt group\_vars/testbed} and host/VM specific data from {\tt host\_vars/hostname}.
It takes this data to fill in jinja2 templates in {\tt roles/firewall/templates/} to generate and upload the needed {\tt hostname.if(5)} files on the testbed firewall/rdomain host.

\section{Acknowledgments}
The following people have helped me directly or indirectly by writing the software used, documentation, other talks etc.
\begin{itemize}
\item "Peter Hessler" \-- for the talks, experiences and help with rdomains
\item "Ingo Schwarze" \-- for helping out with roff/gpresent to create the presentation slides
\item "OpenBSD developers" \-- for adding pf, rdomains and OpenBSD itself
\item "sysfive.com GmbH" \-- for giving enough working hours to get this done
\end{itemize}

\section{Availability}
This paper, the presentation slides, the Vagrant and packer templates and also the ansible code can be found on GitHub:
\begin{center}
{\tt https://github.com/double-p/smtf/}
\end{center}

\end{document}
\subsection{Discovery of the First Gravitational Wave}
\hspace{0.5cm} GW150914 was detected for the first time by the two detectors of LIGO at 09:50:45 UTC on the 14th of September, 2015 \cite{PhysRevLett.116.061102}. The signal opened a gateway to a deeper understanding of astronomy and particle physics \cite{Abbott_2016}. The source was identified as a binary black hole coalescence. This detection is groundbreaking in terms of both GWs and binary black hole systems. Detecting gravitational waves was infeasible with the technology available when Einstein formulated the theory of relativity; experimental searches for the signal began in the 1960s with resonant mass detectors. Interferometric detectors were proposed in the 1960s and 1970s and were finally set up by the 2000s. Indirect evidence for the presence of GWs was obtained by Hulse and Taylor through the discovery of the binary pulsar system PSR B1913+16, which showed a gradual loss of orbital energy \cite{PhysRevLett.116.061102}.\\

\subsubsection{Source of GW150914}
\hspace{0.5cm} The source of GW150914 is a binary black hole merger. On analysis it was theorized that the two black holes, with approximate masses of 36 and 29 solar masses, formed an undisturbed binary system that collapsed into a single black hole \cite{Abbott_2016}. Studies suggested that the mass of the system decreased considerably through the merger, indicating the emission of gravitational waves \cite{Ligo_org},\cite{LIGO_org}. In the merger, an energy equivalent to about three solar masses was converted into gravitational wave energy \cite{LIGO_org}. This system is located 1.3 billion light years away from our solar system. The coalescence produced tremendous power and energy during the final 20 milliseconds of the merger. The increase of the tangential velocity to 60 percent of the speed of light, the short separation of 350 km between the two objects, and the orbital frequency of 75 Hz (half the gravitational wave frequency of 150 Hz) confirm that the signal came from the merger of two enormous black holes: no compact objects other than black holes can come that close without merging, not even neutron stars, which would not have the required mass \cite{PhysRevLett.116.061102},\cite{LIGO_org}.\\

\subsubsection{Detection of GW150914}
\hspace{0.5cm} The two LIGO detectors, in Washington State and Louisiana, received the GW150914 signal while they were running in engineering mode. Hence, a 16-day analysis was required to confirm that the signal was legitimate and not a test simulation \cite{LIGO_org}. To confirm its validity, the environmental sensors were checked for disturbances with properties similar to the GW150914 signal. At the time, LIGO was the only observatory taking data: the Virgo detector was not operational since it was being upgraded, while GEO 600 was not sensitive enough to catch the signal \cite{PhysRevLett.116.061102}. The LIGO detector at Hanford received the signal 7 milliseconds later than Livingston. The signal was processed only 3 minutes after detection. It lasted for 0.2 seconds, during which its frequency increased over 8 cycles from 35 Hz to 150 Hz. When converted to sound, the signal lies in the audible range and resembles the chirp of a bird, and was therefore termed the chirp signal \cite{PhysRevLett.116.061102},\cite{LIGO_org}. The LIGO detectors had successfully detected the gravitational wave signal emitted from a binary black hole system in 2015.
The detection confirmed a key prediction of the general theory of relativity, and the observations proved influential with regard to both the signal itself and the existence of binary black hole mergers \cite{PhysRevLett.116.061102}.
\pagebreak
\clearpage \chapter{\textbf{Métodos}}\label{methods} \section{Inicio de um capítulo}\label{met:chapters} In the two-sided layout a new chapter always begins on a right side. Therefore, empty pages will be generated automatically at the end of the previous chapter if necessary. The one-sided layout does not do this. If you want an additional empty page, you have to insert it manually. \section{Tabelas grandes}\label{met:table} Tables are a bit complex to generate in latex, a big help are table generators like: \url{https://www.tablesgenerator.com/} . Small tables fit within the text flow. \\ \begin{table} [ht] \begin{center} %\centering \caption[Small table]{Material needed for gels } \begin{tabular}{|l|C{3cm}|C{3cm}|} \hline material (conc.) & volume per gel & conc. in gel\\ \hline Chemical 1 (40~mg/mL) & 15~\textmu L & 3.00~ \textmu g/mL\\ \hline TBS & 15~\textmu L & ---\\ \hline Chemical 2 & 15~\textmu L & 3.75~\textmu g/mL\\ \hline Chemical 3 & 27.5~\textmu L & 5.7x10$^5$ cells\\ \hline Chemical 4 & 27.5~\textmu L & 5.7x10$^5$ cells\\ \hline \hline Chemical 5 & 100~\textmu L & 5.00 mg/mL \\ \hline \end{tabular} \label{tabfibrin} \end{center} \end{table} Bigger tables can be put on separate pages. \\ \begin{table} [p] %p makes to table to show up on a separate page. This is useful when the table is really big or two tables should be aligned underneath each other on a separate page. \begin{center} \caption[FACS analysis]{A lot of complex columns and rows } \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{Tube 1} & & \multicolumn{2}{c|}{Tube 2} & & \multicolumn{2}{c|}{Tube 3} & \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Tube 4} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{1}{|c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} AB1 & 20~\textmu L & & AB2 & 20~\textmu L & & AB3 & 20~\textmu L & & AB4 &20~\textmu L \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} APC & MSC- & & FITC & MSC- & & PE & MSC- & & PE & MSC- \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{1}{|c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume}\\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} AB5 & 2~\textmu L & & AB6 & 5~\textmu L & & AB7 & 5~\textmu L & & AB8 & 5~\textmu L \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} FITC & MSC+ & & APC & MSC+ & & \scriptsize{PerCP-Cy5.5} & MSC+ & & \scriptsize{PerCP-Cy5.5} & MSC- \\ \hline \end{tabular} \label{tabfacsmsc} \end{center} \end{table} \begin{table} [p] \begin{center} %\centering \caption[Another list of antibodies]{Some more antibodies for FACS analysis } \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{Tube 1} & & \multicolumn{2}{c|}{Tube 2} & & \multicolumn{2}{c|}{Tube 3} & \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Tube 4} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{1}{|c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & & \multicolumn{1}{c|}{Antibody} & 
\multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} AB1 & 20~\textmu L & & AB2 & 20~\textmu L & & AB3 & 5~\textmu L & & AB4 & 5~\textmu L \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} FITC & EC+ & & FITC & EC+ & & FITC & EC+ & & FITC & EC- \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{1}{|c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Antibody} & \multicolumn{1}{c|}{Volume}\\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} AB5 & 20~\textmu L & & AB6 & 2~\textmu L & & AB7 & 20~\textmu L & & AB8 & 20~\textmu L \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} APC & EC- & & APC & EC- & & APC & EC+ & & APC & EC+ \\ \hline \end{tabular} \label{tabfacsec} \end{center} \end{table} \blindtext \section{Secção} \subsection{Subsecção} \subsubsection{Subsubsecção}\label{subsub} Here we have some random equations: \begin{equation}\label{Volume} V = d^2\cdot \frac{\pi}{4}\cdot L \end{equation} \begin{equation}\label{Surface} S = d^2 \cdot \frac{\pi}{4} + \pi \cdot d \cdot L \thickapprox \pi \cdot d \cdot L \end{equation} \begin{equation}\label{Volume1} V = \frac{S^2}{\pi ^2 \cdot L^2} \cdot \frac{\pi}{4}\cdot L = \frac{S^2}{4\pi \cdot L} \end{equation} \begin{equation}\label{Length} L = \frac{S^2}{4\pi \cdot V} \end{equation} \begin{equation}\label{stdv} s = \sqrt{\frac{\sum_{i=1}^n (x_i-\bar{x})^2}{(n-1)}} \end{equation} \addtocontents{toc}{\vspace{0.8cm}}
\lab{Profiling}{Profiling} \objective{Efficiency is essential to algorithmic programming. Profiling is the process of measuring the complexity and efficiency of a program, allowing the programmer to see what parts of the code need to be optimized. In this lab we present common techniques for speeding up Python code, including the built-in profiler and the Numba module. } \section*{Magic Commands in IPython} % ---------------------------------------- IPython has tools for quickly timing and profiling code. These ``magic commands'' start with one or two \li{\%} characters---one for testing a single line of code, and two for testing a block of code. \begin{itemize} \item \li{<p<\%time>p>}: Execute some code and print out its execution time. \item \li{<p<\%timeit>p>}: Execute some code several times and print out the average execution time. \item \li{<p<\%prun>p>}: Run a statement through the Python code profiler,\footnote{{\color{purple}{\texttt{\%prun}}} is a shortcut for \texttt{cProfile.run()}; see \url{https://docs.python.org/3/library/profile.html} for details.} printing the number of function calls and the time each takes. We will demonstrate this tool a little later. \end{itemize} \begin{lstlisting} # Time the construction of a list using list comprehension. <g<In [1]:>g> <p<%time>p> x = [i**2 for i in range(int(1e5))] <<CPU times: user 36.3 ms, sys: 3.28 ms, total: 39.6 ms Wall time: 40.9 ms>> # Time the same list construction, but with a regular for loop. <g<In [2]:>g> <p<%%time>p> # Use a double %% to time a block of code. <g<...:>g> x = [] <g<...:>g> for i in range(int(1e5)): <g<...:>g> x.append(i**2) <g<...:>g> <<CPU times: user 50 ms, sys: 2.79 ms, total: 52.8 ms Wall time: 55.2 ms>> # The list comprehension is faster! \end{lstlisting} % Use \li{<p<\%time>p>} and \li{<p<\%timeit>p>} to select fast code snippets, functions, and algorithms (for example, using a list comprehension where possible instead of a regular loop). % For the complete list of magic IPython commands, see \url{http://ipython.readthedocs.io/en/stable/interactive/magics.html}. \subsection*{Choosing Faster Algorithms} % ------------------------------------ The best way to speed up a program is to use an efficient algorithm. A bad algorithm, even when implemented well, is never an adequate substitute for a good algorithm. \begin{problem} % Triangle path sums from Project Euler. This problem comes from \url{https://projecteuler.net} (problems 18 and 67). By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. \begin{center} \textbf{\color{red}{3}}\\ \textbf{\color{red}{7}} 4\\ 2 \textbf{\color{red}{4}} 6\\ 8 5 \textbf{\color{red}{9}} 3 \end{center} That is, $3 + 7 + 4 + 9 = 23$. The following function finds the maximum path sum of the triangle in \texttt{triangle.txt} by recursively computing the sum of every possible path---the ``brute force'' approach. \begin{lstlisting} def max_path(filename="triangle.txt"): """Find the maximum vertical path in a triangle of values.""" with open(filename, 'r') as infile: data = [[int(n) for n in line.split()] for line in infile.readlines()] def path_sum(r, c, total): """Recursively compute the max sum of the path starting in row r and column c, given the current total. """ total += data[r][c] if r == len(data) - 1: # Base case. return total else: # Recursive case. return max(path_sum(r+1, c, total), # Next row, same column. path_sum(r+1, c+1, total)) # Next row, next column. 
return path_sum(0, 0, 0) # Start the recursion from the top. \end{lstlisting} The data in \texttt{triangle.txt} contains 15 rows and hence 16384 paths, so it is possible to solve this problem by trying every route. However, for a triangle with 100 rows, there are $2^{99}$ paths to check, which would take billions of years to compute even for a program that could check one trillion routes per second. No amount of improvement to \li{max_path()} can make it run in an acceptable amount of time on such a triangle---we need a different algorithm. Write a function that accepts a filename containing a triangle of integers. Compute the largest path sum with the following strategy: starting from the next to last row of the triangle, replace each entry with the sum of the current entry and the greater of the two ``child entries.'' Continue this replacement up through the entire triangle. The top entry in the triangle will be the maximum path sum. In other words, work from the bottom instead of from the top. \begin{center} \begin{tabular}{ccccccc} \begin{tabular}{c} 3\\ 7 4\\ 2 4 6\\ \color{red}{8 5 9 3} \end{tabular} &$\longrightarrow$& \begin{tabular}{c} 3\\ 7 4\\ \color{red}{10 13 15}\\ \color{black}{8 5 9 3} \end{tabular} &$\longrightarrow$& \begin{tabular}{c} 3\\ \color{red}{20 19}\\ \color{black}{10 13 15}\\ \color{black}{8 5 9 3} \end{tabular} &$\longrightarrow$& \begin{tabular}{c} \color{red}{\textbf{23}}\\ \color{black}{20 19}\\ \color{black}{10 13 15}\\ \color{black}{8 5 9 3} \end{tabular} \end{tabular} \end{center} Use your function to find the maximum path sum of the 100-row triangle stored in \texttt{triangle\_large.txt}. Make sure that your new function still gets the correct answer for the smaller \texttt{triangle.txt}. Finally, use \li{<p<\%time>p>} or \li{<p<\%timeit>p>} to time both functions on \texttt{triangle.txt}. Your new function should be about 100 times faster than the original. \end{problem} \subsection*{The Profiler} % -------------------------------------------------- The profiling command \li{<p<\%prun>p>} lists the functions that are called during the execution of a piece of code, along with the following information. \begin{table}[H] \centering \begin{tabular}{c|l} Heading & Description \\ \hline \li{primitive calls} & The number of calls that were not caused by recursion.\\ \li{ncalls} & The number of calls to the function. If recursion occurs, the output\\ & is \texttt{<total number of calls>/<number of primitive calls>}.\\ \li{tottime} & The amount of time spent in the function, not including calls to other functions.\\ \li{percall} & The amount of time spent in each call of the function.\\ \li{cumtime} & The amount of time spent in the function, including calls to other functions.\\ \end{tabular} \end{table} \begin{lstlisting} # Profile the original function from Problem 1. 
<g<In[3]:>g> <p<%prun>p> max_path("triangle.txt") \end{lstlisting} {\small \begin{verbatim} 81947 function calls (49181 primitive calls) in 0.036 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 32767/1 0.025 0.000 0.034 0.034 profiling.py:18(path_sum) 16383 0.005 0.000 0.005 0.000 {built-in method builtins.max} 32767 0.003 0.000 0.003 0.000 {built-in method builtins.len} 1 0.002 0.002 0.002 0.002 {method `readlines' of `_io._IOBase' objects} 1 0.000 0.000 0.000 0.000 {built-in method io.open} 1 0.000 0.000 0.036 0.036 profiling.py:12(max_path) 1 0.000 0.000 0.000 0.000 profiling.py:15(<listcomp>) 1 0.000 0.000 0.036 0.036 {built-in method builtins.exec} 2 0.000 0.000 0.000 0.000 codecs.py:318(decode) 1 0.000 0.000 0.036 0.036 <string>:1(<module>) 15 0.000 0.000 0.000 0.000 {method `split' of `str' objects} 1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding) 2 0.000 0.000 0.000 0.000 {built-in method _codecs.utf_8_decode} 1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo} 1 0.000 0.000 0.000 0.000 codecs.py:259(__init__) 1 0.000 0.000 0.000 0.000 codecs.py:308(__init__) 1 0.000 0.000 0.000 0.000 {method `disable' of `_lsprof.Profiler' objects} \end{verbatim} } \section*{Optimizing Python Code} % =========================================== A poor implementation of a good algorithm is better than a good implementation of a bad algorithm, but clumsy implementation can still cripple a program's efficiency. The following are a few important practices for speeding up a Python program. Remember, however, that such improvements are futile if the algorithm is poorly suited for the problem. \subsection*{Avoid Repetition} % ---------------------------------------------- % {\small % \begin{verbatim} % ncalls tottime percall cumtime percall filename:lineno(function) % 32767/1 0.025 0.000 0.034 0.034 profiling.py:18(path_sum) % 16383 0.005 0.000 0.005 0.000 {built-in method builtins.max} % 32767 0.003 0.000 0.003 0.000 {built-in method builtins.len} % 1 0.002 0.002 0.002 0.002 {method `readlines' of `_io._IOBase' objects} % 15 0.000 0.000 0.000 0.000 {method `split' of `str' objects} % \end{verbatim} % } A clean program does no more work than is necessary. The \li{ncalls} column of the profiler output is especially useful for identifying parts of a program that might be repetitive. For example, the profile of \li{max_path()} indicates that \li{len()} was called 32,767 times---exactly as many times as \li{path_sum()}. This is an easy fix: save \li{len(data)} as a variable somewhere outside of \li{path_sum()}. \begin{lstlisting} <g<In [4]:>g> def max_path_clean(filename="triangle.txt"): <g<...:>g> with open(filename, 'r') as infile: <g<...:>g> data = [[int(n) for n in line.split()] <g<...:>g> for line in infile.readlines()] <g<...:>g> N = len(data) # Calculate len(data) outside of path_sum(). <g<...:>g> def path_sum(r, c, total): <g<...:>g> total += data[r][c] <g<...:>g> if r == N - 1: # Use N instead of len(data). 
<g<...:>g> return total <g<...:>g> else: <g<...:>g> return max(path_sum(r+1, c, total), <g<...:>g> path_sum(r+1, c+1, total)) <g<...:>g> return path_sum(0, 0, 0) <g<...:>g> <g<In [5]:>g> <p<%prun>p> max_path_clean("triangle.txt") \end{lstlisting} {\small \begin{verbatim} 49181 function calls (16415 primitive calls) in 0.026 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 32767/1 0.020 0.000 0.025 0.025 <ipython-input-5-9e8c48bb1aba>:6(path_sum) 16383 0.005 0.000 0.005 0.000 {built-in method builtins.max} 1 0.002 0.002 0.002 0.002 {method `readlines' of `_io._IOBase' objects} 1 0.000 0.000 0.000 0.000 {built-in method io.open} 1 0.000 0.000 0.026 0.026 <ipython-input-5-9e8c48bb1aba>:1(max_path_clean) 1 0.000 0.000 0.000 0.000 <ipython-input-5-9e8c48bb1aba>:3(<listcomp>) 1 0.000 0.000 0.027 0.027 {built-in method builtins.exec} 15 0.000 0.000 0.000 0.000 {method `split' of `str' objects} 1 0.000 0.000 0.027 0.027 <string>:1(<module>) 2 0.000 0.000 0.000 0.000 codecs.py:318(decode) 1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding) 2 0.000 0.000 0.000 0.000 {built-in method _codecs.utf_8_decode} 1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo} 1 0.000 0.000 0.000 0.000 codecs.py:308(__init__) 1 0.000 0.000 0.000 0.000 codecs.py:259(__init__) 1 0.000 0.000 0.000 0.000 {built-in method builtins.len} 1 0.000 0.000 0.000 0.000 {method `disable' of `_lsprof.Profiler' objects} \end{verbatim} } Note that the total number of primitive function calls decreased from 49,181 to 16,415. Using \li{<p<\%timeit>p>} also shows that the run time decreased by about 15\%. Moving code outside of a loop or an often-used function usually results in a similar speedup. Another important way of reducing repetition is carefully controlling loop conditions to avoid unnecessary iterations. Consider the problem of identifying Pythagorean triples, sets of three distinct integers $a < b < c$ such that $a^2 + b^2 = c^2$. The following function identifies all such triples where each term is less than a parameter $N$ by checking all possible triples. \begin{lstlisting} >>> def pythagorean_triples_slow(N): ... """Compute all pythagorean triples with entries less than N.""" ... triples = [] ... for a in range(1, N): # Try values of a from 1 to N-1. ... for b in range(1, N): # Try values of b from 1 to N-1. ... for c in range(1, N): # Try values of c from 1 to N-1. ... if a**2 + b**2 == c**2 and a < b < c: ... triples.append((a,b,c)) ... return triples ... \end{lstlisting} Since $a < b < c$ by definition, any computations where $b \le a$ or $c \le b$ are unnecessary. Additionally, once $a$ and $b$ are chosen, $c$ can be no greater than $\sqrt{a^2 + b^2}$. The following function changes the loop conditions to avoid these cases and takes care to only compute $a^2 + b^2$ once for each unique pairing $(a,b)$. \begin{lstlisting} >>> from math import sqrt >>> def pythagorean_triples_fast(N): ... """Compute all pythagorean triples with entries less than N.""" ... triples = [] ... for a in range(1, N): # Try values of a from 1 to N-1. ... for b in range(a+1, N): # Try values of b from a+1 to N-1. ... _sum = a**2 + b**2 ... for c in range(b+1, min(int(sqrt(_sum))+1, N)): ... if _sum == c**2: ... triples.append((a,b,c)) ... return triples ... \end{lstlisting} These improvements have a drastic impact on run time, even though the main approach---checking by brute force---is the same. 
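When optimizing like this, it is worth checking that the faster version still produces exactly the same output as the original. A minimal sanity check, assuming the two functions defined above are available in the current session, is to compare their results for a small bound before timing them.
\begin{lstlisting}
# Both implementations should agree for a small bound.
>>> pythagorean_triples_slow(100) == pythagorean_triples_fast(100)
True
\end{lstlisting}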
\begin{lstlisting}
<g<In [6]:>g> <p<%time>p> triples = pythagorean_triples_slow(500)
<<CPU times: user 1min 51s, sys: 389 ms, total: 1min 51s
Wall time: 1min 52s>>                   # 112 seconds.

<g<In [7]:>g> <p<%time>p> triples = pythagorean_triples_fast(500)
<<CPU times: user 1.56 s, sys: 5.38 ms, total: 1.57 s
Wall time: 1.57 s>>                     # 98.6% faster!
\end{lstlisting}

\begin{problem}
The following function computes the first $N$ prime numbers.

\begin{lstlisting}
def primes(N):
    """Compute the first N primes."""
    primes_list = []
    current = 2
    while len(primes_list) < N:
        isprime = True
        for i in range(2, current):     # Check for nontrivial divisors.
            if current % i == 0:
                isprime = False
        if isprime:
            primes_list.append(current)
        current += 1
    return primes_list
\end{lstlisting}

This function takes about 6 minutes to find the first 10,000 primes on a fast computer.
Without significantly modifying the approach, rewrite \li{primes()} so that it can compute 10,000 primes in under 0.1 seconds.
Use the following facts to reduce unnecessary iterations.
\begin{itemize}
    \item A number is not prime if it has one or more divisors other than 1 and itself.
    \\(Hint: recall the \li{break} statement.)
    \item If $p\nmid n$, then $ap\nmid n$ for any integer $a$.
    Also, if $n = pq$ with $p \le q$, then $p \le \sqrt{n}$, so any composite $n$ has a nontrivial divisor that is at most $\sqrt{n}$.
    \item Except for $2$, primes are always odd.
\end{itemize}
Your new function should be helpful for solving problem 7 on \url{https://projecteuler.net}.
\label{prob:profiling-primes-naive}
\end{problem}

\subsection*{Avoid Loops} % ---------------------------------------------------
% Most repetition occurs in a looping structure.
% \textbf{Avoid loops where possible, especially nested loops} (loops within loops).
% If nested loops are unavoidable, focus optimization efforts on the innermost loop, since that part of the code gets the most repetitions.
NumPy routines and built-in functions are often useful for eliminating loops altogether. %, a process called \emph{vectorization}.
Consider the simple problem of summing the rows of a matrix, implemented in three ways.

\begin{lstlisting}
>>> def row_sum_awful(A):
...     """Sum the rows of A by iterating through rows and columns."""
...     m,n = A.shape
...     row_totals = np.empty(m)        # Allocate space for the output.
...     for i in range(m):              # For each row...
...         total = 0
...         for j in range(n):          # ...iterate through the columns.
...             total += A[i,j]
...         row_totals[i] = total       # Record the total.
...     return row_totals
...
>>> def row_sum_bad(A):
...     """Sum the rows of A by iterating through rows."""
...     return np.array([sum(A[i,:]) for i in range(A.shape[0])])
...
>>> def row_sum_fast(A):
...     """Sum the rows of A with NumPy."""
...     return np.<<sum>>(A, axis=1)        # Or A.sum(axis=1).
...
\end{lstlisting}

None of the functions are fundamentally different, but their run times differ dramatically.

\begin{lstlisting}
<g<In [8]:>g> import numpy as np

<g<In [9]:>g> A = np.random.random((10000, 10000))

<g<In [10]:>g> <p<%time>p> rows = row_sum_awful(A)
<<CPU times: user 22.7 s, sys: 137 ms, total: 22.8 s
Wall time: 23.2 s>>                     # SLOW!

<g<In [11]:>g> <p<%time>p> rows = row_sum_bad(A)
<<CPU times: user 8.85 s, sys: 15.6 ms, total: 8.87 s
Wall time: 8.89 s>>                     # Slow!

<g<In [12]:>g> <p<%time>p> rows = row_sum_fast(A)
<<CPU times: user 61.2 ms, sys: 1.3 ms, total: 62.5 ms
Wall time: 64 ms>>                      # Fast!
\end{lstlisting}

In this experiment, \li{row_sum_fast()} runs several hundred times faster than \li{row_sum_awful()}.
This is primarily because looping is expensive in Python, but NumPy handles loops in C, which is much quicker. Other NumPy functions like \li{np.<<sum>>()} with an \li{axis} argument can often be used to eliminate loops in a similar way. \begin{problem} % Naive Nearest Neighbor with vectorization. Let $A$ be an $m\times n$ matrix with columns $\a_0, \ldots, \a_{n-1}$, and let $\x$ be a vector of length $m$. The \emph{nearest neighbor problem}\footnote{The nearest neighbor problem is a common problem in many fields of artificial intelligence. The problem can be solved more efficiently with a $k$-d tree, a specialized data structure for storing high-dimensional data.} is to determine which of the columns of $A$ is ``closest'' to $\x$ with respect to some norm. That is, we compute \[\underset{j}{\text{argmin }} \|\a_j - \x\|.\] The following function solves this problem na\"ively for the usual Euclidean norm. \begin{lstlisting} def nearest_column(A, x): """Find the index of the column of A that is closest to x.""" distances = [] for j in range(A.shape[1]): distances.append(np.linalg.norm(A[:,j] - x)) return np.argmin(distances) \end{lstlisting} Write a new version of this function without any loops or list comprehensions, using array broadcasting and the \li{axis} keyword in \li{np.linalg.norm()} to eliminate the existing loop. Try to implement the entire function in a single line. \\(Hint: See the NumPy Visual Guide in the Appendix for a refresher on array broadcasting.) Profile the old and new versions with \li{<p<\%prun>p>} and compare the output. Finally, use \li{<p<\%time>p>} or \li{<p<\%timeit>p>} to verify that your new version runs faster than the original. \end{problem} \subsection*{Use Data Structures Correctly} % --------------------------------- Every data structure has strengths and weaknesses, and choosing the wrong data structure can be costly. Here we consider three ways to avoid problems and use sets, dictionaries, and lists correctly. \begin{itemize} \item \textbf{Membership testing}. The question ``is \li{<value>} a member of \li{<container>}'' is common in numerical algorithms. Sets and dictionaries are implemented in a way that makes this a trivial problem, but lists are not. In other words, the \li{in} operator is near instantaneous with sets and dictionaries, but not with lists. \begin{lstlisting} <g<In [13]:>g> a_list = list(range(int(1e7))) <g<In [14]:>g> a_set = set(a_list) <g<In [15]:>g> <p<%timeit>p> 12.5 in a_list <<413 ms +- 48.2 ms per loop (mean+-std.dev. of 7 runs, 1 loop each)>> <g<In [16]:>g> <p<%timeit>p> 12.5 in a_set <<170 ns +- 3.8 ns per loop (mean+-std.dev. of 7 runs, 10000000 loops each)>> \end{lstlisting} Looking up dictionary values is also almost immediate. Use dictionaries for storing calculations to be reused, such as mappings between letters and numbers or common function outputs. \item \textbf{Construction with comprehension}. Lists, sets, and dictionaries can all be constructed with comprehension syntax. This is slightly faster than building the collection in a loop, and the code is highly readable. % TODO (?): map(). \begin{lstlisting} # Map the integers to their squares. 
<g<In [17]:>g> <p<%%time>p> <g<...:>g> a_dict = {} <g<...:>g> for i in range(1000000): <g<...:>g> a_dict[i] = i**2 <g<...:>g> <<CPU times: user 432 ms, sys: 54.4 ms, total: 486 ms Wall time: 491 ms>> <g<In [18]:>g> <p<%time>p> a_dict = {i:i**2 for i in range(1000000)} <<CPU times: user 377 ms, sys: 58.9 ms, total: 436 ms Wall time: 440 ms>> \end{lstlisting} \item \textbf{Intelligent iteration}. Unlike looking up dictionary values, indexing into lists takes time. Instead of looping over the indices of a list, loop over the entries themselves. When indices and entries are both needed, use \li{enumerate()} to get the index and the item simultaneously. \begin{lstlisting} <g<In [19]:>g> a_list = list(range(1000000)) <g<In [20]:>g> <p<%%time>p> # Loop over the indices of the list. <g<...:>g> for i in range(len(a_list)): <g<...:>g> item = a_list[i] <g<...:>g> <<CPU times: user 103 ms, sys: 1.78 ms, total: 105 ms Wall time: 107 ms>> <g<In [21]:>g> <p<%%time>p> # Loop over the items in the list. <g<...:>g> for item in a_list: <g<...:>g> _ = item <g<...:>g> <<CPU times: user 61.2 ms, sys: 1.31 ms, total: 62.5 ms Wall time: 62.5 ms>> # Almost twice as fast as indexing! \end{lstlisting} % <g<In [X]:>g> <p<%%time>p> # Use enumerate() to get both indices and items. % <g<...:>g> for i, item in enumerate(a_list): % <g<...:>g> _ = item % <g<...:>g> % <<CPU times: user 92.5 ms, sys: 1.58 ms, total: 94.1 ms % Wall time: 94.4 ms>> # Still slightly faster than indexing. % \end{lstlisting} \end{itemize} \begin{comment} % USELESS Second, swap values with a single assignment. \begin{lstlisting} >>> a, b = 1, 2 >>> a, b = b, a >>> a, b (2, 1) \end{lstlisting} Third, many non-Boolean objects in Python have truth values. For example, numbers are \li{False} when equal to zero and \li{True} otherwise. Similarly, lists and strings are \li{False} when they are empty and \li{True} otherwise. The following code gives some examples. \begin{lstlisting} # Use the truth values of numbers. >>> if 10: ... print("Non-zero") ... Non-zero # Use the truth values of a list. >>> my_list = [i for i in range(5)] >>> if my_list: ... print(my_list[0]) ... 0 \end{lstlisting} \end{comment} \begin{problem} % Name scores. This is problem 22 from \url{https://projecteuler.net}. Using the rule $A\mapsto 1, B\mapsto 2, \ldots, Z\mapsto 26$, the \emph{alphabetical value} of a name is the sum of the digits that correspond to the letters in the name. For example, the alphabetic value of ``COLIN'' is $3 + 15 + 12 + 9 + 14 = 53$. The following function reads the file \texttt{names.txt}, containing over five-thousand first names, and sorts them in alphabetical order. The \emph{name score} of each name in the resulting list is the alphabetic value of the name multiplied by the name's position in the list, starting at 1. ``COLIN'' is the 938th name alphabetically, so its name score is $938 \times 53 = 49714$. The function returns the total of all the name scores in the file. 
\begin{lstlisting} def name_scores(filename="names.txt"): """Find the total of the name scores in the given file.""" with open(filename, 'r') as infile: names = sorted(infile.read().replace('"', '').split(',')) total = 0 for i in range(len(names)): name_value = 0 for j in range(len(names[i])): alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" for k in range(len(alphabet)): if names[i][j] == alphabet[k]: letter_value = k + 1 name_value += letter_value total += (names.index(names[i]) + 1) * name_value return total \end{lstlisting} Rewrite this function---removing repetition, eliminating loops, and using data structures correctly---so that it runs in less than 10 milliseconds on average. \end{problem} \subsection*{Use Generators} % ------------------------------------------------ A \emph{generator} is an iterator that yields multiple values, one at a time, as opposed to returning a single value. For example, \li{range()} is a generator. Using generators appropriately can reduce both the run time and the spatial complexity of a routine. Consider the following function, which constructs a list containing the entries of the sequence $\{x_n\}_{n=1}^N$ where $x_{n} = x_{n-1} + n$ with $x_1 = 1$. \begin{lstlisting} >>> def sequence_function(N): ... """Return the first N entries of the sequence x_n = x_{n-1} + n.""" ... sequence = [] ... x = 0 ... for n in range(1, N+1): ... x += n ... sequence.append(x) ... return sequence ... >>> sequence_function(10) [1, 3, 6, 10, 15, 21, 28, 36, 45, 55] \end{lstlisting} A potential problem with this function is that all of the values in the list are computed before anything is returned. This can be a big issue if the parameter $N$ is large. A generator, on the other hand, \emph{yields} one value at a time, indicated by the keyword \li{yield} (instead of \li{return}). When the generator is asked for the next entry, the code resumes right where it left off. % The only visible difference between a generator and a function is the use of \li{yield} in place of \li{return}. % In the following example, note that \li{sequence_generator()} does not keep track of the entire sequence like \li{sequence_function()} does. \begin{lstlisting} >>> def sequence_generator(N): ... """Yield the first N entries of the sequence x_n = x_{n-1} + n.""" ... x = 0 ... for n in range(1, N+1): ... x += n ... yield x # "return" a single value. ... # Get the entries of the generator one at a time with next(). >>> generated = sequence_generator(10) >>> next(generated) 1 >>> next(generated) 3 >>> next(generated) 6 # Put each of the generated items in a list, as in sequence_function(). >>> list(sequence_generator(10)) # Or [i for i in sequence_generator(10)]. [1, 3, 6, 10, 15, 21, 28, 36, 45, 55] # Use the generator in a for loop, like range(). >>> for entry in sequence_generator(10): ... print(entry, end=' ') ... 1 3 6 10 15 21 28 36 45 55 \end{lstlisting} Many generators, like \li{range()} and \li{sequence_generator()}, only yield a finite number of values. However, generators can also continue yielding indefinitely. For example, the following generator yields the terms of $\{x_n\}_{n=1}^\infty$ forever. In this case, using \li{enumerate()} with the generator is helpful for tracking the index $n$ as well as the entry $x_n$. \begin{lstlisting} >>> def sequence_generator_forever(): ... """Yield the sequence x_n = x_{n-1} + n forever.""" ... x = 0 ... n = 1 ... while True: ... x += n ... n += 1 ... yield x # "return" a single value. ... # Sum the entries of the sequence until the sum exceeds 1000. 
>>> total = 0 >>> for i, x in enumerate(sequence_generator_forever()): ... total += x ... if total > 1000: ... print(i) # Print the index where the total exceeds. ... break # Break out of the for loop to stop iterating. ... 17 # Check that 18 terms are required (since i starts at 0 but n starts at 1). >>> print(sum(sequence_generator(17)), sum(sequence_generator(18))) 969 1140 \end{lstlisting} \begin{warn} % Use xrange() in Python 2. In Python 2.7 and earlier, \li{range()} is \textbf{not} a generator. Instead, it constructs an entire list of values, which is often significantly slower than yielding terms individually as needed. If you are using old versions of Python, use \li{xrange()}, the equivalent of \li{range()} in Python 3.0 and later. \end{warn} \begin{problem} % Fibonacci sequence. This is problem 25 from \url{https://projecteuler.net}. The \emph{Fibonacci sequence} is defined by the recurrence relation $F_{n} = F_{n-1} + F_{n-2}$, where $ F_1 = F_2 = 1$. The 12th term, $F_{12} = 144$, is the first term to contain three digits. Write a generator that yields the terms of the Fibonacci sequence indefinitely. Next, write a function that accepts an integer $N$. Use your generator to find the first term in the Fibonacci sequence that contains $N$ digits. Return the index of this term. \\(Hint: a generator can have more than one \li{yield} statement.) \end{problem} % See \url{https://docs.python.org/3/tutorial/classes.html#generators} for more about generators. % and \url{https://wiki.python.org/moin/Generators} \begin{problem} % Sieve of Eratosthenes. The function in Problem \ref{prob:profiling-primes-naive} could be turned into a prime number generator that yields primes indefinitely, but it is not the only strategy for yielding primes. The \emph{Sieve of Eratosthenes}\footnote{See \url{https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes}.} is a faster technique for finding all of the primes below a certain number. \begin{enumerate} \item Given a cap $N$, start with all of the integers from $2$ to $N$. \item Remove all integers that are divisible by the first entry in the list. \label{step:profiling-sieve-of-eratos} \item Yield the first entry in the list and remove it from the list. \item Return to step \ref{step:profiling-sieve-of-eratos} until the list is empty. \end{enumerate} Write a generator that accepts an integer $N$ and that yields all primes (in order, one at a time) that are less than $N$ using the Sieve of Eratosthenes. Your generator should be able to find all primes less than 100,000 in under $5$ seconds. Your generator and your fast function from Problem \ref{prob:profiling-primes-naive} may be helpful in solving problems 10, 35, 37, 41, 49, and 50 (for starters) of \url{https://projecteuler.net}. \end{problem} \section*{Numba} % ============================================================ Python code is simpler and more readable than many languages, but Python is also generally much slower than compiled languages like C. The \li{numba} module %\footnote{Numba is \textbf{not} part of the standard library, but it is included in the Anaconda distribution. For installation details, see \url{https://numba.pydata.org/}.} bridges the gap by using \emph{just-in-time} (JIT) compilation to optimize code, meaning that the code is actually compiled right before execution. \begin{lstlisting} >>> from numba import jit >>> @jit # Decorate a function with @jit to use Numba. ... def row_sum_numba(A): ... """Sum the rows of A by iterating through rows and columns, ... 
optimized by Numba. ... """ ... m,n = A.shape ... row_totals = np.empty(m) ... for i in range(m): ... total = 0 ... for j in range(n): ... total += A[i,j] ... row_totals[i] = total ... return row_totals \end{lstlisting} Python is a \emph{dynamically typed} language, meaning variables are not defined explicitly with a datatype (\li{x = 6} as opposed to \li{int x = 6}). This particular aspect of Python makes it flexible, easy to use, and slow. % One of the reasons compiled languages like C are so much faster than Python is because they have explicitly defined datatypes. Numba speeds up Python code primarily by assigning datatypes to all the variables. Rather than requiring explicit definitions for datatypes, Numba attempts to infer the correct datatypes based on the datatypes of the input. In \li{row_sum_numba()}, if \li{A} is an array of integers, Numba will infer that \li{total} should also be an integer. On the other hand, if \li{A} is an array of floats, Numba will infer that \li{total} should be a \emph{double} (a similar datatype to float in C). Once all datatypes have been inferred and assigned, the original Python code is translated to machine code. % by the LLVM library. Numba caches this compiled version of code for later use. The first function call takes the time to compile and then execute the code, but subsequent calls use the already-compiled code. \begin{lstlisting} <g<In [22]:>g> A = np.random.random((10000, 10000)) # The first function call takes a little extra time to compile first. <g<In [23]:>g> <p<%time>p> rows = row_sum_numba(A) <<CPU times: user 408 ms, sys: 11.5 ms, total: 420 ms>> Wall time: 425 ms # Subsequent calls are consistently faster that the first call. <g<In [24]:>g> <p<%timeit>p> row_sum_numba(A) <<138 ms +- 1.96 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)>> \end{lstlisting} Note that the only difference between \li{row_sum_numba()} and \li{row_sum_awful()} from a few pages ago is the \li{@jit} decorator, and yet the Numba version is about 99\% faster than the original! The inference engine within Numba does a good job, but it's not always perfect. Adding the keyword argument \li{nopython=True} to the \li{@jit} decorator raises an error if Numba is unable to convert each variable to explicit datatypes. The \li{inspect_types()} method can also be used to check if Numba is using the desired types. \begin{lstlisting} # Run the function once first so that it compiles. >>> rows = row_sum_numba(np.random.random((10,10))) >>> row_sum_numba.inspect_types() # The output is very long and detailed. \end{lstlisting} Alternatively, datatypes can be specified explicitly in the \li{@jit} decorator as a dictionary via the \li{<<locals>>} keyword argument. Each of the desired datatypes must also be imported from Numba. \begin{lstlisting} >>> from numba import int64, double >>> @jit(nopython=True, <<locals>>=dict(A=double[:,:], m=int64, n=int64, ... row_totals=double[:], total=double)) ... def row_sum_numba(A): # 'A' is a 2-D array of doubles. ... m,n = A.shape # 'm' and 'n' are both integers. ... row_totals = np.empty(m) # 'row_totals' is a 1-D array of doubles. ... for i in range(m): ... total = 0 # 'total' is a double. ... for j in range(n): ... total += A[i,j] ... row_totals[i] = total ... return row_totals ... \end{lstlisting} While it sometimes results in a speed boost, there is a caveat to specifying the datatypes: \li{row_sum_numba()} no longer accepts arrays that contain anything other than floats. 
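For instance, a quick sanity check of this caveat might look like the following (assuming the explicitly typed \li{row_sum_numba()} from the previous listing has been defined); the exact exception and message depend on the Numba version.

\begin{lstlisting}
>>> A_int = np.arange(100).reshape((10, 10))   # An array of integers.
>>> try:
...     row_sum_numba(A_int)                   # Expected to be rejected, since
... except Exception as e:                     # only double[:,:] is listed in
...     print("Rejected:", type(e).__name__)   # the explicit signature.
...
\end{lstlisting}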
When datatypes are not specified, Numba compiles a new version of the function each time the function is called with a different kind of input. Each compiled version is saved, so the function can still be used flexibly. \begin{problem} % Compare times for Numba. The following function calculates the $n$th power of an $m\times m$ matrix $A$. \begin{lstlisting} def matrix_power(A, n): """Compute A^n, the n-th power of the matrix A.""" product = A.copy() temporary_array = np.empty_like(A[0]) m = A.shape[0] for power in range(1, n): for i in range(m): for j in range(m): total = 0 for k in range(m): total += product[i,k] * A[k,j] temporary_array[j] = total product[i] = temporary_array return product \end{lstlisting} \begin{enumerate} \item Write a Numba-enhanced version of \li{matrix_power()} called \li{matrix_power_numba()}. \item Write a function that accepts an integer $n$. Run \li{matrix_power_numba()} once with a small random input so it compiles. Then, for $m=2^2,2^3,\ldots,2^7$, \begin{enumerate} \item Generate a random $m\times m$ matrix $A$ with \li{np.random.random()}. \item Time (separately) \li{matrix_power()}, \li{matrix_power_numba()}, and NumPy's \\ \li{np.linalg.matrix_power()} on $A$ with the specified value of $n$. \\(If you are unfamiliar with timing code inside of a function, see the \\ Additional Material section on timing code.) \end{enumerate} Plot the times against the size $m$ on a log-log plot (use \li{plt.loglog()}). \end{enumerate} With $n=10$, the plot should show that the Numba and NumPy versions far outperform the pure Python implementation, with NumPy eventually becoming faster than Numba. % NumPy takes products of matrices by calling BLAS and LAPACK, which are heavily optimized linear algebra libraries written in C, assembly, and Fortran. \end{problem} \begin{warn} Optimizing code is an important skill, but it is also important to know when to refrain from optimization. The best approach to coding is to write unit tests, implement a solution that works, test and time that solution, \textbf{then} (and only then) optimize the solution with profiling techniques. As always, the most important part of the process is choosing the correct algorithm to solve the problem. Don't waste time optimizing a poor algorithm. \end{warn} \newpage \section*{Additional Material} % ============================================== \subsection*{Other Timing Techniques} % --------------------------------------- Though \li{<p<\%time>p>} and \li{<p<\%timeit>p>} are convenient and work well, some problems require more control for measuring execution time. The usual way of timing a code snippet by hand is via the \li{time} module (which \li{<p<\%time>p>} uses). The function \li{time.time()} returns the number of seconds since the Epoch\footnote{See \url{https://en.wikipedia.org/wiki/Epoch_(reference_date)\#Computing}.}; to time code, measure the number of seconds before the code runs, the number of seconds after the code runs, and take the difference. \begin{lstlisting} >>> import time >>> start = time.time() # Record the current time. >>> for i in range(int(1e8)): # Execute some code. ... pass ... end = time.time() # Record the time again. ... print(end - start) # Take the difference. ... 4.20402193069458 # (seconds) \end{lstlisting} The \li{timeit} module (which \li{<p<\%timeit>p>} uses) has tools for running code snippets several times. The code is passed in as a string, as well as any setup code to be run before starting the clock. 
\begin{lstlisting}
>>> import timeit
>>> timeit.timeit("for i in range(N): pass", setup="N = int(1e6)", number=200)
4.884839255013503               # Total time in seconds to run the code 200 times.
>>> _ / 200
0.024424196275067516            # Average time in seconds.
\end{lstlisting}

The primary advantages of these techniques are the ability to automate timing code and to save the results.
For more documentation, see \url{https://docs.python.org/3.6/library/time.html} and \url{https://docs.python.org/3.6/library/timeit.html}.

\subsection*{Customizing the Profiler} % --------------------------------------

The output from \li{<p<\%prun>p>} is generally long, but it can be customized with the following options.

\begin{table}[H]
\centering
\begin{tabular}{l|l}
Option & Description \\
\hline
\li{-l <limit>} & Include a limited number of lines in the output.\\
\li{-s <key>} & Sort the output by call count, cumulative time, function name, etc. \\
\li{-T <filename>} & Save profile results to a file (results are still printed).\\
\end{tabular}
\end{table}

For example, \li{<p<\%prun>p> -l 3 -s ncalls -T path_profile.txt max_path()} generates a profile of \li{max_path()} that lists the 3 functions with the most calls, then writes the results to \texttt{path\_profile.txt}.
See \url{http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-prun} for more details.
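Returning to the timing techniques above, when the timing needs to happen inside an ordinary function (as in the Numba problem), the \li{time} bookkeeping can be wrapped in a small helper.
The context manager below is only one possible sketch built on the standard library (\li{contextlib} and \li{time.perf_counter()}); it is not a built-in utility.

\begin{lstlisting}
>>> from contextlib import contextmanager
>>> import time

>>> @contextmanager
... def timer(label=""):
...     """Print the elapsed wall-clock time of a with block."""
...     start = time.perf_counter()        # A high-resolution clock.
...     try:
...         yield
...     finally:
...         print(label, time.perf_counter() - start, "seconds")
...
>>> with timer("short loop:"):
...     for i in range(int(1e6)):
...         pass
\end{lstlisting}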
\documentclass[10pt]{report} \usepackage{subcaption} % for subfigures \usepackage{amsthm} % for QED %\usepackage{algpseudocode} % for pseudo-code \usepackage{mathtools} % for delimiter \usepackage{listings} % for code \lstset{ language=R, basicstyle=\ttfamily, numbers=none, stepnumber=1, numbersep=8pt, showspaces=false, showstringspaces=false, showtabs=false, frame=single, tabsize=2, captionpos=t, breaklines=true, breakatwhitespace=false } \usepackage{float} % for figure [H] \usepackage{booktabs} % for tabular \usepackage{caption} % for \caption* \usepackage[export]{adjustbox} % for valign=t \usepackage{array} % for column type m \usepackage{verbatim} \usepackage{graphicx} %\graphicspath{ {imgs/} } \usepackage{fancyhdr} \usepackage{amssymb} \usepackage{amsmath} %%%%%% Pagination \setlength{\topmargin}{-.3 in} \setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{0in} \setlength{\textheight}{9.in} \setlength{\textwidth}{6.5in} %Title page \newcommand{\hwTitle}{Homework \#1} \newcommand{\hwCourse}{Applied Statistics/Regression} \newcommand{\hmwkClassInstructor}{Professor Lulu Kang} \title{ \vspace{2in} \textmd{\textbf{\hwCourse\\\hwTitle}}\\ \vspace{0.3in}\large{\textit{\hmwkClassInstructor}} \vspace{3in} } %\title{Homework 1} \author{\textbf{Zhihao Ai}} \date{} %Header setting. \pagestyle{fancy} \fancyhead[L]{Zhihao Ai} \fancyhead[C]{Math 484} \fancyhead[R]{Homework 1} %%%%%% %Global setting. %\everymath{\displaystyle} \setlength\parindent{0pt} %Custom general commands. \newcommand{\ds}{\displaystyle} \newcommand{\ts}{\textstyle} \newcommand{\f}[1] {f\left(#1\right)} \newcommand{\eva}[2] {\left. #1 \right|_{#2}} \newcommand{\dintt}[4] {\int_{#1}^{#2} #3 d#4} \newcolumntype{N}{ >$ c <$} \newcolumntype{M}[1]{>{\centering\arraybackslash $}m{#1}<{$}} \newcommand{\abs}[1] {\left| #1 \right|} \DeclarePairedDelimiter\autoparen{(}{)} \newcommand{\pa}[1]{\autoparen*{#1}} \DeclarePairedDelimiter\autodvert{\Vert}{\Vert} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \newcommand{\norm}[1]{\autodvert*{#1}} \newcommand{\var}{\text{var}} \begin{document} \maketitle \subsection*{Ex 1.27} \begin{enumerate} \item [a.] The estimated regression function is $\hat{Y} = -1.19x + 156.35$. \lstinputlisting{27a.txt} The plot of the function and the data is shown below: \begin{figure}[H] \centering \includegraphics[width=.6\linewidth]{27a.png} \end{figure} This function appears to give a good fit and the plot supports the anticipation. \item [b.] \begin{enumerate} \item [(1)] A point estimate of the difference in the mean muscle mass for women differing in age by one year is $\hat{\beta}_1 = -1.19$. \item [(2)] A point estimate of the mean muscle mass for women aged $X=60$ years is $-1.19(60) + 156.35 = 84.95$. \item [(3)] The value of the residual for the eighth case, where $X=41$ and $Y=112$ is $e_8 = Y_8 - \hat{Y}_8 = 112 - [-1.19(41) + 156.35] = 4.44$. \item [(4)] A point estimate of $\sigma^2$ is $\hat{\sigma}^2 = \frac{RSS}{n-2} = \frac{3874.4}{58} = 66.8$. \lstinputlisting{27b4.txt} \end{enumerate} \end{enumerate} \subsection*{Ex 1.28} \begin{enumerate} \item [a.] The estimated regression function is $\hat{Y} = -170.6x + 20517.6$. \lstinputlisting{28a.txt} The plot of the function and the data is shown below: \begin{figure}[H] \centering \includegraphics[width=.6\linewidth]{28a.png} \end{figure} This function generally gives not a very good but acceptable fit since the correlation between $X$ and $Y$ here is not that strong. \item [b.] 
\begin{enumerate}
\item [(1)] A point estimate of the difference in the mean crime rate for two counties whose high-school graduation rates differ by one percentage point is $\hat{\beta}_1 = -170.6$.
\item [(2)] A point estimate of the mean crime rate last year in counties with high school graduation percentage $X=80$ is $-170.6(80) + 20517.6 = 6869.6$.
\item [(3)] A point estimate of $\epsilon_{10}$, where $X=82$ and $Y=7932$, is $e_{10} = Y_{10} - \hat{Y}_{10} = 7932 - [-170.6(82) + 20517.6] = 1403.6$.
\item [(4)] A point estimate of $\sigma^2$ is $\hat{\sigma}^2 = \frac{RSS}{n-2} = \frac{455273165}{82} = 5552112$.
\lstinputlisting{28b4.txt}
\end{enumerate}
\end{enumerate}

\subsection*{Ex 1.39}
\begin{enumerate}
\item [a.] Let $Y_{i,1}$ and $Y_{i,2}$ be the observations on $Y$ whose mean is $\bar{Y}_i, i=1,2,3$. For the six points,
\begin{align*}
\hat{\beta}_1^* &= \frac{S_{xy}^*}{S_{xx}^*}\\
&= \frac{\sum_{i=1}^{3} (x_i - \bar{x})(y_{i,1}-\bar{y}) + (x_i - \bar{x})(y_{i,2}-\bar{y}) }{\sum_{i=1}^{3} (x_i - \bar{x})^2 + (x_i - \bar{x})^2}\\
&= \frac{\sum_{i=1}^{3} 2(x_i - \bar{x})(\frac{y_{i,1} + y_{i,2}}{2}-\bar{y}) }{2\sum_{i=1}^{3}(x_i - \bar{x})^2}\\
&= \frac{\sum_{i=1}^{3} (x_i - \bar{x})(\bar{Y}_i-\bar{y})}{\sum_{i=1}^{3}(x_i - \bar{x})^2}\\
&= \frac{S_{xy}}{S_{xx}}
\end{align*}
where $S_{xy}$ and $S_{xx}$ are computed from the three points. So $\hat{\beta}_1$ is the same for the three-point and six-point cases. Since the mean values $\bar{x}$ and $\bar{y}$ are also the same in the two cases, $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$ is the same as well. Therefore, the least squares regression lines are identical.
\item [b.] Since $RSS = S_{yy} - \frac{S_{xy}^2}{S_{xx}}$ and $\hat{\sigma}^2 = \frac{RSS}{n-2}$, we can directly calculate the estimate of $\sigma^2$ without fitting a regression line.
\end{enumerate}

\subsection*{Ex 1.42}
\begin{enumerate}
\item [a.]
\[
L(\beta_1) = \prod_{i=1}^{6} f(y_i | \beta_1, \sigma^2) = \pa{\frac{1}{4\sqrt{2\pi}}}^6 \exp\pa{-\frac{1}{32} \sum_{i=1}^{6} (y_i - \beta_1 x_i)^2}
\]
\item [b.] $L(17) = 9.45133\times 10^{-30}, L(18) = 2.64904\times 10^{-7}, L(19) = 3.04729\times 10^{-37}$. The likelihood function is largest at $\beta_1 = 18$.
\item [c.] $b_1 = \sum_{i=1}^{6} X_i Y_i / \sum_{i=1}^{6} X_i^2 = 17.9285$. The result in part (b) is close to this estimate.
\item [d.] The plot of the likelihood function is shown below.
\begin{figure}[H]
	\centering
	\includegraphics[width=.5\linewidth]{42d.png}
\end{figure}
The point where the likelihood function is maximized corresponds to what was found in part (c).
\end{enumerate}

{\large\bf Show that $\ds \sum_{i=1}^{n} e_i x_i = 0$ and $\ds \sum_{i=1}^{n} e_i = 0$.}
\begin{align*}
\sum_{i=1}^{n} e_i x_i &= \sum_{i=1}^{n} x_i[y_i - (\bar{y} - \hat{\beta}_1 \bar{x}) - \hat{\beta}_1 x_i]\\
&= \sum_{i=1}^{n} x_i y_i - \bar{y}\sum_{i=1}^{n}x_i + \hat{\beta}_1 \bar{x} \sum_{i=1}^{n} x_i - \hat{\beta}_1 \sum_{i=1}^{n} x_i^2\\
&= \sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y} + \frac{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}{\sum_{i=1}^{n} x_i^2 - n\bar{x}^2}\pa{n\bar{x}^2 - \sum_{i=1}^{n} x_i^2}\\
&= \sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y} - \pa{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}\\
&= 0\\
\sum_{i=1}^{n} e_i &= \sum_{i=1}^{n} [y_i - (\bar{y} - \hat{\beta}_1 \bar{x}) - \hat{\beta}_1 x_i]\\
&= \sum_{i=1}^{n} y_i - n\bar{y} + \hat{\beta}_1 n\bar{x} - \hat{\beta}_1 \sum_{i=1}^{n} x_i\\
&= n\bar{y} - n\bar{y} + \hat{\beta}_1 n\bar{x} - \hat{\beta}_1 n\bar{x}\\
&= 0
\end{align*}

{\large\bf Show that $\hat{\beta}_1 \sim N(\beta_1, \sigma^2/\sum_{i=1}^{n} (x_i - \bar{x})^2)$.}
\[
	\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{S_{xx}} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})y_i}{S_{xx}} - \frac{\bar{y}\sum_{i=1}^{n} (x_i - \bar{x})}{S_{xx}} = \frac{\sum_{i=1}^{n} (x_i-\bar{x})y_i}{S_{xx}}
\]
Let $k_i = \frac{x_i - \bar{x}}{S_{xx}}, i=1,\dots,n$, so that $\hat{\beta}_1 = \sum_{i=1}^{n} k_i y_i$. Since the $k_i$ possess the properties
\begin{align*}
\sum_{i=1}^{n} k_i &= 0\\
\sum_{i=1}^{n} k_i x_i &= 1\\
\sum_{i=1}^{n} k_i^2 &= \frac{1}{S_{xx}}
\end{align*}
we can derive that
\begin{align*}
E(\hat{\beta}_1) &= E\pa{\sum_{i=1}^{n} k_i Y_i} = \sum_{i=1}^{n} k_i E(Y_i) = \sum_{i=1}^{n} k_i (\beta_0 + \beta_1 x_i) = \beta_0 \sum_{i=1}^{n} k_i + \beta_1 \sum_{i=1}^{n} k_i x_i = \beta_1\\
\var(\hat{\beta}_1) &= \var\pa{\sum_{i=1}^{n} k_i Y_i} = \sum_{i=1}^{n} k_i^2 \var(Y_i) = \sum_{i=1}^{n} k_i^2\sigma^2 = \frac{\sigma^2}{S_{xx}}
\end{align*}
Since the $Y_i$'s are assumed to be normally distributed with mean $\mu_i$ and variance $\sigma_i^2$, the moment-generating function is given by
\[
	m_{Y_i}(t) = \exp\pa{\mu_i t + \frac{\sigma_i^2 t^2}{2}}
\]
Then
\begin{align*}
m_{\hat{\beta}_1}(t) &= \prod_{i=1}^n m_{k_i Y_i}(t)\\
&= \prod_{i=1}^n \exp\pa{\mu_i k_i t + \frac{k_i^2 \sigma_i^2 t^2}{2}}\\
&= \exp\pa{t\sum_{i=1}^{n} k_i \mu_i + \frac{t^2}{2} \sum_{i=1}^{n} k_i^2 \sigma_i^2}
\end{align*}
Therefore, $\hat{\beta}_1$ is also normally distributed. Thus,
\[
	\hat{\beta}_1 \sim N\pa{\beta_1, \sigma^2/\sum_{i=1}^{n} (x_i - \bar{x})^2}
\]

\end{document}
\documentclass[11pt, oneside]{article}
\usepackage{indentfirst, hyperref, geometry, amsmath, amssymb, algorithm, CJKutf8}
\usepackage[noend]{algpseudocode}
\usepackage[cache=false]{minted}
\usepackage{CJKutf8}

\geometry{a4paper}
\hypersetup{
  colorlinks=true,
  urlcolor=cyan
}

\makeatletter
\AtBeginEnvironment{minted}{\dontdofcolorbox}
\def\dontdofcolorbox{\renewcommand\fcolorbox[4][]{##4}}
\makeatother

\title{Matricization of the Rubik's Cube}
\author{Stephen Huan}

\begin{document}
\maketitle

\section{Rationale}
Apparently the Rubik's cube is an ``algebraic group'' (which I will not pretend to understand), and has certain mathematical properties.
The most important, in my opinion, is \textit{non-commutativity}, or when \( a \times b \neq b \times a \).
For a simple example, note that applying the sequence ``R U'' does not give the same result as applying ``U R''.
Such ``non-commutative algebra'' is closely associated with things like Heisenberg's matrix mechanics (a formulation of quantum mechanics mathematically equivalent to Schrödinger's wave mechanics) and Hamilton's quaternions.

In my previous lecture I argued that a cubie-wise approach was simpler and more abstract than a sticker-wise approach.
However, my cubie approach is 3x9x3, making it a tensor.
Contrary to what Google wants you to believe, tensors are difficult to work with and do not ``flow''.
A sticker-wise approach can be thought of as a 6x9 matrix, and moves as transformation matrices, such that if \( S \) is a cube state and \( R \) a possible turn of the cube, then \( S' = S R \).
The advantages are numerous.
Solving the cube becomes a matrix factorization problem - trying to decompose a single matrix, which represents a complex transition from scrambled to solved, into the product of many move matrices.
Matrix multiplication is a well-optimized operation in many different linear algebra libraries and is trivially parallelized.

\section{Example}
Suppose \( A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \) is a cube state and \( B = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} \) is the cube state after an R move.
We are looking for a matrix \( X \) such that \( A X = B \).
Multiplying both sides on the left by \( A' \) yields \( A' A X = A' B \), which means \( X = A' B \).
After computing the inverse, \( X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \).
\( X \) can be thought of as a 2-swap which swaps the two entries of the first row as well as those of the second row (equivalently, it swaps the columns of \( A \)).
Computing powers, \( AX^2 = A \), \( AX^3 = AX \), etc.

However, if \( B = \begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix} \), solving for \( X \) yields \( X = \begin{bmatrix} -1 & 2 \\ 1.5 & -0.5 \end{bmatrix} \).
By definition \( A X = B \), however \( A X^2 \neq A \), which would be expected of a swap.
Further exponentiation of \( X \) results in gibberish.
This can be interpreted as the limitation of ``learning'' a transformation matrix: it cannot learn a single swap, only a two-swap.

The analogy to cubing is clear.
Suppose we had a transformation matrix \( R \) which applies an R-move to a given state of the cube \( S \).
R2 would literally be \( R^2 \).
R' would be \( R^3 \) and R4 would be \( R^4 \) as well as the identity matrix.

\section{Experimentation}
As stated previously, a stickermap is 6x9, a non-square matrix.
Therefore a traditional inverse does not exist, and more general methods must be used.
In particular I tried the Moore-Penrose inverse as well as the two one-sided inverses.
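For concreteness, both the \( 2 \times 2 \) example above and these generalized inverses can be checked with a few lines of NumPy.
Note the \( 6 \times 9 \) matrix below is only a randomly filled stand-in for a sticker map, not an actual cube state; for a matrix with full row rank, the Moore-Penrose pseudoinverse coincides with the right inverse.
\begin{minted}{python}
import numpy as np

# The 2x2 example: X = A^{-1} B is a two-swap, so applying it twice restores A.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[2., 1.], [4., 3.]])
X = np.linalg.inv(A) @ B
print(np.allclose(A @ X @ X, A))            # True

# A non-square 6x9 "stickermap" needs a generalized inverse.
S = np.random.rand(6, 9)                    # random stand-in, full row rank
S_pinv  = np.linalg.pinv(S)                 # Moore-Penrose pseudoinverse
S_right = S.T @ np.linalg.inv(S @ S.T)      # right inverse: S @ S_right = I_6
print(np.allclose(S @ S_right, np.eye(6)))  # True
print(np.allclose(S_pinv, S_right))         # True in the full row rank case
\end{minted}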
\subsection{Solved Cube}
Starting with the solved cube given by:
\[
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\
4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 \\
5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5
\end{bmatrix}
\]
and an R-turn on the solved cube given by:
\[
\begin{bmatrix}
0 & 0 & 5 & 0 & 0 & 5 & 0 & 0 & 5 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
4 & 2 & 2 & 4 & 2 & 2 & 4 & 2 & 2 \\
3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\
4 & 4 & 0 & 4 & 4 & 0 & 4 & 4 & 0 \\
5 & 5 & 2 & 5 & 5 & 2 & 5 & 5 & 2
\end{bmatrix}
\]
using Srikar Gouru's \href{https://github.com/srikarg89/RubiksCubeSolver-JS}{code}
(\begin{CJK}{UTF8}{maru}
ありがとう
\end{CJK}!).
I first computed the Moore-Penrose inverse but had no idea what to do with it.
I then attempted to compute both one-sided inverses, but for a one-sided inverse to exist, the rank of the matrix has to be the maximum possible for its size.
A property of rank is that \( \text{rank}(A) = \text{rank}(A^T) \), or that the column rank is equal to the row rank.
In this case, the maximum rank is 6.
Since each row of the matrix is linearly dependent on each other row and each column is linearly dependent on each other column, the rank of the solved cube is 1.

\subsection{Random Permutation}
To rectify the rank problem I permuted each \( (i, j) \to (i', j') \) randomly.
\begin{minted}{python}
def transform(mapping, m):
    new = [[0]*9 for i in range(6)]
    for i in range(6):
        for j in range(9):
            x, y = mapping[(i, j)]
            new[i][j] = m[x][y]
    return np.array(new)

def gen_random(m=solved):
    indexes = [(i, j) for i in range(6) for j in range(9)]
    trans = {}
    for i in range(6):
        for j in range(9):
            trans[(i, j)] = random.choice(indexes)
            indexes.remove(trans[(i, j)])
    return transform(trans, m), trans
\end{minted}
To verify the invertibility of the new matrices, I wrote a function.
\begin{minted}{python}
def inversable(m, tol=2):
    m = np.array(m)
    if np.linalg.matrix_rank(m) == 6:
        return (round(np.linalg.det(m.T @ m), tol) != 0,
                round(np.linalg.det(m @ m.T), tol) != 0)
    return (False,)*2
\end{minted}
It turned out that the left inverse did not exist.
The right inverse is defined by \( A_r = A^T (A A^T)^{-1} \), with the result that \( A A_r = A A^T (A A^T)^{-1} = I_6 \).
Given \( A \) as the solved cube state and \( B \) the cube after an R-turn, \( A X = B \), so \( X = A_r B \) is the transformation matrix.
However, repeated application of \( X \) was meaningless.
Also, the permutation was completely ad hoc and mathematically meaningless.

\subsection{Distinct Swaps}
Recalling the simple 2x2 example, certain swaps are possible and certain other swaps are impossible.
I realized I could not possibly learn from the transition solved state \( \to \) one R-move away, because only 12 stickers move (the entire red face is unchanged).
I would therefore need a 20 sticker change, the maximum possible.
To find such a state I defined the function rdiff, which returns the number of stickers that change before and after doing an R move, and applied my BFS defined in my earlier lecture.
\begin{minted}{python}
c = cube.Cube()
states, alg = cube.solve(c, (None, lambda c: rdiff(c) >= 20), cube.HTM)
c.turn(alg)
print(rdiff(c))
print(inversable(c.to_face()))

a = np.array(c.to_face())
c.turn("R")
ap = np.array(c.to_face())

ar = a.T @ np.linalg.inv(a @ a.T)
R = ar @ ap
print(a @ R)
\end{minted}
As a result of finding a 20 sticker difference I no longer needed to permute the matrix.
However, the transition matrix remains meaningless.
\section{Future Work}
Perhaps transitions cannot be represented by matrices and have to be represented by tensors or quaternions.
Whatever the case, the mathematical representation must be noncommutative.
This may also be a fairly useless avenue of research.
Various vectorization approaches and neural networks are to be tried soon!

\end{document}
\documentclass[12pt]{article} \input{preamble.tex} \title{My paper title} \begin{document} \renewcommand{\onlyinsubfile}[1]{} %% for use with subfiles \renewcommand{\notinsubfile}[1]{#1} %% for use with subfiles \maketitle \begin{center} {\small\color{lgray}A list of authors and their affiliations can be found at the end of the manuscript.}\end{center} \thispagestyle{empty} \medskip \begin{abstract} An abstract \end{abstract} \section{Introduction} An introduction contains words. Often it will have citations, as well \cite{grillner, poldrack2014making}. \begin{figure}[h!] \begin{cframed} \centering \includegraphics[width=\textwidth]{./figs/example_fig.pdf} \caption{Figures get pretty frames.} \label{fig:sic} \end{cframed} \end{figure} %% subfiles are included like this. \subfile{sub-file.tex} \section{Methods} Often we do things, these things can be called methods\footnote{or procedures, perhaps}. % Tables \ref{tab:hurdles} also get pretty frames. \paragraph{Woah} Paragraphs are bolded. \begin{table}[h!] \begin{cframed}[lgray] \centering \caption{see! I'm framed!.} \begin{tabular}{ l Q Q L } \hline \textbf{columns} & \textbf{are} & \textbf{often} & \textbf{bold} \\ \hline \hline and & contain & useful & data \\ \hline \end{tabular} \makeatletter \let\@currsize\normalsize \label{tab:hurdles} \end{cframed} \end{table} \paragraph{Another bold para} because I can. \cite{ndmg} \section{Results} We made science happen. \subsection{yes, we have subsections too} words can even go under them. \section{Discussion} The world is better because YOU are in it \cite{openconnectome}. \subsection*{Author Information} {\normalsize Gregory~Kiar$^{1,2}$, person 2$^{3}$, Randal~Burns$^{4}$, Joshua~T.~Vogelstein$^{1,2}$ } {\small \noindent Corresponding Author: Joshua T. Vogelstein $<$\url{[email protected]}$>$} \begin{spacing}{0.3} {\normalsize \noindent${^1}$Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA.\\ ${^2}$Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA.\\ ${^3}$Department of Herbology, Hogwards School of Witchcraft and Wizardry, Hogwarts, UK.\\ ${^4}$Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.} \end{spacing} \subsection*{Declarations} \paragraph{Competing Interests} The authors declare no competing interests in this manuscript. \bibliographystyle{IEEEtran} \begin{spacing}{0.5} {\footnotesize \bibliography{example}} \end{spacing} % \clearpage \appendix \renewcommand\thesection{Appendix~\Alph{section}} \section{thought you were done?} \renewcommand\thesection{\Alph{section}} ya.... appendices are also a thing. \subsection{with sub sections} yup. \subsubsection{and sub sub sections} sorry. \theendnotes \end{document}
\documentclass[11pt,a4paper,titlepage]{beamer} \usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{amssymb} \usepackage{sidecap} \usepackage{hyperref} \usepackage{siunitx} \usepackage{booktabs} \usepackage{animate} %\usepackage{multimedia} \usepackage{selinput} \usepackage{media9} \usepackage{tikz} \usepackage{pgfplots} \usepackage{subcaption} \usepackage{appendixnumberbeamer} \usepackage[ backend=biber, % use modern biber backend autolang=hyphen, % load hyphenation rules for if language of bibentry is not style=numeric, % german, has to be loaded with \setotherlanguages sorting=none % in the references.bib use langid={en} for english sources ]{biblatex} \addbibresource{references.bib} % the bib file to use \DefineBibliographyStrings{german}{andothers = {{et\,al\adddot}}} % replace u.a. with et al. \useoutertheme{infolines2} \colorlet{structure}{green!50!black} \definecolor{tugreen}{RGB}{132,184,24} \definecolor{tugrey}{RGB}{178,179,182} \definecolor{tured}{RGB}{205,0,47} \setbeamercolor{palette primary}{bg=tugreen,fg=black} \setbeamercolor{palette secondary}{bg=tugrey!50!tugreen,fg=black} \setbeamercolor{palette quaternary}{fg=black, bg=tugrey} \setbeamercolor{caption name}{fg=tugreen} \setbeamercolor{palette tertiary}{fg=black,bg=tugrey} \setbeamercolor{palette compare}{bg=white!80!tugreen,fg=black} \setbeamercolor{palette misc}{bg=white!80!tugreen,fg=black} \setbeamercolor{palette white}{bg=white!99!black,fg=black} \setbeamercolor{itemize item}{fg=tugreen} \setbeamercolor{itemize subitem}{fg=tugreen} \setbeamercolor{itemize subsubitem}{fg=tugreen} \setbeamercolor{enumerate item}{fg=tugreen} \setbeamercolor{enumerate subitem}{fg=tugreen} \setbeamertemplate{itemize item}[square] \setbeamercolor{title}{fg=black, bg=tugreen} \setbeamercolor{frametitle}{fg=black, bg=tugreen} \setbeamertemplate{navigation symbols}{} \setbeamertemplate{frametitle}[default][center] \setbeamertemplate{titlepage}{ \begin{center} \begin{beamercolorbox}[rounded=true, shadow=true, center,ht=0.75cm]{title} \begin{center} \usebeamerfont{title}\inserttitle \end{center} \end{beamercolorbox} \vspace{1cm} \usebeamerfont{subtitle}\insertsubtitle \\ \vspace{1cm} \usebeamerfont{author}\insertauthor \\ \vspace{1cm} \insertinstitute \usebeamerfont{date}\insertdate \begin{flushleft}\hspace*{4cm}%\includegraphics[width=5cm]{logos/tu.pdf}\end{flushleft} \end{center} } \author{Jonah Blank} \title{Indirect search for Dark Matter} \date{28.11.2019} \institute{TU Dortmund} \begin{document} %\begin{frame}[plain] %\begin{tikzpicture}[remember picture,overlay] %\node[at=(current page.center)] { %%\includegraphics[width=1.2\paperwidth,height=\paperheight]{build/coma.pdf} %}; %\end{tikzpicture} %\end{frame} \begin{frame} \titlepage \end{frame} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { %\includegraphics[width=0.65\paperwidth,height=0.65\paperheight]{build/dir_ind_col.png} }; \end{tikzpicture} \footnotetext[1]{inspirehep.net} \end{frame} \begin{frame} \tableofcontents \end{frame} \section{Hints for Dark Matter(DM)} %\begin{frame}{Hints for Dark Matter(DM)} %gravitational effect: %\begin{itemize} %\item existence of galaxy clusters %\begin{itemize} %\item Virial: $E_\text{kin}=-\left(\frac{1}{2}\right)\cdot E_\text{grav}$ %\item $E_\text{grav}$ way too small assuming only visible matter %\end{itemize}\vfill %\item galaxy rotation: velocity of outer rim %\begin{itemize} %\item distance $r$ from galactic center: 
$v_\text{rot}\propto\frac{1}{\sqrt{r}}$\vfill %\item measured: $v_\text{rot}\approx$ const %\end{itemize}\vfill %\item fluctuations in cosmic microwave background (CMB) %\begin{itemize} %\item fluctuation of hot matter in early universe\vfill %\item gravity $\leftrightarrow$ radiation pressure %\end{itemize} %\end{itemize} %\end{frame} \setbeamercolor{normal text}{fg=white,bg=black} \renewcommand\footnoterule{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { %\includegraphics[width=1.2\paperwidth,height=\paperheight]{build/coma.pdf} }; \end{tikzpicture} \footnotetext[2]{arXiv:1604.00014} \end{frame} \renewcommand{\footnoterule}{% \kern -5pt \hrule width \textwidth \kern 2pt } \setbeamercolor{normal text}{fg=black,bg=white} \begin{frame}{Hints for Dark Matter(DM)} gravitational effect: \begin{itemize} \item existence of galaxy clusters \begin{itemize} \item Virial: $\bar{E}_\text{kin}=-\left(\frac{1}{2}\right)\cdot \bar{E}_\text{grav}$\vfill \item $E_\text{grav}$ way too small assuming only visible matter \end{itemize}\vfill\pause \item galaxy rotation: velocity of outer rim \begin{itemize} \item distance $r$ from galactic center: $v_\text{rot}\propto\frac{1}{\sqrt{r}}$\vfill \item measured: $v_\text{rot}\approx$ const \end{itemize}\vfill\pause \item the cosmic microwave background (CMB) \begin{itemize} \item oscillation of hot matter in early universe\vfill \item gravity $\leftrightarrow$ radiation pressure \end{itemize} \end{itemize} \end{frame} \begin{frame}{Hints for Dark Matter(DM)} \begin{figure} \begin{minipage}{0.45\textwidth} fluctuations in CMB: \begin{itemize} \item oscillation of hot matter in early universe\vfill \item gravity $\leftrightarrow$ radiation pressure\vfill \item after cooling down red/blue shift due to gravitational field\\ $\rightarrow$ power spectrum \end{itemize} \end{minipage} \begin{minipage}{0.53\textwidth} %\includegraphics[keepaspectratio,width=0.9\textwidth]{build/cmb.pdf} \end{minipage} \end{figure} \footnotetext[3]{Integrierter Kurs Physik 3, Prof. Tolan, Prof. 
Stolze, WS17/18, TU Dortmund} \end{frame} \begin{frame}{The CMB power spectrum - DM dependency} \begin{figure} \begin{subfigure}{0.45\textwidth} %\includegraphics[width=0.68\textwidth]{build/clmatter/clmatter-001.png} %\includegraphics[width=0.68\textwidth]{build/clmatter/clmatter-015.png} \end{subfigure} \begin{subfigure}{0.45\textwidth}\pause %\includegraphics[width=0.75\textwidth]{build/clmatter/clmatter-004.png}\pause \begin{itemize} \item spectrum changes for different DM densities \item peak ratio and decay rate to calculate amount of DM \end{itemize} \end{subfigure}\ \end{figure} \footnotetext[4]{www.uchicago.edu} \end{frame} \begin{frame}{The CMB power spectrum - Planck measurement} \begin{figure} %\includegraphics[width=0.9\textwidth]{build/power.pdf} \end{figure} \footnotetext[5]{arXiv:1303.5075} \end{frame} \begin{frame}{The energy density distribution} \begin{figure} \begin{subfigure}{0.45\textwidth} %\includegraphics[width=0.9\textwidth]{build/power.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} %\includegraphics[width=0.8\textwidth]{build/pie.pdf} \end{subfigure} \end{figure}\medskip \begin{itemize} \item only $5\%$ of energy in the universe explained by normal matter\medskip \item 5x more DM than baryonic matter\medskip \only<2>{\item But what \textbf{is} dark matter?} \end{itemize} \footnotetext[6]{arXiv:1303.5075} \footnotetext[7]{Studies of dark matter annihilation and production in the Universe,\\ Carl Niblaeus (2019)} \end{frame} \setbeamercolor{normal text}{fg=white,bg=black} \renewcommand\footnoterule{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { %\includegraphics[width=1\paperwidth,height=\paperheight]{build/types.png} }; \end{tikzpicture} \footnotetext[8]{cf. physics.aps.org} \end{frame} \renewcommand{\footnoterule}{% \kern -3pt \hrule width \textwidth \kern 2pt } \setbeamercolor{normal text}{fg=black,bg=white} \section{Possible DM candidates} \begin{frame}{Possible DM candidates} \begin{itemize} \item sterile neutrinos\footnotemark[9] \begin{itemize} \item additional right-handed neutrino eigenstate (ES), mass $\mathcal{O}(\si{\kilo\eV})$ \medskip \item falling out of equilibrium at $T\geq m_{\nu} \rightarrow$ hot/warm thermal relic\medskip \item mixing of mass ES (PMNS matrix)$\rightarrow$ small share of active ES\medskip \item heavier (sterile) can decay into lighter (active) mass ES: $\nu_\text{s} \rightarrow \nu_\text{a} + \gamma$\medskip \item photon emission line at $\si{\kilo\eV}$ \end{itemize}\vfill \item super heavy gravitinos\footnotemark[10] \begin{itemize} \item 8 gravitinos, $m_\text{gr}\propto \SI{2e18}{\giga\eV} \rightarrow$ UHECR\medskip \item participating in strong and EM interaction, $\alpha_\text{s,em}\propto \mathcal{O}(1)$\medskip \item 2 color triplets - EM charge $\pm \frac{1}{3}$, 2 color singlets - EM charge $\pm \frac{2}{3}$\medskip \item SM interaction only through annihilation \end{itemize} \end{itemize} \footnotetext[9]{arXiv:1901.00151} \footnotetext[10]{arXiv:1906.07262} \end{frame} \begin{frame}{\textbf{W}eakly \textbf{I}nteracting \textbf{M}assive \textbf{P}articles} \begin{figure} \begin{minipage}{0.65 \textwidth} \begin{itemize} \item WIMPs \begin{itemize} \item weak scale masses $\rightarrow$ search in gamma and cosmic rays\medskip \item only interacting via weak interaction\medskip \item annihilation cross section $\langle \sigma_\text{A}v\rangle\approx \SI{3e-26}{\centi\meter^3\per\second}$\medskip \item thermally produced until \glqq freeze out\grqq\\ at 
$T<m_\text{WIMP}\rightarrow$ cold thermal relic\medskip \item non-relativistic DM forming structures in the universe\medskip \item $\Omega_\text{WIMP} = \frac{\rho_\text{WIMP}}{\rho_\text{c}}\propto 0.2$ \end{itemize} \end{itemize} \end{minipage} \begin{minipage}{0.34\textwidth} %\includegraphics[width=0.9\textwidth]{build/WIMP.pdf} \end{minipage} \end{figure} \footnotetext[11]{nasa.gov} \end{frame} \begin{frame}{The thermal equilibrium} \begin{minipage}{\textwidth} \centering %\includegraphics[width=0.5\textwidth]{build/equi.png} \end{minipage} \begin{minipage}{\textwidth} \begin{itemize} \item thermal equilibrium at $T\approx m_\text{DM}$\medskip \item moment of freeze out important for modern structure of the universe \item dependent on annihilation cross section \end{itemize} \end{minipage} \footnotetext[12]{arXiv:1812.02029} \end{frame} %\begin{frame}{Possible DM candidates} %\begin{itemize} %\item super heavy DM %\begin{itemize} %\item cosmologically stable, low interaction rate\medskip %\item not produced in thermodynamic equilibrium\medskip %\item $m_\text{heavy}\propto \SI{e12}{\giga\eV} \rightarrow$ searched for in Ultra-High-Energy Cosmic Rays (UHECR) %\end{itemize}\vfill %\end{frame} \setbeamercolor{normal text}{fg=white,bg=black} \renewcommand\footnoterule{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { %\includegraphics[width=1.2\paperwidth,height=\paperheight]{build/bullet.jpg} }; \end{tikzpicture} \footnotetext[13]{chandra.harvard.edu} \end{frame} \renewcommand{\footnoterule}{% \kern -3pt \hrule width \textwidth \kern 2pt } \setbeamercolor{normal text}{fg=black,bg=white} \section{Indirect searches for DM} \subsection{Cosmic sources} \begin{frame}{Cosmic sources for indirect detection} \begin{figure} \only<1>{\begin{minipage}{\textwidth} \begin{itemize} \item weak scale mass DM\\ $\rightarrow$ final states at SM and gamma ray energies\medskip \item equal amounts of anti-/matter with max energy $m_\text{DM}$ each\\ in DM annihilation\medskip \item photons experience no deflection in the universe\\ $\rightarrow$ spatial information to distinguish source \end{itemize} \end{minipage}}\vfill \begin{minipage}{\textwidth} \begin{small} \begin{align*} \Phi_\text{x,ann}(\Delta\Omega)&=\frac{1}{2}\frac{\mathrm{d}N_\text{x}}{\mathrm{d}E}\frac{\langle\sigma v \rangle}{4\pi m^2_\text{DM}}\cdot J_\text{ann}(\Delta\Omega)\\ J_\text{ann}(\Delta\Omega)&=\int_{\Delta\Omega}\int_\text{los}\rho^2_\text{DM}\mathrm{d}\ell\mathrm{d}\Omega \end{align*} \end{small} \end{minipage} \begin{minipage}{\textwidth} \centering \only<2>{%\includegraphics[width=0.8\textwidth]{build/j_fac.png}} \only<3>{%\includegraphics[width=0.8\textwidth]{build/j_fac2.png}} \end{minipage} \end{figure} \only<2-3>{\footnotetext[12]{arXiv:1812.02029}} \end{frame} \begin{frame}{Gamma Ray Detection: The Draco Spheroidal Dwarf Galaxy} \begin{figure} \begin{minipage}{0.48\textwidth} \begin{itemize} \item distance: 260,000 ly\medskip \item diameter: 3000 ly\medskip \item bright mass: 22,000,000 $M_{\odot}$\medskip \item $J_\text{ann}\propto\SI{e18.8}{\giga\eV^2\per\centi\meter^5}$ %$\rightarrow \Phi_\gamma\approx \SI{5e-12}{\centi\meter^{-2}\second^{-1}}\left(\frac{\langle\sigma v\rangle}{\SI{2.2e-26}{\centi\meter^3\per\second}}\right)\left(\frac{\int\frac{\mathrm{d}N_\gamma}{\mathrm{d}E_\gamma}\mathrm{d}E_\gamma}{10}\right)\left(\frac{\SI{100}{\giga\eV}}{m_\text{DM}}\right)^2\left(\frac{J}{\SI{e18.8}{\giga\eV^2\per\centi\meter^5}}\right)$ \end{itemize} \end{minipage} 
\begin{minipage}{0.5\textwidth} \centering %\includegraphics[width=0.75\textwidth]{build/m87_ell.png} \end{minipage} \begin{minipage}{\textwidth} \[ \rightarrow\Phi_\gamma\propto \SI{5e-12}{\centi\meter^{-2}\second^{-1}}\left(\frac{\SI{100}{\giga\eV}}{m_\text{DM}}\right)^2 \] \uncover<2>{\begin{itemize} \item Fermi Large Area Telescope effective area $\SI{0.85}{\meter^2}$\\ $\rightarrow$ 0.3 photons per year from DM annihilation detected \end{itemize}} \end{minipage} \end{figure} \footnotetext[14]{cornell.edu} \end{frame} \subsection{Current experiments} \begin{frame}{Ground based vs. orbiting telescopes} \begin{columns}[T] \begin{column}{0.49\textwidth} \begin{itemize} \item ground based telescopes \begin{itemize} \item capture only part of the sky\medskip \item have to take in account atmosphere\\ $\rightarrow$ only useful for high energy measurements\medskip \item bigger detectors for better resolution \end{itemize} \end{itemize} \hspace{1cm}%\includegraphics[width=0.9\textwidth]{build/CTA.png} \end{column} \begin{column}{0.49\textwidth} \begin{itemize} \item orbiting telescopes \begin{itemize} \item difficult maintenance\medskip \item limited resolution and size\medskip \item variable target of observation\medskip \item cover lower energy spectra \end{itemize} \end{itemize} \vspace{0.3cm} \hspace{0.1cm}%\includegraphics[width=0.9\textwidth]{build/Fermi_LAT.jpg} \end{column} \end{columns} \footnotetext[15]{cta-observatory.org} \footnotetext[16]{nasa.gov} \end{frame} \begin{frame}{Current experiments} \begin{figure} \begin{minipage}{0.45\textwidth} \only<1>{%\includegraphics[width=0.85\textwidth]{build/exp.pdf}} \only<2-3>{%\includegraphics[width=0.85\textwidth]{build/exp2.pdf}} \end{minipage}\pause \begin{minipage}{0.52\textwidth} \begin{itemize} \item Alpha Magnetic Spectrometer: long term experiment at ISS\medskip \pause \item AMS-02 collaboration presented new results in 2019 \end{itemize} \end{minipage}\vfill \begin{minipage}{\textwidth} \begin{itemize} \item confirming excess in high energetic positrons from previous measurements\medskip \item new: similar distribution of anti-protons \end{itemize} \end{minipage} \end{figure} \footnotetext[17]{arXiv:1604.00014} \end{frame} \begin{frame}{\textbf{A}lpha \textbf{M}agnetic \textbf{S}pectrometer} \begin{figure} \begin{minipage}{0.8\textwidth} %\includegraphics[keepaspectratio,width=0.9\textwidth]{build/ams_ges.pdf} \end{minipage} \begin{minipage}{\textwidth} \begin{itemize} \item internal data rate $\approx 7 \text{GB/s}$, transmission rate $\approx 2 \text{MB/s}$ \item 140 billion registered events since 2011 \end{itemize} \end{minipage} \end{figure} \footnotetext[18]{inspirehep.net} \end{frame} \begin{frame}{\textbf{A}lpha \textbf{M}agnetic \textbf{S}pectrometer} \begin{figure} \begin{minipage}{0.5\textwidth} %\includegraphics[width=0.8\textwidth]{build/ams_sch.pdf} \end{minipage} \begin{minipage}{0.48\textwidth} \begin{itemize} \item \textbf{T}ransition \textbf{R}adiation \textbf{D}etector: proton rejection\medskip \item magnet \& 9 layers silicon tracker: particle momentum\medskip \item \textbf{R}ing \textbf{I}maging \textbf{CH}erenkov: ion identification \& velocity measurement\medskip \item \textbf{T}ime \textbf{O}f \textbf{F}light: particle mass\medskip \item \textbf{E}lectromagnetic \textbf{Cal}orimeter: particle energy \end{itemize} \end{minipage} \end{figure} \footnotetext[19]{inspirehep.net} \end{frame} \setbeamercolor{normal text}{fg=white,bg=black} \renewcommand\footnoterule{} \begin{frame}[plain] 
\begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { %\includegraphics[keepaspectratio,width=\textwidth]{build/ams_table.pdf} }; \end{tikzpicture} \footnotetext[20]{EPS-HEP Conference 2019} \end{frame} \renewcommand{\footnoterule}{% \kern -3pt \hrule width \textwidth \kern 2pt } \setbeamercolor{normal text}{fg=black,bg=white} \begin{frame}{Positron excess in cosmic ray events} \begin{minipage}{\textwidth} %\includegraphics[keepaspectratio, width=0.75\textwidth]{build/positron.pdf} \end{minipage}\vfill \begin{minipage}{\textwidth} \begin{itemize} \item cosmic ray collisions only known origin of positrons \end{itemize} \end{minipage} \footnotetext[21]{DOI:10.1103/PhysRevLett.122.041102} \end{frame} \begin{frame}{Positron excess in cosmic ray events} \begin{minipage}{\textwidth} %\includegraphics[keepaspectratio,width=0.75\textwidth]{build/positron_unk.png} \end{minipage}\vfill \begin{minipage}{\textwidth} \begin{itemize} \item maximum at $\mathcal{O}(\si{\giga\eV})$, flat tail towards higher energies\\ $\rightarrow$ excess in $0.1$-$\SI{1}{\tera\eV}$ positrons with cutoff $E_\text{s}$ \end{itemize} \end{minipage} \footnotetext[20]{cf. EPS-HEP Conference 2019} \end{frame} \begin{frame}{Positron excess in cosmic ray events} \begin{minipage}{\textwidth} %\includegraphics[keepaspectratio,width=0.75\textwidth]{build/positron_src.pdf} \end{minipage}\vfill \begin{minipage}{\textwidth} \begin{itemize} \item maximum at $\mathcal{O}(\si{\giga\eV})$, flat tail towards higher energies\\ $\rightarrow$ excess in $0.1$-$\SI{1}{\tera\eV}$ positrons with cutoff $E_\text{s}$ \end{itemize} \end{minipage} \footnotetext[21]{DOI:10.1103/PhysRevLett.122.041102} \end{frame} \begin{frame}{Positron excess in cosmic ray events} \begin{minipage}{0.5\textwidth} %\includegraphics[keepaspectratio, width=0.9\textwidth]{build/positron_src.pdf} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{tiny} \[ \Phi_{\mathrm{e}^{+}}(E)=\frac{E^2}{\hat{E}^2}\left[C_\text{d}\left(\frac{\hat{E}}{E_\text{1}}\right)^{\gamma_\text{d}}+C_\text{s}\left(\frac{\hat{E}}{E_\text{2}}\right)^{\gamma_\text{s}}\mathrm{e}^{-\frac{\hat{E}}{E_\text{s}}}\right] \] \end{tiny} \begin{itemize} \item excess can be described adding a source term\medskip \item established at $99,99\%$ CL with $E_\text{s}= 810^{+310}_{-180} \si{\giga\eV}$ \end{itemize} \end{minipage}\vfill\pause \begin{minipage}{\textwidth} \begin{itemize} \item pulsars can produce high energy positrons, but without sharp $E_\text{S}$\medskip \item they do not produce anti-protons \end{itemize} \end{minipage} \end{frame} \begin{frame}{The anti-proton excess} \begin{minipage}{\textwidth} \centering %\includegraphics[keepaspectratio,width=0.75\textwidth]{build/anti_proton.png} \end{minipage} \begin{minipage}{\textwidth} \begin{itemize} \item anti-protons follow similar distribution\medskip \item could hint for DM annihilation:\\ equal production of matter/antimatter $\rightarrow$ sharp cut at $E\approx M_\text{DM}$ \end{itemize} \end{minipage} \footnotetext[20]{cf. 
EPS-HEP Conference 2019}
\end{frame}

\begin{frame}{Comparison of previous experiments}
\begin{minipage}{\textwidth}
\centering
%\includegraphics[keepaspectratio,width=0.75\textwidth]{build/results.png}
\end{minipage}\vfill
\begin{minipage}{\textwidth}
\begin{itemize}
\item confirming trends from LAT (2009), AMS-02 (2013), PAMELA (2016)\medskip
\item first measurement of the cutoff in $\si{\tera\eV}$ positrons
\end{itemize}
\end{minipage}
\footnotetext[21]{DOI:10.1103/PhysRevLett.122.041102}
\end{frame}

\section{Conclusion and outlook}
\begin{frame}{Conclusion and outlook}
\begin{minipage}{0.42\textwidth}
Conclusion:
\begin{itemize}
\item steady stream of new theories and scenarios\medskip
\item for now, no unambiguous observations\medskip
\item background is mostly unknown $\rightarrow$ other explanations possible
\end{itemize}
\end{minipage}
\begin{minipage}{0.57\textwidth}
%\includegraphics[keepaspectratio,width=0.75\textwidth]{build/expect.png}
\end{minipage}\vfill
\begin{minipage}{\textwidth}
Outlook:
\begin{itemize}
\item extension of the data-taking period for AMS-02 $\rightarrow$ reduction of uncertainties\medskip
\item new upcoming experiments with better resolution: GAMMA-400, CTA
\end{itemize}
\end{minipage}
\footnotetext[20]{EPS-HEP Conference 2019}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.7587206493, "avg_line_length": 35.8717504333, "ext": "tex", "hexsha": "e8257266d39ef091a90f229ab947ce970d6e2a05", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58857be88e7a3d3a00bbf253ae0cd1a7d80a670a", "max_forks_repo_licenses": [ "Beerware" ], "max_forks_repo_name": "syhon/BSM_Vortrag", "max_forks_repo_path": "Jonah/DM.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58857be88e7a3d3a00bbf253ae0cd1a7d80a670a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Beerware" ], "max_issues_repo_name": "syhon/BSM_Vortrag", "max_issues_repo_path": "Jonah/DM.tex", "max_line_length": 365, "max_stars_count": null, "max_stars_repo_head_hexsha": "58857be88e7a3d3a00bbf253ae0cd1a7d80a670a", "max_stars_repo_licenses": [ "Beerware" ], "max_stars_repo_name": "syhon/BSM_Vortrag", "max_stars_repo_path": "Jonah/DM.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7046, "size": 20698 }
\hypertarget{generate-with}{%
\section{generate with}\label{generate-with}}

Run

\begin{verbatim}
R CMD Sweave --pdf Docs4+MD+R+book.Rnw
\end{verbatim}

or

\begin{verbatim}
R -e "rmarkdown::render('Docs4+MS+R.md', output_file='test.html')"
\end{verbatim}

or

\begin{verbatim}
Rscript -e "rmarkdown::render('Docs4+MS+R.md')"
Rscript -e "library(knitr); knit('./Docs4+MS+R+book.Rnw')"
# pdflatex Docs4+MS+R.tex
\end{verbatim}

\hypertarget{doc4-r-demo}{%
\section{doc4 R demo}\label{doc4-r-demo}}

On Linux you might first need to update the system environment used by R:

\begin{verbatim}
sudo apt-get -u dist-upgrade
\end{verbatim}

\begin{verbatim}
4*222
\end{verbatim}

Another chunk:

\begin{verbatim}
```{r test-a, eval=TRUE}
1 + 1
strsplit('hello world', ' ')
```
\end{verbatim}

In R, also install the dependencies \texttt{stringi}, \texttt{stringr}, \texttt{digest} and \texttt{rlang} (this applies to both the Rmd and the Rnw workflow).

\begin{verbatim}
sudo apt-get install software-properties-common
apt install --no-install-recommends software-properties-common dirmngr
\end{verbatim}

\begin{verbatim}
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
# https://www.tuxamito.com/wiki/index.php/Installing_newer_GCC_versions_in_Ubuntu
sudo apt-get install gcc-8 g++-8
sudo update-alternatives --remove-all gcc
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 90 --slave /usr/bin/g++ g++ /usr/bin/g++-8
sudo ulimit -s 16384 && sudo R -e 'install.packages("stringi");'
conda install -c conda-forge r-bookdown
\end{verbatim}

\begin{verbatim}
install.packages("stringi")
install.packages("bookdown")
install.packages("knitr")
knitr::write_bib(x = c("knitr"), file = "test.bib")
\end{verbatim}

Remember that there are some R versioning issues: on R 3.2 you cannot run
\texttt{install.packages("rmarkdown", dep = TRUE)},
\texttt{rmarkdown::pandoc\_version()}, nor
\texttt{knitr::write\_bib(x = c("knitr", "rmarkdown"), file = "test.bib")}.

With bash:

\begin{verbatim}
```{bash, engine.opts='-l'}
echo $PATH
```
\end{verbatim}

\hypertarget{vars}{%
\section{Vars}\label{vars}}

\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.25}}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.20}}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.20}}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.36}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
--variable
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
--metadata
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
YAML metadata and --metadata-file
\end{minipage} \\
\midrule
\endhead
values can be\ldots{} & strings and bools & strings and bools & also YAML objects and lists \\
strings are\ldots{} & inserted verbatim & escaped & interpreted as markdown \\
accessible by filters: & no & yes & yes \\
\bottomrule
\end{longtable}

Figures and tables with captions will be placed in \texttt{figure} and \texttt{table} environments, respectively. For example:

\begin{verbatim}
```{r nice-fig, fig.cap='Here is a nice figure!', out.width='80%', fig.asp=.75, fig.align='center'}
par(mar = c(4, 4, .1, .1))
plot(pressure, type = 'b', pch = 19)
```
\end{verbatim}

Refs: see Figure @ref(fig:nice-fig). You can write citations, too. For example, we are using the \textbf{bookdown} package {[}@R-bookdown{]} together with R Markdown and \textbf{knitr}. A complete minimal example is given at the end of this note.

\hypertarget{brmp}{%
\section{BRMP}\label{brmp}}

https://itsm.zone/samples/BRM.pdf
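\hypertarget{minimal-rmd-example}{%
\section{minimal Rmd example}\label{minimal-rmd-example}}

To tie the pieces above together, here is a minimal, hypothetical \texttt{.Rmd} sketch (the file name, chunk label and output format are placeholders, not taken from the notes above). It can be rendered with \texttt{R -e "rmarkdown::render('minimal.Rmd')"}:

\begin{verbatim}
---
title: "Minimal example"
output: bookdown::pdf_document2
bibliography: test.bib
---

```{r nice-fig, fig.cap='Here is a nice figure!', fig.asp=.75}
plot(pressure, type = 'b', pch = 19)
```

See Figure \@ref(fig:nice-fig); we cite **knitr** with [@R-knitr].
\end{verbatim}

\texttt{bookdown::pdf\_document2} is chosen here because the plain \texttt{rmarkdown} PDF format does not resolve \texttt{@ref()} cross-references, and the \texttt{R-knitr} key matches the entry written into \texttt{test.bib} by \texttt{knitr::write\_bib()} above.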
{ "alphanum_fraction": 0.7192242833, "avg_line_length": 32.0540540541, "ext": "tex", "hexsha": "c73e0c93da1a661364bce75bb079d60921c9965c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cae5d44d621888bfc6eaf7825f7f4a0f452af874", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "infchg/docs-md", "max_forks_repo_path": "Docs4+MS+R.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cae5d44d621888bfc6eaf7825f7f4a0f452af874", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "infchg/docs-md", "max_issues_repo_path": "Docs4+MS+R.tex", "max_line_length": 330, "max_stars_count": null, "max_stars_repo_head_hexsha": "cae5d44d621888bfc6eaf7825f7f4a0f452af874", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "infchg/docs-md", "max_stars_repo_path": "Docs4+MS+R.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1166, "size": 3558 }
\chapter{Conclusion}
\label{ch:Conclusion}
This thesis addressed the question of whether grid-based weather information provides a benefit in energy forecasting. The answer may not appear entirely clear, but it is yes: there definitely is a benefit, even though not in the way anticipated. The expected behaviour was that filtering for the most populated regions and using those grid points would improve the forecast result, but it actually worsened it. This could be observed for a forecast using the huge number of 1435 exogenous variables, one for each grid point. It turned out that this not only greatly increases the computation time, but also has a very negative impact on the accuracy of the forecast. Still, an improvement can be seen when using \eg averaged temperature data, which is a noteworthy benefit of grid-based data, as the average temperature can be computed for any desired composition of grid points (a minimal sketch of this compression step is given at the end of this chapter). It also needs to be mentioned that this subject has not yet been directly addressed by any of the related works found. Previous works mainly focused on the actual forecast and how to improve it, rather than on which sort of data actually provides beneficial behaviour in terms of forecasting.

Further research is needed in order to figure out how grid-based data can be compressed optimally, so that the number of variables is limited to a small enough amount, or generalizes well enough, to avoid over-fitting. Another possibility would be to use economic activity for filtering specific grid points, but the data still has to be compressed in order to improve generalization. Further, forecasts with an increased time scope could be used to evaluate the temporal range for which the current weather improves forecasts of the short-term future energy demand. Completely different models could also be tested, such as \gls{rnn}, which are particularly suitable for time series forecasting. In the end, there is an additional assumption to be made, which is that in the near future weather may have an even higher influence on energy consumption due to the current energy transition and possibly resulting outcomes such as \gls{dsm}. This assumption emphasizes the importance of this research topic and also the practical usefulness of the results of this thesis.\\
%This provides an interesting subject for further research. But other methods could be considered for further research as well, such as filtering grid points by economic activity. Another possibility would be the use of completely different models such as \gls{rnn}, which are particularly suitable for time series forecasting.\\
%This is a conclusion. It is fine because it is small and nice.\\
%By evaluating the benefit of using grid-based weather information in energy forecasting
%In the end, it can be said, that in contrast to other works presented in \Cref{ch:RW}, the subject
%It needs to be clarified, that in contrast to most of the presented works, this thesis uses reanalysed data from \gls{ecmwf} as weather predictions which means, that the forecasts might behave differently from forecasts in other works as what here is assumed to be a weather forecast is more accurate than usually. This also means that results from this thesis may not exactly match results using the same procedure with real-time data.\\
%Repeat the problem and its relevance, as well as the contribution (plus quantitative results).
%Look back at what you have written in the introduction.
%Provide an outlook for further research steps.
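As a purely illustrative sketch of the compression step mentioned above (object names and data are hypothetical placeholders, not taken from the thesis implementation), averaging a gridded temperature field over a chosen set of grid points reduces the 1435 exogenous variables to a single regressor:

\begin{verbatim}
# Sketch in R: rows = hours, columns = grid points (placeholder data).
set.seed(1)
temp_grid <- matrix(rnorm(168 * 1435, mean = 10, sd = 3), nrow = 168)
load      <- rnorm(168, mean = 60, sd = 5)   # placeholder demand series
region    <- 1:200                           # any desired set of grid points
t_avg     <- rowMeans(temp_grid[, region])   # one averaged exogenous variable
fit       <- arima(load, order = c(1, 0, 0), xreg = t_avg)
\end{verbatim}

Any other composition of grid points, \eg filtered by population or economic activity, only changes the \texttt{region} index set.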
{ "alphanum_fraction": 0.8090014065, "avg_line_length": 197.5, "ext": "tex", "hexsha": "485cda36ed7b64718b053b09486c976afc63aef3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8d15df4d39c5f49b6b856fd584085c2db0263a2c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "maGitty/GISME", "max_forks_repo_path": "doc/sections/conclusion.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "8d15df4d39c5f49b6b856fd584085c2db0263a2c", "max_issues_repo_issues_event_max_datetime": "2021-06-02T00:49:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-02T00:49:09.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "maGitty/GISME", "max_issues_repo_path": "doc/sections/conclusion.tex", "max_line_length": 2278, "max_stars_count": null, "max_stars_repo_head_hexsha": "8d15df4d39c5f49b6b856fd584085c2db0263a2c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "maGitty/GISME", "max_stars_repo_path": "doc/sections/conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 700, "size": 3555 }
\documentclass[8pt,a4paper,landscape,oneside]{amsart} \usepackage{minted} \usepackage{amsmath, amsthm, amssymb, amsfonts} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{booktabs} \usepackage{caption} \usepackage{fancyhdr} \usepackage{float} \usepackage{fullpage} %\usepackage{geometry} % \usepackage[top=0pt, bottom=1cm, left=0.3cm, right=0.3cm]{geometry} \usepackage[top=3pt, bottom=1cm, left=0.3cm, right=0.3cm]{geometry} \usepackage{graphicx} % \usepackage{listings} \usepackage{subcaption} \usepackage[scaled]{beramono} \usepackage{titling} \usepackage{datetime} \usepackage{enumitem} \usepackage{multicol} \usepackage{bookmark} \usepackage{color} \usepackage{xcolor} \usepackage{soul} \usepackage{comment} \ifdefined\ICPCCONFIG \setcounter{tocdepth}{3} \else \setcounter{tocdepth}{1} \fi \newcommand{\subtitle}[1]{% \posttitle{% \par\end{center} \begin{center}\large#1\end{center} \vskip0.1em\vspace{-1em}}% } % Minted \newcommand{\code}[1]{\inputminted[fontsize=\normalsize,baselinestretch=1]{cpp}{_code/#1}} \newcommand{\bashcode}[1]{\inputminted{bash}{_code/#1}} \newcommand{\regcode}[1]{\inputminted{cpp}{code/#1}} % Header/Footer % \geometry{includeheadfoot} %\fancyhf{} \pagestyle{fancy} \lhead{Ateneo de Manila University} \rhead{\thepage} \cfoot{} \setlength{\headheight}{15.2pt} \setlength{\droptitle}{-20pt} \posttitle{\par\end{center}} \renewcommand{\headrulewidth}{0.4pt} \renewcommand{\footrulewidth}{0.4pt} % Math and bit operators \DeclareMathOperator{\lcm}{lcm} \newcommand*\BitAnd{\mathrel{\&}} \newcommand*\BitOr{\mathrel{|}} \newcommand*\ShiftLeft{\ll} \newcommand*\ShiftRight{\gg} \newcommand*\BitNeg{\ensuremath{\mathord{\sim}}} \DeclareRobustCommand{\stirling}{\genfrac\{\}{0pt}{}} \newcommand{\sectionRed}[1]{\section{\colorbox{red}{\color{white}#1}}} \newcommand{\subsectionRed}[1]{\subsection{\colorbox{red}{\color{white}#1}}} \newcommand{\subsubsectionRed}[1]{\subsubsection{\colorbox{red}{\color{white}#1}}} \newcommand{\sectionBlack}[1]{\section{\colorbox{black}{\color{white}#1}}} \newcommand{\subsectionBlack}[1]{\subsection{\colorbox{black}{\color{white}#1}}} \newcommand{\subsubsectionBlack}[1]{\subsubsection{\colorbox{black}{\color{white}#1}}} \newenvironment{myitemize} { \begin{itemize}[leftmargin=.5cm] \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} } { \end{itemize} } % Title/Author \ifdefined\TEAMNAME \title{\TEAMNAME{}} \subtitle{AdMU ProgVar} \else \title{Ateneo ProgVar} \fi \date{\ddmmyyyydate{\today{}}} % Output Verbosity \newif\ifverbose \verbosetrue % \verbosefalse \begin{document} \begin{multicols*}{3} \maketitle \thispagestyle{fancy} \vspace{-3em} % \addtocontents{toc}{\protect\enlargethispage{\baselineskip}} \tableofcontents % \clearpage \begin{comment} \section{Code Templates} \code{header.cpp} \end{comment} \input{tex/data-structures} \input{tex/dp} \input{tex/geometry} \input{tex/graphs} \input{tex/math} \input{tex/strings} \ifdefined\ICPCCONFIG \input{tex/other} \input{tex/useful_info} \input{tex/other_combinatorics} \fi \clearpage \end{multicols*} \end{document}
{ "alphanum_fraction": 0.7034566404, "avg_line_length": 24.9848484848, "ext": "tex", "hexsha": "65d709a6b537ba3ab6d60d13843b98d3f2715d8b", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-03-20T07:08:46.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-11T20:53:41.000Z", "max_forks_repo_head_hexsha": "2e2b204b2dd55ad23f8257f5a50f197ebd115f49", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "admu-progvar/progvar-library", "max_forks_repo_path": "notebook/notebook.tex", "max_issues_count": 19, "max_issues_repo_head_hexsha": "2e2b204b2dd55ad23f8257f5a50f197ebd115f49", "max_issues_repo_issues_event_max_datetime": "2022-03-30T07:14:59.000Z", "max_issues_repo_issues_event_min_datetime": "2021-11-27T14:40:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "admu-progvar/progvar-library", "max_issues_repo_path": "notebook/notebook.tex", "max_line_length": 91, "max_stars_count": 3, "max_stars_repo_head_hexsha": "2e2b204b2dd55ad23f8257f5a50f197ebd115f49", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "admu-progvar/progvar-library", "max_stars_repo_path": "notebook/notebook.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-29T22:03:44.000Z", "max_stars_repo_stars_event_min_datetime": "2021-10-16T13:22:58.000Z", "num_tokens": 1126, "size": 3298 }
%% arara directives % arara: xelatex % arara: bibtex % arara: xelatex % arara: xelatex %\documentclass{article} % One-column default \documentclass[twocolumn, switch]{article} % Method A for two-column formatting \usepackage{preprint} \usepackage{enotez} \let\footnote=\endnote %% Math packages %\usepackage{amsmath, amsthm, amssymb, amsfonts} %% Bibliography options %\usepackage[numbers,square]{natbib} %\bibliographystyle{unsrtnat} \usepackage{natbib} %\bibliographystyle{Geology} %% General packages \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts \usepackage{xcolor} % colors for hyperlinks \usepackage[colorlinks = true, linkcolor = purple, urlcolor = blue, citecolor = black, anchorcolor = black]{hyperref} % Color links to references, figures, etc. \usepackage{booktabs} % professional-quality tables \usepackage{nicefrac} % compact symbols for 1/2, etc. %\usepackage{microtype} % microtypography %\usepackage{lineno} % Line numbers \usepackage{float} % Allows for figures within multicol \usepackage{textcomp,marvosym} %\usepackage{multicol} % Multiple columns (Method B) %\usepackage{lipsum} % Filler text \usepackage{lettrine} \usepackage{pdflscape} \usepackage{longtable} \usepackage{threeparttablex} %% Special figure caption options \usepackage{newfloat} \DeclareFloatingEnvironment[name={Supplementary Figure}]{suppfigure} \usepackage{sidecap} \sidecaptionvpos{figure}{c} % Section title spacing options \usepackage{titlesec} \titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} %%%%%%%%%%%%%%%% Title %%%%%%%%%%%%%%%% \title{The Precambrian paleogeography of Laurentia} % Add watermark with submission status %\usepackage{xwatermark} %% Left watermark %\newwatermark[firstpage,color=gray!60,angle=90,scale=0.32, xpos=-4.05in,ypos=0]{\href{https://doi.org/}{\color{gray}{Publication doi}}} %% Right watermark %\newwatermark[firstpage,color=gray!60,angle=90,scale=0.32, xpos=3.9in,ypos=0]{\href{https://doi.org/}{\color{gray}{Preprint doi}}} % Bottom watermark %\newwatermark[firstpage,color=gray!90,angle=0,scale=0.28, xpos=0in,ypos=-5in]{*correspondence: \texttt{[email protected]}} %%%%%%%%%%%%%%% Author list %%%%%%%%%%%%%%% \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author[]{Nicholas L. Swanson-Hysell} \affil[]{Department of Earth and Planetary Science, University of California, Berkeley, CA 94720 USA} \setcounter{section}{4} %%%%%%%%%%%%%% Front matter %%%%%%%%%%%%%% \begin{document} \twocolumn[ % Method A for two-column formatting \begin{@twocolumnfalse} % Method A for two-column formatting \maketitle \begin{abstract} Laurentia is the craton that forms the Precambrian core of North America and was a major continent throughout the majority of the Proterozoic following its amalgamation 1.8 billion years ago. The paleogeographic position of Laurentia is key to the development of reconstructions of Proterozoic paleogeography including the Paleoproterozoic to Mesoproterozoic supercontinent Nuna and latest Mesoproterozoic to Neoproterozoic supercontinent Rodinia. There is a rich record of Precambrian paleomagnetic poles from Laurentia, as well as an extensive and well-documented geologic history of tectonism. These geologic and paleomagnetic records are increasingly better constrained geochronologically and are both key to evaluating and developing paleogeographic models. 
These data from Laurentia provide strong support for mobile lid plate tectonic processes operating continuously over the past 2.2 billion years.
\end{abstract}
%\keywords{First keyword \and Second keyword \and More} % (optional)
\textit{This manuscript is a preprint of the chapter: \vspace{0.1 cm} \\
Swanson-Hysell, N. L. (2021) The Precambrian paleogeography of Laurentia. In: Pesonen, L.J., Salminen, J., Evans, D.A.D., Elming, S.-Å., Veikkolainen, T. (eds.) Ancient Supercontinents and the Paleogeography of the Earth, doi:10.1016/B978-0-12-818533-9.00009-6.}
\vspace{0.3 cm}
\end{@twocolumnfalse} % Method A for two-column formatting
] % Method A for two-column formatting
%\begin{multicols}{2} % Method B for two-column formatting (doesn't play well with line numbers), comment out if using method A

%%%%%%%%%%%%%%% Main text %%%%%%%%%%%%%%%
% \linenumbers

\subsection{Introduction and broad tectonic history}

\lettrine[lines=2]{L}{aurentia} refers to the craton that forms the Precambrian interior of North America and Greenland (Fig. \ref{fig:Laurentia_map}). Laurentia comprises multiple Archean provinces that had unique histories prior to their amalgamation in the Paleoproterozoic (ca. 1.8 billion years ago; Ga), as well as regions of Paleoproterozoic and Mesoproterozoic crustal growth that post-date this assembly (Fig. \ref{fig:Laurentia_map}; \citealp{Hoffman1989c, Whitmeyer2007a}). That the vast majority of the present-day continent of North America is a single Precambrian craton without major differential motion between constituent provinces and relatively minor crustal growth over the past billion years is exceptional in comparison to Earth's other continents. In contrast, South America and Africa are products of the amalgamation of multiple Proterozoic cratons that obtained their relative positions during the formation of Gondwana ca. 0.6 Ga \citep{Goscombe2019a}. Eurasia's constituent Proterozoic cratons have an even more recent history of amalgamation with the North China and South China cratons not arriving in their present relative position until ca. 0.15 Ga \citep{Van-der-Voo2015a,Torsvik2017a}. The longevity of a large intact Laurentia makes its position a critical part of global paleogeographic models since its assembly. Rich geologic, paleomagnetic, and geochronologic data provide deep insight into Laurentia's tectonic history and paleogeographic journey that is the focus of this chapter.

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{../Figures/Fig1_map.pdf}
\caption{\textbf{Simplified map of the tectonic units of Laurentia.} The Archean provinces (labeled with text) and younger Paleoproterozoic and Mesoproterozoic crust are simplified from \cite{Whitmeyer2007a} with additions for Greenland based on \cite{St-Onge2009a}. Proterozoic orogens are labeled with \textit{italicized text} (or. -- orogen; THO -- Trans-Hudson orogen; MCR -- Midcontinent Rift). The localities from which the compiled Precambrian paleomagnetic poles were developed are shown and colored by age. The circles (A rated poles) and squares (B rated poles) have been assessed by the Nordic workshop panel \citep{Evans2021a} while the diamond (not rated -- NR) poles are discussed in the text.}
\label{fig:Laurentia_map}
\end{figure*}

\subsubsection{Laurentia's initial formation}

Collision between the Superior province and the composite Slave\endnote{The term Slave Province is rather jarring when read for the first time given that it brings to mind evil human oppression.
The geologic Slave Province as well as the eponymous Great Slave Lake get their name from a name given to the First Nation indigenous peoples of the Dene Group who are indigenous to the region. The origin of the name Slave is commonly explained as a French translation of the name given to the Dene people by the Cree people \citep{Britannica-Slave2017a}. Given its negative connotations, the name is not preferred now and there are efforts to remove it from Canadian place names \citep{Mandeville2016a}. Given that this name for the Archean geologic province is deeply entrenched in the geologic literature, it will be used here.}+Rae+Hearne provinces that resulted in the Trans-Hudson orogeny represents a major event in the formation of Laurentia (Fig. \ref{fig:Laurentia_map}; \citealp{Corrigan2009a}). Terminal collision recorded in the Trans-Hudson orogen is estimated to have occurred ca. 1.86 to 1.82 Ga based on constraints such as U-Pb dates of monazite grains and zircon rims \citep{Skipton2016a, Weller2017a}. It was preceded by a period of accretionary and collisional orogenesis that is recorded in the constituent provinces and terranes of Laurentia leading up to the terminal collision of the Trans-Hudson orogeny.

Throughout this chapter, I will utilize the term ``accretionary orogenesis'' in a broad sense to refer to the tectonic addition of allochthonous terranes such as island arcs and continental ribbons to Laurentia. ``Collisional orogenesis'' will refer to orogens interpreted to result from the collision of continent-scale blocks associated with ocean closure. This usage follows \cite{Staal2020a} who discuss the ambiguity in such a division as both orogen types are the result of collision of lithospheric blocks as the result of subduction with the distinction between these end members largely being made on the basis of scale. Nevertheless, this categorization has paleogeographic utility given that following accretionary orogenesis there will be an ocean basin along the margin (albeit further outboard than before) whereas following collisional orogenesis the margin will have become part of a continental interior.

The overall story of the rapid Paleoproterozoic amalgamation of Laurentia's constituent Archean provinces, including the terminal Trans-Hudson orogeny, was synthesized in the seminal \textit{United Plates of America} paper of \citet{Hoffman1988a} and has been refined in the time since --- particularly with additional geochronological constraints. Of most relevance here are the events that led to the suturing of the major Archean provinces: the Thelon orogen associated with the collision between the Slave and Rae provinces ca. 2.0 to 1.9 Ga \citep{Hoffman1989c}; the Snowbird orogen associated with ca. 1.90 Ga collision between the Rae and Hearne provinces and associated terranes \citep{Berman2007a, Thiessen2020a}; the Nagssugtoqidian orogen due to the ca. 1.86 to 1.84 Ga collision between the Rae and North Atlantic provinces \citep{St-Onge2009a}; and the Torngat orogen resulting from the ca. 1.87 to 1.85 Ga collision of the southern Meta Incognita province (grouped with the Rae province in older compilations) with the North Atlantic province \citep{St-Onge2009a}.

As for the suturing of the Wyoming province to Laurentia (Fig.
\ref{fig:Laurentia_map}), many models posit that it was conjoined with Hearne and associated provinces at the time of the Trans-Hudson orogeny \citep[e.g.][]{St-Onge2009a, Pehrsson2015a} or was proximal to the Hearne and Superior provinces while still undergoing continued translation up to ca. 1.80 Ga \citep{Whitmeyer2007a}. A contrasting view has been proposed that the Wyoming and Medicine Hat provinces were not conjoined with the other Laurentia provinces until ca. 1.72 Ga \citep{Kilian2016b}. This interpretation is argued to be consistent with geochronological constraints on monazite and metamorphic zircon indicating active orogenesis associated with the Big Sky orogen on the northern margin of the craton as late as ca. 1.75 to 1.72 Ga \citep{Condit2015a} and ca. 1.72 Ga tectonomagmatic activity in the Black Hills region \citep{Redden1990a}. However, evidence for earlier orogenesis ca. 1.78 to 1.75 Ga in the Black Hills \citep{Dahl1999a, Hrncir2017a}, as well as high-grade metamorphism as early as ca. 1.81 Ga in the Big Sky orogen \citep{Condit2015a}, may support the interpretation of \citet{Hrncir2017a} that ca. 1.72 Ga activity is a minor overprint on ca. 1.75 terminal suturing between the Wyoming and Superior provinces. Regardless, in both of these interpretations Wyoming is a later addition to Laurentia with final suturing post-dating ca. 1.82 Ga amalgamation of Archean provinces with the Trans-Hudson orogen further to the northeast. These collisional orogenies, particularly the Trans-Hudson orogeny, are interpreted to be associated with assembly of the supercontinent Nuna \citep{Zhang2012a,Pehrsson2015a}. The Trans-Hudson orogeny is taken to be the terminal collision associated with closure of the Manikewan Ocean, a large oceanic tract that had previously separated the Superior and the North Atlantic provinces from the composite Slave+Rae+Hearne provinces (often referred to as the Churchill domain or plate; e.g. \citealp{Skipton2016a, Weller2017a}; Fig. \ref{fig:Superior_Slave_recons}). The paleogeographic model of \cite{Pehrsson2015a} posits that the closure of the Manikewan Ocean not only resulted in the amalgamation of Laurentia, but was also associated with the assembly of the supercontinent Nuna that is hypothesized to include other major Paleoproterozoic cratons including Baltica, Siberia, Congo, S\~ao Francisco, West Africa, and Amazonia. In this volume, \cite{Elming2021a} put forward an alternate scenario for Nuna paleogeography. In their model, Laurentia, Baltica and Siberia become conjoined at the time of Laurentia amalgamation forming the core of Nuna (as in \citealp{Evans2011a}). This core then subsequently grows to be a semi-supercontinent with India and Australia; however Amazonia, West Africa, Congo and S\~ao Francisco cratons remain independent from Nuna. \begin{figure*} \centering \includegraphics[width=5.25 in]{../Figures/Fig2_Superior_Slave_reconstructions.pdf} \caption{\textbf{Paleogeographic reconstructions developed using poles from the Superior, Slave and Rae provinces.} The polarity options that are chosen for the provinces are those that minimize total apparent polar wander path length. This model follows \cite{Swanson-Hysell2021b} and reconstructs a wide Manikewan ocean that underwent orthogonal closure rather than an alternative possibility of a narrower Manikewan ocean with a pivot-like closure. Paleomagnetic poles are shown colored to match their respective province with these provinces shown in present-day coordinates and labeled in the 0 Ma panel. 
Poles with ages that are within 20 million years of the given time slice are shown. The relatively well-resolved pole paths from the Superior and Slave provinces (Fig. \ref{fig:Laurentia_poles}) that are utilized for these reconstructions provide strong support for differential plate tectonic motion between 2220 and 1850 Ma.}
\label{fig:Superior_Slave_recons}
\end{figure*}

Overall, the collision of Archean microcontinents between ca. 1.9 and 1.8 Ga led to rapid amalgamation of the core of the Laurentia craton (Fig. \ref{fig:Laurentia_map}).

\subsubsection{Protracted Proterozoic accretionary growth followed by collisional orogenesis}

Growth of Laurentia in the Paleoproterozoic also occurred through accretionary orogenesis. In the northwest, this accretion occurred within the Wopmay orogen through ca. 1.88 Ga arc-continent collision that led to the accretion of the Hottah terrane (the Calderian orogeny) and the subsequent emplacement of the Great Bear magmatic zone from ca. 1.88 to 1.84 Ga \citep{Hildebrand2009a}. Coeval with the Trans-Hudson orogeny was accretionary orogenesis on the southern margin of the Superior province (the Penokean orogeny; \citealp{Schulz2007a}) and the southern margin of the North Atlantic province (the Makkovik-Ketilidian orogeny; \citealp{Kerr1996a}). The Penokean orogeny involved the accretion of a microcontinent block (the Marshfield terrane) and arc terranes to the southern margin of the west Superior province ca. 1.86 to 1.82 Ga (Fig. \ref{fig:Laurentia_map}; \citealp{Schulz2007a}). Firm evidence of the end of the Penokean orogeny comes from the ca. 1.78 Ga undeformed plutons of the East Central Minnesota Batholith \citep{Holm2005a, Swanson-Hysell2021b}.

Following the Trans-Hudson orogeny, Laurentia's growth was dominantly through accretion of juvenile crust along the southern and eastern margin of the nucleus of Archean provinces (\citealp{Whitmeyer2007a}; Figs. \ref{fig:Laurentia_map} and \ref{fig:tectonic_history}). Determining the extent of these belts can be complicated by poor exposure in the midcontinent relative to the exposure of the Archean provinces throughout the Canadian shield. Nevertheless, it is well-constrained that major growth of Laurentia following the amalgamation of these Archean provinces occurred associated with arc-continent collision during the ca. 1.71 to 1.68 Ga Yavapai orogeny (Fig. \ref{fig:tectonic_history}). Yavapai orogenesis is interpreted to have resulted from the accretion of a series of arc terranes that collided with each other and Laurentia \citep{Karlstrom2001a}. Potentially associated with the Yavapai orogeny is the accretion of the Mojave province of southwestern Laurentia which experienced metamorphic events between ca. 1.76 and 1.67 Ga (\citealp{Strickland2013a}; Fig. \ref{fig:Laurentia_map}). The Mojave province comprises Paleoproterozoic gneiss that is interpreted based on isotopic data to include reworked Archean lithologies \citep{Bennett1987a}. However, it remains unclear whether the Mojave province should be considered an Archean province or a Yavapai arc terrane built upon minor fragments of Archean lithosphere \citep{Whitmeyer2007a}. Yavapai accretion was followed by widespread emplacement of granitoid intrusions \citep{Whitmeyer2007a}. These intrusions are hypothesized to have stabilized the juvenile accreted terranes that subsequently remained part of Laurentia \citep{Whitmeyer2007a}. Subsequent accretionary orogenesis of the ca.
1.65 to 1.60 Ga Mazatzal orogeny and associated plutonism led to further crustal growth in the latest Paleoproterozoic \citep{Karlstrom1988a}. Yavapai to Mazatzal-age accretionary orogenesis extended across the southeastern margin of Laurentia from the southwestern USA to eastern Canada where it is called the Labradorian orogeny. In eastern Canada, where the orogen was overprinted in the Grenvillian orogeny, it is interpreted to have been active from ca. 1.71 to 1.60 Ga (Fig. \ref{fig:Laurentia_map}; \citealp{Gower1992a, Gower2008b}).

Laurentia's growth continued into the Mesoproterozoic along the southeastern margin through further juvenile terrane and arc accretion. Continental arc magmatism is interpreted to have occurred associated with the Pinwarian orogeny in the northeast Grenville province in Labrador from ca. 1.52 to 1.46 Ga \citep{Gower2002a}. Accretionary orogenesis recorded in the Grenville province includes accretion of the Quebecia composite arc terrane to Laurentia ca. 1.43 to 1.37 Ga \citep{Groulier2020a}. Far to the southwest along the margin in northern New Mexico, metamorphic rocks from Mesoproterozoic sedimentary and volcanic protoliths have been interpreted to indicate an interval of ca. 1.46 to 1.40 Ga orogenesis that has been named the Picuris orogeny \citep{Daniel2013a, Aronoff2016a}. In the midcontinent region, deformation and metamorphism of post-Mazatzal orogeny sedimentary rocks are constrained to have occurred ca. 1.49 to 1.46 Ga associated with the Baraboo orogeny \citep{Medaris2003a, Holm2019a}. Coeval Picuris-Baraboo-Pinwarian orogenesis is indicative of convergent tectonism along the entire length of the southeastern margin of Laurentia. This active-margin, upper-plate setting was the site of voluminous widespread plutonism from ca. 1.48 to 1.35 Ga that resulted in the emplacement of A-type granitoids throughout the previously accreted Paleoproterozoic and Mesoproterozoic provinces. Known as the Granite-Rhyolite Province, these magmatic products extend from the southwestern United States to the Central Gneiss Belt of Ontario northeast of Georgian Bay (Fig. \ref{fig:Laurentia_map}; \citealp{Slagstad2009a}). This magmatism has been interpreted to be a combination of continental arc magmatism and melt generation within a back-arc region of Laurentia's long-lived active margin \citep{Bickford2015a}. Magmatic activity at the younger end of this range (ca. 1.37 Ga) is abundant in the Southern Granite-Rhyolite Province, suggesting a similar active margin setting \citep{Bickford2015a}. While an active margin interpretation for the Granite-Rhyolite Province, with arc and back-arc magmatism, has gained traction within the literature and is consistent with evidence for accretionary orogenesis in the Picuris, Baraboo and Pinwarian orogens, the tectonic setting is often described as enigmatic given earlier interpretations of an anorogenic setting (see references in \citealp{Slagstad2009a}).

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{../Figures/Fig3_Tectonic_history.pdf}
\caption{\textbf{Simplified timeline of Laurentia's tectonic history over the past 1.8 billion years.} Brief summaries and references related to the orogenic and rifting episodes are given in the text. A timeline of large igneous provinces (LIPs) associated with typically brief and voluminous (or interpreted to be voluminous) magmatism is also shown.
The interpreted ages of paleomagnetic poles for Laurentia (not including separated terranes) compiled in this study for the Proterozoic (Table 2) and in \cite{Torsvik2012a} for the Phanerozoic is shown. Abbreviations on the figure: CAMP -- Central Atlantic Magmatic Province; proto-Cord -- proto-Cordilleran.} \label{fig:tectonic_history} \end{figure*} Accretionary orogenesis continued locally along the southeastern margin of Laurentia with the amalgamation and accretion of arcs and back-arcs associated with the ca. 1.25 to 1.22 Ga Elzevirian orogeny \citep{Carr2000a,McLelland2013a}. The subsequent ca. 1.19 to 1.16 Ga Shawinigan orogeny is interpreted to be due to the collision and accretion of a previously rifted fragment of Laurentia that led to obduction of the Pyrites Complex ophiolite \citep{Carr2000a,McLelland2010a, Chiarenzelli2011a}. The Shawinigan orogeny is followed by a period of tectonic quiescence on the eastern margin of Laurentia until the collisional orogenesis of the Grenvillian orogeny \citep{McLelland2010a}. An exception to this quiescence during the interval between the Shawinigan and Grenvillian orogenies is ca. 1.15 to 1.12 Ga orogenesis in the Llano uplift of the southern Laurentia margin \citep{Mosher1998a}. Llano orogenesis is interpreted to have resulted from collision of continental lithosphere along with an accreted arc \citep{Mosher1998a}. This orogenesis is earlier and temporally distinct from the Grenvillian orogeny, is only known from a limited spatial area, and is located in a region that experienced further orogenesis during the Grenvillian orogeny \citep{Grimes2004a}. Taken together, this context is suggestive of a microcontinent collision leading to Llano orogenesis prior to terminal Grenvillian continental collision. If this interpretation is correct, it would be similar to Paleozoic orogenesis along the margin where microcontinent collision resulted in the Acadian orogeny prior to Alleghanian orogenesis during the Appalachian orogenic interval (Fig. \ref{fig:tectonic_history}). The Grenvillian orogeny was a protracted interval of continent-continent collision (ca. 1.09 to 0.98 Ga) leading to amphibolite to granulite facies metamorphism throughout the orogen \citep{Carr2000a, Rivers2008a, Rivers2012a, Indares2020a}. Note that while the terms Grenvillian orogeny and Grenville belt have been used rather loosely in the literature to refer to any late Mesoproterozoic orogenic belt, the timeline of orogenesis on the Laurentia margin has more nuanced constraints than this usage. Properly referring to the Grenvillian orogeny as distinct from the Elzevirian and Shawinigan orogenies enables the available constraints to be comparatively assessed when evaluating potential conjugate continents to Laurentia associated with the orogen (Fig. \ref{fig:tectonic_history}). Evidence of large-scale continent-continent collision at the time of the Ottawan Phase of the Grenvillian orogeny is recorded in Texas \citep{Grimes2004a}, through the Blue Ridge Appalachian inliers \citep{Johnson2020a}, through Ontario and to the Labrador Sea \citep{Rivers2008a, Rivers2012a}. The orogeny is interpreted to have resulted in the development of a thick plateau associated with the Ottawan orogenic phase (ca. 1090 to 1030 Ma; \citealp{Rivers2008a}). Continued convergence during the Rigolet phase of the Grenvillian orogeny led to the development of the Grenville Front tectonic zone and ended ca. 980 Ma \citep{Hynes2010a}. In the latest Mesoproterozoic (ca. 
1.11 to 1.08 Ga) prior to the Grenvillian orogeny, a major intracontinental rift co-located with a large igneous province formed in Laurentia's interior, leading to extension within the Archean Superior province and adjacent Paleoproterozoic provinces to the south (MCR in Fig. \ref{fig:Laurentia_map}; \citealp{Cannon1992b}). This Midcontinent Rift is associated with the emplacement of a thick succession of volcanics and mafic intrusions that are well-preserved in Laurentia's interior. Midcontinent Rift development ceased as major collisional orogenesis of the Grenvillian orogeny began \citep{Cannon1994a, Swanson-Hysell2019a}. There is significantly less preserved Mesoproterozoic crust on the western margin of Laurentia (Fig. \ref{fig:Laurentia_map}) and the tectonic history through the Mesoproterozoic Era is not as well constrained as on the southern to eastern margin. There are thick successions of Paleoproterozoic siliciclastic and carbonate sedimentary rocks such as the ca. 1.66 to 1.62 Ga Wernecke Supergroup in Yukon, Canada \citep{Delaney1986a, Furlanetto2016a}. The Wernecke Supergroup is interpreted to have resulted from rifting followed by passive margin thermal subsidence \citep{Furlanetto2016a}, although the tectonic setting is poorly understood and could be intracratonic. These metasedimentary rocks were deformed and metamorphosed during the ca. 1.60 Ga Racklan-Forward orogeny that is interpreted to be associated with collision of an arc terrane and potentially a conjugate continent \citep{Thorkelson2005a, Furlanetto2013a, Furlanetto2016a}. Further south along Laurentia's western margin, sedimentary rocks of the 15 to 20 km thick Belt-Purcell Supergroup are associated with a ca. 1.47 to 1.40 rift \citep{Evans2000c}. While the rift is typically interpreted as being intracontinental \citep{Lydon2004a}, the tectonic setting in which it formed is debated. \citet{Hoffman1989c} proposed that it may be a remnant back-arc basin trapped within a continent, while others have envisioned it being associated with continental rifting associated with separation of a conjugate continent \citep{Jones2015a}. This region was subsequently deformed during the ca. 1.37 to 1.33 Ga East Kootenay orogeny that is constrained by granite crystallization and authigenic monazite dates \citep{McMechan1982a, Nesheim2012a, McFarlane2015a}. Taken together, this late Paleoproterozoic and Mesoproterozoic tectonic history provides significant constraints on paleogeographic reconstructions. In particular, the long-lived history of accretionary orogenesis along the southeast (present-day coordinates) of Laurentia from the initiation of the Yavapai orogeny (ca. 1.71 Ga) to the end of the Shawinigan orogeny (ca. 1.16 Ga) requires a long-lived open margin without a major conjugate continent until the time of terminal Grenvillian collisional orogenesis \citep{Karlstrom2001a}. This constraint is incorporated into paleogeographic models such as that of \citet{Zhang2012a} and \citet{Pehrsson2015a} which maintain a long-lived convergent margin throughout the Mesoproterozoic, but in some reconstructions other continental blocks are reconstructed into positions that are seemingly incompatible with this record of accretionary orogenesis (e.g. Amazonia in \citealp{Elming2009a, Elming2021a}). The great extent of the high-grade metamorphism associated with the Ottawan phase of the Grenvillian orogeny itself strongly suggests a collision between Laurentia and another continent ca. 
1080 Ma --- the geological observation of which first led to the formulation of the hypothesis of the supercontinent Rodinia \citep{Hoffman1991a}. This extensive and major collisional orogenesis on Laurentia's margin, and that of other Proterozoic continents, remains a strong piece of evidence that a supercontinent or (proto)supercontinent formed at the 1.0 Ga Mesoproterozoic to Neoproterozoic transition.

\subsubsection{Neoproterozoic rifting}

The subsequent Neoproterozoic tectonic history of Laurentia is dominantly a record of rifting (Fig. \ref{fig:tectonic_history}). Along the western margin of Laurentia, by ca. 760 Ma there was rifting leading to deposition in basins from the Death Valley region of southwestern Laurentia to the Ogilvie Mountains of northwestern Laurentia \citep{Macdonald2013a, Strauss2015a, Dehler2017a, Rooney2017a}. The emplacement of the ca. 780 Ma Gunbarrel large igneous province \citep{Harlan2003a} along this margin and the subsequent extension recorded in the western Laurentia basins is commonly interpreted to be associated with the break-up of Laurentia and a conjugate continent to the western margin (e.g. \citealp{Li2008a}) although this interpretation is difficult to reconcile with the subsequent history of basin development. Extensional basin development continued into the Cryogenian Period with active normal faulting occurring during the deposition of both Sturtian (ca. 717 to 656 Ma) and Marinoan (ca. 645 to 635 Ma) glacial deposits in southwestern Laurentia \citep{Yonkee2014a, Nelson2020a}. Additionally, Cryogenian volcanics along the western Laurentia margin (e.g. \citealp{Eyster2018a}) are interpreted to be the result of active rifting. A puzzling feature of this record of active rifting is that it significantly predates interpreted passive margin thermal subsidence closer to the ca. 539 Ma Neoproterozoic-Phanerozoic boundary that has been linked to lithospheric thinning \citep{Bond1984a, Levy1991a}. If the interpretation of a conjugate continent rifting off the margin prior to the ca. 717 Ma Tonian-Cryogenian boundary is correct, it is unclear why there would be minimal thermal subsidence until the Ediacaran (after 635 Ma in \citealp{Levy1991a}). While the geological evidence supports prolonged extensional tectonism along the western margin of Laurentia, it suggests that significant lithospheric thinning occurred later than the timing of rifting typically implemented in models of Rodinia break-up. One explanation is that Tonian extensional basin development was associated with transtension in a dominantly strike-slip tectonic regime \citep{Smith2015b, Strauss2015a}. The record of Neoproterozoic basin development led \cite{Yonkee2014a} to propose that the early ca. 780 Ma rifting was intracratonic and that while it may have led to some associated thermal subsidence, there was a second interval of rifting and thermal subsidence associated with Australia rifting away in the Ediacaran (later than in most models). Another possibility, along the lines of the tectonic scenarios proposed by \citet{Ross1991a} and \citet{Colpron2002a}, is that ca. 760 Ma extensional tectonism is an inboard record of rifting and passive margin development that dominantly occurred further to the west. In this model, subsequent continental rifting that drove lithospheric thinning, perhaps associated with the departure of a ribbon continent rather than an already departed major conjugate continent, would be the cause of Ediacaran to Cambrian thermal subsidence.
In northwestern Laurentia from the Ogilvie Mountains (Yukon, Canada) to Victoria Island (Nunavut, Canada), the sedimentary rock record is distinct from that further south as it also records earlier Neoproterozoic basin development during the Tonian Period in addition to Cryogenian basin development \citep{Macdonald2012a}. Lithospheric extension is interpreted from basin development that accommodated deposition of the lower Fifteenmile Group with maximum depositional ages of ca. 1050 Ma with ongoing basin development ca. 812 Ma (age constraint from a U-Pb zircon date on a tuff within the upper Fifteenmile Group; \citealp{Macdonald2010a}) that may have been accommodated through thermal subsidence \citep{Macdonald2012a}. Earlier basin development in the region recorded by the Mesoproterozoic/Neoproterozoic Pinguicula Group could provide valuable insight on the tectonic history as it has been interpreted to have been deposited in an extensional basin \citep{Medig2016a}, however it is poorly constrained in terms of age --- older than the Fifteenmile Group and younger than the ca. 1382 Ma Hart River sills (which themselves have been interpreted to be emplaced in conjunction with rifting; \citealp{Verbaas2018a}). As in southwestern Laurentia, extensional basin development continued into the Cryogenian and Ediacaran which accommodated deposition of the Windermere Supergroup \citep{Moynihan2019a}. Ediacaran rifting transitioned to thermal subsidence in the Cambrian and an early Paleozoic passive margin \citep{Moynihan2019a}. The record of Neoproterozoic basin development on the northern Franklinian margin of Laurentia is more difficult to resolve given the truncation of the margin by the Paleozoic Ellesmerian orogeny \citep{Cocks2011a}. The North Slope terrane of Arctic Alaska is considered to have been part of northeastern Laurentia in the Neoproterozoic and shows a similar history of Tonian to Cryogenian extensional tectonism with limited overall lithospheric stretching that was followed by Ediacaran to early Cambrian extension that led to passive margin sedimentation \citep{Strauss2019a}. Another margin that experienced rifting and associated passive margin thermal subsidence earlier in the Neoproterozoic is the northeastern Greenland margin, including the displaced terrane of northeastern Svalbard, Norway (Fig. \ref{fig:tectonic_history}). Available geochronological constraints and thermal subsidence modeling indicate ca. 850 to 820 Ma rifting followed by thermal subsidence of a stable carbonate platform \citep{Maloof2006a, Sonderholm2008a, Halverson2018a}. These data suggest that conjugate continental lithosphere rifted away from northeastern Greenland by ca. 820 Ma although some models place northeastern Greenland in a distal backarc setting \citep{Malone2014a}. Extensive rifting followed by thermal subsidence occurred along the southeastern to eastern Laurentia margin in the time leading up to the Neoproterozoic-Phanerozoic boundary and is interpreted to be associated with the opening of the Iapetus ocean. On the southern part of the eastern margin, a record of this rifting is preserved as rift basins that were part of failed arms (Rome trough, Reelfoot rift and Oklahoma aulacogen; Fig. \ref{fig:Laurentia_map}) as well as prolonged Cambrian to Ordovician passive margin thermal subsidence along the margin \citep{Bond1984a, Whitmeyer2007a}. 
The ages of igneous intrusions that have been interpreted to be rift-related play a significant role in interpretations of this history such as in the rift development model of \citet{Burton2010a}. In this model, spatially-restricted rifting occurs ca. 760 to 680 Ma in the region of present-day North Carolina and Virginia. Rifting initiated in the region from present-day New York to Newfoundland ca. 620 to 580 Ma associated with the Central Iapetus Magmatic Province (CIMP; Fig. \ref{fig:Grenville_reconstructions}) and by ca. 580 to 550 Ma rifting extended along the length of Laurentia's eastern margin. The last phase of this rifting has been interpreted to be associated with the separation of the Argentine pre-Cordillera Cuyania terrane ca. 540 Ma \citep{Dickerson1998a, Martin2019a}. As with other rifted margins, it is difficult to distinguish the separation of a cratonic fragment as a microcontinent from the rifting and departure of a major craton, as the record that lingers on the craton is similar. Recognizing this ambiguity, it has been proposed that rather than being associated with spatially-restricted or failed rifting, ca. 700 Ma extension (which occurred in the southern part of the eastern margin) was associated with breakup and separation of a conjugate continent and that there was subsequent rifting of smaller terranes off the eastern Laurentia margin ca. 600 Ma \citep{Chew2008a, Escayola2011a, Robert2020a}. In these models, the separation of a conjugate continent ca. 700 Ma leads to the formation of an ocean termed the Puncoviscana Ocean by \citet{Escayola2011a} and the Paleo-Iapetus Ocean by \citet{Robert2020a}. The subsequent rifting of terranes off the Laurentia margin ca. 600 Ma would have resulted in the formation of the (Neo-)Iapetus Ocean and the record of thermal subsidence associated with passive margin development on Laurentia \citep{Escayola2011a, Robert2020a}.

\subsubsection{Similarities in Laurentia's Proterozoic and Phanerozoic tectonic histories}

The eastern margin of Laurentia then went through multiple phases of Appalachian orogenesis. As is visualized in Figure \ref{fig:tectonic_history}, there are parallels between the Grenville and Appalachian orogenic intervals in that in both cases there was a period of arc-continent collision (Elzevirian orogeny in the Grenville interval; Taconic orogeny in the Appalachian interval) followed by microcontinent accretion (Shawinigan/Llano orogenies in the Grenville interval; Acadian orogeny in the Appalachian interval) that culminated in large-scale continent-continent collision (Grenvillian orogeny in the Grenville interval; Alleghanian orogeny in the Appalachian interval). These similarities are the consequence of an active margin facing an ocean basin that was progressively consumed until continent-continent collision. In the case of the Grenville interval, this terminal collision is interpreted to be associated with the assembly of the supercontinent Rodinia, and in the Appalachian interval it is interpreted to be associated with the assembly of the supercontinent Pangea. Even without considering other continents on Earth, the geological record of Paleoproterozoic collision of Archean provinces combined with accretionary orogenesis at that time and through the rest of the Paleoproterozoic and Mesoproterozoic Eras provides strong evidence for mobile plate tectonics driving Laurentia's evolution throughout the past 2 billion years.
This tectonic history inferred from geological data can be enhanced through integration with the paleomagnetic record. \subsection{Paleomagnetic pole compilation} \begin{figure*}[!h] \centering \includegraphics[width=\textwidth]{../Figures/Fig4_Laurentia_poles_combined.pdf} \caption{\textbf{Paleomagnetic poles from Laurentia.} Upper-left panel: Paleomagnetic poles from 1800 to 720 Ma for Laurentia (including Greenland). Portions of the apparent polar wander path (APWP) are referred to by the names Logan loop, Keweenawan track, and Grenville loop in the literature and those are labeled next to the associated poles. Upper-right panel: Paleomagnetic poles from 1800 to 580 Ma for Laurentia (including those from the separated terranes of Greenland, Scotland and Svalbard rotated to Laurentia coordinates). The youngest poles from the Ediacaran Period have unusually variable positions as discussed in the text. Lower-left panel: Running mean APWP calculated with a 20 million year moving window. Lower-right panel: Poles for the Archean provinces of Laurentia prior to Laurentia's Paleoproterozoic amalgamation.} \label{fig:Laurentia_poles} \end{figure*} In this chapter, I focus on the compilation of paleomagnetic poles developed through the Nordic Paleomagnetism Workshops with some additions and modifications (Fig. \ref{fig:Laurentia_poles} and Table 2). The Nordic Paleomagnetism Workshops have taken the approach of using expert panels to assess paleomagnetic poles and assign them grades to convey the confidence that the community has in these results \citep{Evans2021a}. While many factors associated with paleomagnetic poles can be assessed quantitatively through Fisher statistics and the precision of geochronological constraints, other aspects such as the degree to which available field tests constrain the magnetization to be primary require expert assessment. The categorizations used by the expert panel are `A' and `B' with the last panel meeting occurring in Fall 2017 in Leirubakki, Iceland \citep{Brown2018a}. The `A' rating refers to poles that are judged to be of such high quality that they provide essential constraints that should be satisfied in paleogeographic reconstructions. The `B' rating is associated with poles that are judged to likely provide a high-quality constraint, but have some deficiency such as remaining ambiguity in the demonstration of primary remanence or the quality/precision of available geochronologic constraints. In addition, in this chapter I refer to select additional poles that were not given an `A' or `B' classification at the Nordic Workshops as not-rated (`NR'). These additional poles are taken from the Paleomagia database \citep{Veikkolainen2014a} in conjunction with the papers in which they were reported. Many additional poles in the database to those that are rated are valuable and should not be dismissed from being considered in paleogeographic reconstructions. However, there are ambiguities associated with many of the poles not given Nordic `A' or `B' ratings in terms of how well the nature of the remanence is constrained, including its age. For example, there are many poles that have been developed from lithologies of the Trans-Hudson orogen (e.g. \citealp{Symons2005a}). However, these poles typically lack field tests, have poor control on paleohorizontal and are hypothesized to suffer from variable overprints as discussed in \cite{Raub2008a} and \cite{DAgrella-Filho2020a}. 
Given the preponderance of similar directions, three representative poles from the Trans-Hudson orogen are included in the Nordic Workshop compilation (Table 2) as `B' rated poles. However, they should be used with caution given that they suffer from the same deficiencies. There are similar issues with the rich data sets associated with intrusive and metamorphic lithologies of the Grenville Province that are the available paleomagnetic constraints for Laurentia at the Mesoproterozoic-Neoproterozoic boundary. The ages of the remanence associated with these poles are complicated by the reality that the magnetization was acquired during exhumation and associated cooling within the Grenville orogen. Cooling ages of deeply exhumed lithologies are more difficult to robustly constrain than the ages of remanence associated with dated eruptive units or shallow-level intrusions. As a result, the vast majority of Grenville Province poles are not given an `A' or `B' rating, with the exception of the `B' rated pole from the ca. 1015 Ma Haliburton intrusions (Table 2). However, while any one of these Grenville poles could be interpreted to suffer from large temporal uncertainty, the overall preponderance of poles in a similar location at the time suggests that they need to be taken seriously within paleogeographic reconstructions of Laurentia (although an alternative view of an allochthonous origin put forward by \citealp{Halls2015a} is discussed below). In this compilation, the poles of \cite{Brown2012a} from the Adirondack highlands, for which the magnetic mineralogy and associated relative ages of remanence are relatively well-constrained, are included (Table 2). Additional not-rated poles, published after the ratings were conferred, that are included in the present compilation are: 1) a pole from the ca. 1780 Ma East Central Minnesota Batholith that confirms that the Slave, Hearne, Rae, and Superior provinces were part of coherent amalgamated Laurentia following the Trans-Hudson orogeny \citep{Swanson-Hysell2021b}; 2) a pole for the ca. 1144 Ma Ontario lamprophyre dikes \citep{Piispa2018a} that strengthens the constraint on Laurentia's position at that time and coincides with the position of the poles from the ca. 1140 Ma Abitibi dikes \citep{Ernst1993a}. These poles will likely receive an `A' rating when assessed at the next Nordic paleomagnetism workshop. Poles from the Neoproterozoic Chuar Group of southwestern Laurentia (ca. 760 Ma) as presented in \cite{Eyster2020a}, incorporating data from \cite{Weil2004a}, are also included. \subsection{Differential motion before Laurentia amalgamation} Prior to the termination of the Trans-Hudson orogeny (before 1.8 Ga), paleomagnetic poles need to be considered with respect to the individual Archean provinces. For the Superior province, an additional complexity is that paleomagnetic poles from Siderian to Rhyacian Period (2.50 to 2.05 Ga) dike swarms, as well as deflection of dike trends, support an interpretation that there was substantial Paleoproterozoic rotation of the western Superior province relative to the eastern Superior province across the Kapuskasing Structural Zone \citep{Bates1991a, Evans2010a}. This interpretation is consistent with the hypothesis of \citet{Hoffman1988a} that the Kapuskasing Structural Zone represents major intracratonic uplift related to the Trans-Hudson orogeny.
\cite{Evans2010a} propose an Euler rotation of (51\textdegree N, 85\textdegree W, -14\textdegree CCW) to reconstruct western Superior relative to eastern Superior and interpret that the rotation occurred in the time interval of 2.07 to 1.87 Ga. I follow this interpretation and group the poles into Superior (West) and Superior (East). There are poles in the compilation for the Slave, Wyoming, Rae, Superior and North Atlantic provinces prior to Laurentia amalgamation (Fig. \ref{fig:Laurentia_poles} and Table 2). Overall, these data provide an opportunity to re-evaluate the paleomagnetic evidence for relative motions between Archean provinces prior to Laurentia assembly. A lingering question raised in \citet{Hoffman1988a} is to what extent the Archean provinces each had independent drift histories with significant separation or shared histories before experiencing fragmentation and reamalgamation. The strongest analysis in this regard comes from comparisons between paleomagnetic poles from the Superior and Slave provinces \citep{Buchan2009a, Mitchell2014a, Buchan2016a}. High-quality paleomagnetic poles from these two provinces provide strong support for differential motion between the Superior and Slave provinces between 2.2 and 1.8 Ga, with the two provinces not being in their present-day relative orientation to one another and having distinct pole paths, as constrained by five time slices of nearly coeval poles between 2.23 and 1.89 Ga (Fig. \ref{fig:Superior_Slave_recons}; \citealp{Buchan2016a}). These data provide paleomagnetic support for the Superior and Slave provinces having independent histories of differential motion. The data also support the hypothesis that the Trans-Hudson orogeny is the result of terminal collision associated with the closure of the Manikewan Ocean between the Superior province and the Hearne+Rae+Slave provinces. Reconstructions of the Superior and Slave provinces developed for this chapter using these poles are shown in Figure \ref{fig:Superior_Slave_recons} and illustrate the differences in implied orientation and paleolatitude that result from these well-constrained poles. \subsection{Paleogeography of an assembled Laurentia} Following the amalgamation of the Archean provinces in Laurentia by ca. 1.8 Ga, poles from each part of Laurentia can be considered to reflect the position of the entire composite craton. This interpretation of a coherent Laurentia is confirmed paleomagnetically by the consistent positions of ca. 1780 to 1740 Ma poles from the Slave, Hearne/Rae, and Superior Provinces (Fig. \ref{fig:Superior_Slave_recons}; \citealp{Swanson-Hysell2021b}). It is worth considering the possibility that poles from zones of Paleoproterozoic and Mesoproterozoic accretion could be allochthonous to the craton. \cite{Halls2015b} argued that this was the case for late Mesoproterozoic and early Neoproterozoic poles from east of the Grenvillian allochthon boundary fault. However, the majority of researchers have considered these poles to post-date major differential motion and be associated with cooling during collapse of a thick orogenic plateau developed during continent-continent collision (e.g. \citealp{Brown2012a}). Poles with a `B' rating from Greenland, Svalbard and Scotland are also included in the compilation. These terranes were once part of contiguous Laurentia, but have subsequently been separated through translation and rifting.
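Reconstructing poles from such separated terranes (or from the western Superior block discussed above) into a common Laurentia reference frame amounts to applying a finite rotation about an Euler pole to each paleomagnetic pole. As a minimal illustration of the operation (a sketch written for this chapter rather than the code used to produce the figures; the input pole below is hypothetical), the western Superior rotation described above can be applied with the Rodrigues rotation formula:

\begin{verbatim}
# Minimal sketch: apply a finite Euler rotation to a paleomagnetic pole
# using the Rodrigues rotation formula. Not from any published package;
# the example input pole is illustrative only.
import numpy as np

def to_cartesian(lat, lon):
    """Unit vector from latitude/longitude in degrees."""
    lat, lon = np.radians([lat, lon])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_latlon(v):
    """Latitude/longitude in degrees from a unit vector."""
    return (np.degrees(np.arcsin(v[2])),
            np.degrees(np.arctan2(v[1], v[0])))

def rotate_pole(pole_lat, pole_lon, euler_lat, euler_lon, angle_deg):
    """Rotate a pole by angle_deg (positive = CCW) about an Euler pole."""
    k = to_cartesian(euler_lat, euler_lon)   # rotation axis
    p = to_cartesian(pole_lat, pole_lon)     # pole to be rotated
    a = np.radians(angle_deg)
    p_rot = (p * np.cos(a) + np.cross(k, p) * np.sin(a)
             + k * np.dot(k, p) * (1 - np.cos(a)))
    return to_latlon(p_rot)

# Western Superior relative to eastern Superior as discussed in the text:
# Euler pole at 51 N, 85 W with a rotation of -14 degrees.
# The input pole (-40 N, 220 E) is a placeholder, not a real result.
print(rotate_pole(-40.0, 220.0, 51.0, -85.0, -14.0))
\end{verbatim}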
These poles need to be rotated into the Laurentia reference frame prior to use for tectonic reconstruction, and I apply the rotations shown in Table \ref{tab:terrane_rotations}. The Euler pole and rotation are quite well-constrained for Greenland as they are associated with recent opening of Baffin Bay and the Labrador Sea (for which the rotation of \citealp{Roest1989a} is used). The reconstruction of Scotland is associated with the opening of the Atlantic (for which the rotation employed by \citealp{Torsvik2017a} is used), which is well-constrained but has more uncertainty associated with the Euler pole than that for Greenland. The reconstruction of Svalbard is more challenging and non-unique given a multi-stage tectonic history involving both translation within the Caledonides and subsequent rifting. The preferred Euler pole parameters of \cite{Maloof2006a} are used here for this reconstruction. This Euler rotation is designed, in particular, to honor the high degree of similarity between Tonian sediments in East Greenland and those of East Svalbard \citep{Maloof2006a, Hoffman2012a} and to reconstruct East Svalbard to be aligned with these correlative sedimentary rocks. \begin{table}[hbt] \caption{Rotations of separated terranes} {\scriptsize \begin{tabular}{|l|l|l|l|p{1.1 in}|} \hline & Euler pole & Euler pole & rotation & note and \\ Terrane & longitude & latitude & angle & citation \\ \hline Greenland & -118.5 & 67.5 & -13.8 & Cenozoic separation of Greenland from Laurentia associated with opening of Baffin Bay and the Labrador Sea \citep{Roest1989a} \\ \hline Scotland & 161.9 & 78.6 & -31.0 & Reconstruction of Atlantic Ocean opening following \cite{Torsvik2017a} \\ \hline Svalbard & 305.0 & 81.0 & -68 & Rotate Svalbard to Laurentia in a fit that works well with East Greenland basin correlation following \cite{Maloof2006a}\\ \hline \end{tabular} } \label{tab:terrane_rotations} \end{table} Through the Proterozoic there are intervals where there are abundant paleomagnetic poles that constrain Laurentia's position and intervals when the record is sparse (shown colored by age in Fig. \ref{fig:Laurentia_poles}). To further visualize the temporal coverage of the poles and to summarize the motion, implied paleolatitudes for an interior point on Laurentia are shown in Figure \ref{fig:Laurentia_paleolatitude}. The ages of the utilized paleomagnetic poles are also shown in comparison to the simplified summary of tectonic events in Figure \ref{fig:tectonic_history}. Both collisional and extensional tectonism can result in the formation of lithologies that can be used to develop paleomagnetic poles either as a result of basin formation, magmatism or both. In addition, intraplate magmatism resulting from plume-related large-igneous provinces (LIPs) can lead to paleomagnetic poles in periods that are otherwise characterized by tectonic quiescence (e.g. the ca. 1267 Ma Mackenzie LIP; Fig. \ref{fig:tectonic_history}). Intracontinental rifts have led to the highest density of poles both in the case of the ca. 1.4 Ga Belt-Purcell Supergroup and the ca. 1.1 Ga Midcontinent Rift (Figs. \ref{fig:Laurentia_map} and \ref{fig:tectonic_history}). The quality and resolution of the record from the Midcontinent Rift is aided by the voluminous magmatism that occurred in conjunction with basin formation, which enables the development of a well-calibrated apparent polar wander path \citep{Swanson-Hysell2019a}. The late Tonian Period also has a number of poles including the Gunbarrel LIP (ca.
780 Ma) and Franklin LIP (ca. 720 Ma), as well as similarly-aged sedimentary rocks from western Laurentia basins \citep{Eyster2020a}. Overall, there is internal consistency among the paleomagnetic poles within intervals for which there is high-resolution coverage. These data result in progressive paths (Fig. \ref{fig:Laurentia_poles}) such as the ascent to the ca. 1140 to 1108 Ma apex of the Logan Loop \citep{Robertson1971a} and the descent down the ca. 1108 to 1080 Ma Keweenawan Track \citep{Swanson-Hysell2019a} to the ca. 980 Ma Grenville Loop \citep{McWilliams1975a}, prior to a temporal gap before the late Tonian (ca. 775 to 720 Ma) path \citep{Eyster2020a}. Data from other terranes add resolution to the record. In particular, data from Greenland add 12 poles between 1385 and 1160 Ma, when there are only four poles from mainland Laurentia. Given that the rotation between Greenland and mainland Laurentia is well-constrained (Table \ref{tab:terrane_rotations}), once they have been rotated, these poles can be used for reconstruction of the entire craton. The reliability of this approach gains credence through the good agreement between the ca. 1633 Ma Melville Bugt diabase dikes pole from Greenland \citep{Halls2011a} and the ca. 1590 Ma Western Channel diabase pole of mainland Laurentia (\citealp{Irving1972a}; Figs. \ref{fig:Laurentia_poles} and \ref{fig:Laurentia_paleolatitude}). Similarly, there is good agreement between the ca. 1267 Ma Mackenzie dikes pole of Laurentia \citep{Buchan2000a} and coeval poles from Greenland such as the ca. 1275 Ma North Qoroq intrusives \citep{Piper1992a} and Kungnat Ring dike \citep{Piper1977a}. Furthermore, the Greenland poles with ages that fall between the ca. 1237 Ma Sudbury dikes and ca. 1144 Ma lamprophyre dikes pole of mainland Laurentia are consistent with older and younger constraints from mainland Laurentia while filling in the ascending limb of the path leading up to the apex of 1140 to 1108 Ma poles known as the Logan Loop (Figs. \ref{fig:Laurentia_poles} and \ref{fig:Laurentia_paleolatitude}). An exception to this overall agreement between coeval poles from Greenland and mainland Laurentia occurs ca. 1382 Ma. There are poles of this age from Greenland associated with the Zig-Zag Dal basalts and related intrusions \citep{Marcussen1983a, Abrahamsen1987a}. However, these poles are in a distinct location from poles of similar age associated with the Belt-Purcell Supergroup (e.g. the McNamara Formation and Pilcher/Garnet Range and Libby Formations; \citealp{Elston2002a}). Additionally, the older Belt-Purcell Supergroup poles form a more southerly population than time-equivalent poles from elsewhere in Laurentia such as the Mistastin Pluton. There are potential complications associated with the Belt-Purcell Supergroup being exposed within thrust sheets with significant Mesozoic and Cenozoic deformation. However, vertical axis rotations of the Belt region are not able to bring the Belt poles into agreement with those from Laurentia or Greenland, nor is translation away from the craton. Another potential complication is that the remanence used for the development of the Belt-Purcell Supergroup poles resides in hematite. As a result, there is the potential for inclination-flattening within the sedimentary rocks from which poles are developed. However, applying a moderate inclination correction factor of $f=0.6$ also does not bring the poles into congruence with the Zig-Zag Dal basalts.
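For reference, the inclination correction referred to above uses the standard flattening relation for sediment-hosted remanence, in which the recorded inclination $I_{\mathrm{obs}}$ is shallowed relative to the field inclination $I_{\mathrm{field}}$ by a flattening factor $f$:
\[
\tan I_{\mathrm{obs}} = f \tan I_{\mathrm{field}}, \qquad I_{\mathrm{field}} = \arctan\!\left(\frac{\tan I_{\mathrm{obs}}}{f}\right),
\]
so that, with $f=0.6$, a hypothetical observed inclination of 30\textdegree\ would unflatten to approximately 44\textdegree.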
There is the potential that the hematite could be the result of post-depositional oxidation; the remanence of the Purcell lavas pole is also held by hematite such that it is a chemical remanent magnetization (potentially acquired soon after eruption) rather than being a thermal remanent magnetization held by magnetite \citep{Elston2002a}. However, the overall coherency of the pole directions from the Belt-Purcell Supergroup and the presence of geomagnetic reversals as interpreted from antipodal directions have been taken as evidence that the remanence is primary \citep{Elston2002a}. At present, it is unclear which poles are a better representation of Laurentia's position ca. 1400 Ma. The interval of the record when there are the most significant inconsistencies between poles of similar age is the Ediacaran Period at the end of the Neoproterozoic Era (Figs. \ref{fig:Laurentia_poles} and \ref{fig:Laurentia_paleolatitude}). Between 583 and 565 Ma, paleomagnetic poles imply both low-latitude and high-latitude positions of Laurentia (Fig. \ref{fig:Laurentia_paleolatitude}). This conflicting record is a longstanding problem and has led to the presentation of both high-latitude and low-latitude Laurentia paleogeographic reconstructions for this time (e.g. \citealp{Pisarevsky2001a,Li2008a}). One explanation for these variable pole positions is that they are the result of large-scale oscillatory true polar wander in the Ediacaran where rapid rotation of the entire silicate Earth influenced poles in Baltica and West Africa as well \citep{McCausland2007a, Robert2017a}. Paleodirectional data from single feldspar crystals from the Sept-\^Iles layered intrusion led \cite{Bono2015a} to interpret the lower inclination (and therefore lower latitude) direction from the intrusion (the one included as the ca. 565 Ma Sept-\^Iles pole in Table 2; \citealp{Tanczyk1987a}) as the primary thermal remanent magnetization. \cite{Bono2015a} interpreted steeper directions also recovered from the intrusives as the result of remagnetization. They suggested that other steep magnetizations from Ediacaran Laurentia plutonic rocks, such as that observed in the ca. 583 Ma Baie des Moutons complex (the A group of \cite{McCausland2011a} in Table 2), are also the result of remagnetization. The lower inclination Baie des Moutons complex B Group directions result in a pole that is indistinguishable from the lower inclination Sept-\^Iles intrusives pole. Another possibility discussed in the literature is that the lack of congruency between poles in this time interval is due to a particularly weak and non-dipolar geomagnetic field \citep{Abrajevitch2010a, Halls2015a, Bono2019a}. Data from the ca. 585 Ma Grenville dyke swarm of Laurentia, which are interpreted as primary, reveal $\sim$90\textdegree\ differences in direction within dikes dated within 2.5 $\pm$ 0.9 million years of one another \citep{Halls2015a}. The rates of $>$26\textdegree /Myr ($>$288 cm/yr) implied if these data are interpreted as resulting from plate motion or true polar wander were considered dynamically implausible by \cite{Halls2015a}, leading the authors to favor a deviation from axial dipolar behavior as the explanation for disparate Ediacaran directions.
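For context, the linear rates quoted above follow from the angular rates via the arc length subtended at Earth's surface,
\[
v = \omega \, \frac{\pi R_{\mathrm{E}}}{180^{\circ}} \approx 26^{\circ}/\mathrm{Myr} \times 111~\mathrm{km~per~degree} \approx 2.9 \times 10^{3}~\mathrm{km/Myr} \approx 290~\mathrm{cm/yr},
\]
taking $R_{\mathrm{E}} \approx 6371$ km, consistent with the $>$288 cm/yr figure quoted from \cite{Halls2015a}.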
Estimates of magnetic paleointensity in these Grenville dikes are anomalously weak \citep{Thallner2021a}, as are data from coeval volcanics in Ukraine \citep{Shcherbakova2019a}, which could support an anomalous deviation from stable axial dipolar geomagnetic field behavior at the time as interpreted by \cite{Halls2015a}. Regardless of mechanism, the Ediacaran data stand out as anomalous relative to the coherency of the rest of the poles in the compiled record for Laurentia (Fig. \ref{fig:Laurentia_paleolatitude}). \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{../Figures/Fig5_Laurentia_paleolatitude.pdf} \caption{\textbf{Laurentia paleolatitude through time in data and models.} Top panel: Paleolatitude for the city of Duluth on the southern margin of the Superior province (present-day coordinates of lat=46.79\textdegree N, lon=92.10\textdegree W) implied by paleomagnetic poles from Laurentia and associated terranes. The paleomagnetic poles are compiled in Table 2. Middle panel: Paleolatitude implied by Laurentia poles compared with that implied by published paleogeographic models and the simple Laurentia model used in this chapter for the reconstructions in Figure \ref{fig:Laurentia_reconstructions}. TC2017 refers to \cite{Torsvik2017a} and SHM2017 refers to \cite{Swanson-Hysell2017a}. Bottom panel: The velocity implied by the continuous paleogeographic models in cm per year for the Duluth reference point on Laurentia.} \label{fig:Laurentia_paleolatitude} \end{figure*} Synthesizing the compilation of paleomagnetic poles for Laurentia into a composite path over the past 1.8 billion years presents a challenge given the highly variable temporal coverage. The method typically applied in the Phanerozoic is to develop synthesized pole paths either by fitting spherical splines through the data or by calculating binned running means where the Fisher mean of poles within a given interval is calculated \citep{Torsvik2012a}. Applying such an approach can reduce the influence of spurious poles. Such synthesis is particularly important in regions of high data density where seeking to satisfy every mean pole position would result in jerky motion. A synthesized pole path for Laurentia is developed here and used to develop a paleogeographic reconstruction of Laurentia constrained by the compilation of paleomagnetic poles. The paleolatitude implied by this continuous simple Laurentia pole interpolation model is shown in Figure \ref{fig:Laurentia_paleolatitude}. This path is based on Laurentia data alone, which means that it is poorly constrained through intervals of sparse data (950 to 850 Ma, for example). One could use interpretations of paleogeographic connections with other cratons (e.g. Baltica in the early Neoproterozoic) to fill in such portions of the path; however, the result then becomes model-dependent without being constrained by data from Laurentia itself. In portions of the record with a denser coverage of poles, such as ca. 1450 Ma, a calculated running mean is used to integrate constraints from multiple poles (shown in Fig. \ref{fig:Laurentia_poles}). This method follows the approach taken in the Phanerozoic (e.g. \citealp{Torsvik2012a}) wherein all poles within a 20-Myr interval are averaged with the interval then progressively moved forward in 10-Myr steps. When there are isolated `A' grade poles without other temporally-similar poles, these poles are fully satisfied in the model. Where there are no constraints, a simple interpolation between constraints is made.
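To make this procedure concrete, the sketch below computes such a binned running mean (the Fisher mean direction being the normalized vector resultant of the unit pole vectors) and the paleolatitude a mean pole implies for the Duluth reference point of Figure \ref{fig:Laurentia_paleolatitude}. It is a minimal numpy illustration with placeholder pole values, not the compilation of Table 2 nor the code used to produce the figures:

\begin{verbatim}
# Minimal sketch: 20-Myr running Fisher means stepped by 10 Myr, and the
# paleolatitude implied for a reference site. Pole values are placeholders.
import numpy as np

def unit(lat, lon):
    lat, lon = np.radians([lat, lon])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def fisher_mean(latlons):
    """Fisher mean direction = direction of the vector resultant."""
    r = np.sum([unit(lat, lon) for lat, lon in latlons], axis=0)
    r /= np.linalg.norm(r)
    return np.degrees(np.arcsin(r[2])), np.degrees(np.arctan2(r[1], r[0]))

def running_means(poles, window=20.0, step=10.0):
    """poles: list of (age_Ma, pole_lat, pole_lon) tuples."""
    ages = [p[0] for p in poles]
    means, t = [], min(ages)
    while t <= max(ages):
        in_win = [(plat, plon) for age, plat, plon in poles
                  if abs(age - t) <= window / 2.0]
        if in_win:
            means.append((t, *fisher_mean(in_win)))
        t += step
    return means

def paleolatitude(site_lat, site_lon, pole_lat, pole_lon):
    """Paleolatitude = 90 deg minus the site-to-pole great-circle distance."""
    c = np.clip(np.dot(unit(site_lat, site_lon),
                       unit(pole_lat, pole_lon)), -1.0, 1.0)
    return 90.0 - np.degrees(np.arccos(c))

# Placeholder poles (age in Ma, pole latitude, pole longitude):
poles = [(1450, -10, 215), (1445, -12, 219), (1440, -8, 212)]
for age, mlat, mlon in running_means(poles):
    print(age, round(paleolatitude(46.79, -92.10, mlat, mlon), 1))
\end{verbatim}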
While data from Scotland and Svalbard are associated with Laurentia, the Scotland poles are poorly constrained in time and the Svalbard rotation to Laurentia is uncertain. These poles are not utilized in the simple Laurentia model, which means that the model as shown does not include oscillatory true polar wander interpreted to have occurred between ca. 810 and 790 Ma based on data from Svalbard \citep{Maloof2006a}. The model of \cite{Li2013a} shown in Figure \ref{fig:Laurentia_paleolatitude} does seek to partially incorporate this true polar wander while also incorporating an interpretation of the paleomagnetic pole record from South China (albeit one that needs to be revisited given updates to the paleomagnetic and geochronologic record from South China; \citealp{Zhang2021a}). One downside of a running mean approach is that it pulls the mean to regions of high data density. As was shown in \cite{Swanson-Hysell2019a}, this behavior can reduce motion along an apparent polar wander path. As a result, for the portion of the reconstruction during the interval of time ca. 1110 to 1070 Ma where there is high data density from the Midcontinent Rift, I use the time-calibrated path from \cite{Swanson-Hysell2019a}. Paleogeographic snapshots for the past position of Laurentia reconstructed using this synthesis of the paleomagnetic poles are shown in Figure \ref{fig:Laurentia_reconstructions}. These reconstructions use the tectonic elements as defined by \citet{Whitmeyer2007a} with these elements being progressively added associated with Laurentia's accretionary growth. As a reminder to the reader, paleomagnetic poles provide constraints on the paleolatitude of a continental block as well as its orientation (which way was north relative to the block). While they provide constraints in this regard, they do not provide constraints in and of themselves for the longitudinal position of the block. Other approaches to obtain paleolongitude utilize geodynamic hypotheses such as assuming that large low shear velocity provinces have been stable plume-generating zones in the lower mantle to which plumes can be reconstructed \citep{Torsvik2014a} or that significant pole motion in certain time intervals is associated with true polar wander axes with specified paleolongitudes that switch through time in conjunction with hypothesized supercontinent cyclicity \citep{Mitchell2012a}. In Figure \ref{fig:Laurentia_reconstructions}, the map projections are centered on the longitudinal position of Duluth with the orientation and paleolatitude of Laurentia being constrained by the paleomagnetic pole compilation as synthesized in the simple pole interpolation model (Fig. \ref{fig:Laurentia_paleolatitude}). \begin{figure*}[!h] \centering \includegraphics[width=\textwidth]{../Figures/Fig6_Laurentia_reconstructions.pdf} \caption{\textbf{Paleogeographic reconstructions of Laurentia at time intervals through the Proterozoic.} These reconstructions use the simple Laurentia pole interpolation model that is shown in Figure \ref{fig:Laurentia_paleolatitude} to reconstruct the tectonic elements of \cite{Whitmeyer2007a} shown in Figure \ref{fig:Laurentia_map}. Modern coastlines are maintained in these polygons so that the rotated orientations can be interpreted by the reader in comparison to Figure \ref{fig:Laurentia_map}. Paleomagnetic poles within 25 million years of each reconstruction time are plotted. 
All reconstructions have poles within such a time frame that provide constraints with the exception of the 850 Ma reconstruction which is shown faintly given this relative uncertainty in Laurentia's position. The colors follow Figure \ref{fig:Laurentia_map} where the light grey represents Archean provinces, dark grey represents Paleoproterozoic collisional orogens, and pink/orange represents Paleoproterozoic/Mesoproterozoic accretionary orogens and granitoid intrusions.} \label{fig:Laurentia_reconstructions} \end{figure*} \subsection{Comparing paleogeographic models to the paleomagnetic compilation} Developing comprehensive global continuous paleogeographic models is a major challenge given the need to integrate and satisfy diverse geological and paleomagnetic data. Continually improving constraints related to tectonic setting from improved geologic and geochronologic data need to be carefully integrated with the database of paleomagnetic poles. Paleomagnetic pole compilations themselves are evolving with better data and improved geochronology \citep{Evans2021a}. Efforts such as this volume are therefore essential to present the state-of-the-art in terms of existing constraints that can be used to evaluate current models and set the stage for future progress in Precambrian paleogeography. There is an overall lack of models in the literature for the Proterozoic with published continuous rotation parameters that can be compared to the compilation of paleomagnetic poles presented herein. The approach in the community for many years has been to publish models as snapshots at time intervals presented in figures without publishing continuous rotation parameters, although some studies have published the Euler rotations associated with specified times. With the further adoption of software tools such as GPlates, there has been significant progress in the publication of continuous paleogeographic models constrained by paleomagnetic poles through the Phanerozoic (540 Ma to present; e.g. \citealp{Torsvik2017a}). An exception to the paucity of published continuous paleogeographic models for the Precambrian is the Neoproterozoic model of \cite{Merdith2017b} which is shown in comparison to the constraints for Laurentia in Figure \ref{fig:Laurentia_paleolatitude}. The extent to which the implied position of Laurentia in \cite{Merdith2017b} is consistent with the compiled paleomagnetic constraints can be visualized in Figure \ref{fig:Laurentia_paleolatitude}. As noted above, the development of such models is challenging and researchers need to simultaneously balance many constraints. The focus here is on the extent to which this model satisfies the available paleomagnetic poles for Laurentia. The model does not honor the Grenville loop (e.g. Laurentia going to moderately high southerly latitudes ca. 1000 Ma), which is a striking departure from the paleomagnetic record and standard paleogeographic models. Additionally, the implemented plate motion of Laurentia in the \cite{Merdith2017b} model strays from the younger poles of the Keweenawan Track and does not honor the Franklin LIP pole \citep{Denyszyn2009b} despite its `A' Nordic rating (Fig. \ref{fig:Laurentia_paleolatitude}). The Franklin pole is taken to be a key constraint at the Tonian/Cryogenian boundary that provides evidence both for the supercontinent Rodinia being equatorial and for ice sheets associated with the Sturtian glaciation having extended to equatorial latitudes \citep{Macdonald2010a}. 
There are more published models that show snapshots and publish rotation parameters associated with particular time intervals, such as the Rodinia model of \cite{Li2008a} and the Mesoproterozoic model of \cite{Pisarevsky2014b}, without providing parameters for a continuous model. The positions for Laurentia implied by the Euler poles given for the model snapshots of these studies are shown in Figure \ref{fig:Laurentia_paleolatitude} and can be compared to the compiled record. The figure also shows the continuous implied position of Laurentia from the late Mesoproterozoic into the early Paleozoic from the model of \citet{Li2013a} (although the model parameters were not published with that study, they have now been made available by the authors). This paleogeographic model implements large oscillations, including ones in the Ediacaran, that result from an interpretation that steep inclinations are the result of rapid motion of Laurentia from low to high latitudes and back again. The rates of Laurentia's motion associated with these models are also summarized in Figure \ref{fig:Laurentia_paleolatitude}. Over much of the record, Laurentia's pole positions can be satisfied through motion of the continent at rates of $<$10 cm/yr, with intervals of more rapid motion such as during the Keweenawan Track and in the Paleozoic (Fig. \ref{fig:Laurentia_paleolatitude}). \subsection{Paleoenvironmental constraints on paleolatitude} Sedimentary rocks whose deposition is associated with specific climatic conditions have the potential to provide insight into paleolatitude and therefore can be a tool for the evaluation of paleogeographic models. Relevant deposits in the Proterozoic include glacial deposits laid down by continental ice sheets, carbonates precipitated in carbonate-saturated (and thereby likely to be warm) marine environments, and evaporites formed where evaporation exceeded precipitation. Interpretations of paleolatitude based on glacial deposits during the Proterozoic are complicated by the evidence for multiple global and low-latitude glacial intervals associated with the Snowball Earth climate state \citep{Evans2003b}. Evaporite deposits are particularly compelling as paleolatitude constraints given that their deposition is interpreted to be associated with arid regions resulting from large-scale Hadley cell downwelling \citep{Evans2006a}. While moisture in the subtropics can change along with Earth's climate, the overall pattern of $\sim$10-35\textdegree\ of latitude being where annual mean evaporation exceeds precipitation persists \citep{Burls2017a}. Using a compilation of paired paleomagnetically-determined paleolatitude constraints and evaporite occurrence, \cite{Evans2006a} demonstrated that over the past 2 billion years large-scale evaporite deposition was consistently located in subtropical latitudes that correspond to the latitudes of modern arid zones. This finding is consistent both with the geocentric axial dipole hypothesis used to calculate paleolatitude and the long-term stability of large-scale convection circulation cells. \begin{figure*} \centering \includegraphics[width=7 in]{../Figures/Fig7_Laurentia_evaporite_figure.pdf} \caption{\textbf{Paleolatitude of Laurentia evaporites.} Left panel: The paleolatitude of evaporite deposits as reconstructed by the simple Laurentia model shown in Fig. \ref{fig:Laurentia_paleolatitude} combined with the Phanerozoic model of \cite{Torsvik2017a} and as reconstructed by the model of \cite{Li2013a} for the Neoproterozoic.
Proterozoic evaporite deposits in this panel are discussed in the text, while Phanerozoic ones are taken from the compilation of \cite{Evans2006a}. The evaporite lines extend from the maximum to minimum age constraints, with points at the preferred depositional age. The evaporite paleolatitude points are labeled with numbers that correspond to the numbers on the present-day location map in the right panel. These numbers correspond to: 1 -- Altyn Formation (Belt-Purcell Supergroup); 2 -- Wallace/Helena Formations (Belt-Purcell Supergroup); 3 -- Iqqittuq Formation (Bylot Supergroup); 4 -- Ten Stone Formation (Mackenzie Mountains Supergroup); 5 -- Minto Inlet Formation (Shaler Supergroup); 6 -- Redstone River Formation (Mackenzie Mountains Supergroup); 7 -- Kilian Formation (Shaler Supergroup); 8 -- Silurian Michigan Basin; 9 -- Devonian Western Canada; 10 -- Carboniferous Canadian Maritime; 11 -- Carboniferous Sverdrup; 12 -- Permian Midcontinental USA; 13 -- Jurassic Gulf of Mexico.} \label{fig:Laurentia_evaporites} \end{figure*} Proterozoic evaporite deposits are documented within the following units that were deposited following the amalgamation of Laurentia (Fig. \ref{fig:Laurentia_evaporites}): \begin{itemize} %\item The Stark Formation of the Slave Province contains displacive halite pseudomorphs throughout as well as massive breccias interpreted to be evaporite-collapse breccias \citep{Pope2003a}. The formation is intruded by 1865 $\pm$ 15 Compton laccoliths and is interpreted to have been deposited \item The Altyn Formation of the Belt-Purcell Supergroup contains pseudomorphs after gypsum crystals and anhydrite within shallow-water carbonates with relict gypsum and anhydrite preserved within secondary silica \citep{White1984a}. The correlative Prichard Formation is intruded by 1468.8 $\pm$ 2.5 Ma sills \citep{Sears1998a}. Halite molds and casts are present within mudstones of the overlying Grinnell Formation \citep{Pratt2019a}. Higher in the Belt-Purcell Supergroup stratigraphy, within the Wallace Formation, there is stratiform scapolite, a metamorphic mineral interpreted to have formed from a halite precursor \citep{Hietanen1967a}. There are also halite and gypsum pseudomorphs within carbonate mudstones of the correlative to underlying Helena Formation \citep{Pratt2001a,Winston2007a}. These deposits are older than the 1443 $\pm$ 7 Ma Purcell lavas and further constrained in age by a tuff with a U-Pb date of 1454 $\pm$ 9 Ma within the Helena Formation \citep{Evans2000c}. \item The Mesoproterozoic Iqqittuq Formation of the Borden basin (formerly part of the Society Cliffs Formation) contains bedded gypsum deposits (massive and laminated with beds that reach a thickness of 2.5 meters) and shale with halite casts \citep{Kah2001a}. These deposits are bracketed between Re-Os dates of 1048 $\pm$ 12 Ma for an underlying shale and 1046 $\pm$ 16 Ma for an overlying shale \citep{Gibson2018a}. \item The Tonian Ten Stone Formation of the Mackenzie Mountains Supergroup (formerly known as the Gypsum Formation) contains a $\sim$500-meter thick succession dominated by gypsum with minor anhydrite interpreted to have been deposited in a deep-water (below wave base) restricted marine basin \citep{Turner2016a}. These thick bedded sulfate deposits are older than cross-cutting 775.1 $\pm$ 0.5 Ma sills of the Gunbarrel large igneous province (U-Pb date from \citealp{Milton2017a}) and younger than ca. 1005 Ma detrital zircons \citep{Turner2016a}.
The overlying Ram Head Formation has been correlated with the Bitter Springs Stage, which is constrained between 811.5 $\pm$ 0.3 Ma and 788.7 $\pm$ 0.2 Ma \citep{Macdonald2010a, Swanson-Hysell2015a}, suggesting that the evaporites are ca. 820 Ma \citep{Turner2016a}. These deposits are hypothesized to be correlative with sulfate evaporites within the Minto Inlet Formation of the Shaler Supergroup \citep{Jones2010a, Turner2016a}. \item The Tonian Kilian Formation of the Shaler Supergroup contains nodules of gypsum and anhydrite interpreted to have been deposited in an intertidal to supratidal evaporitic mudflat environment \citep{Prince2014a}. The Kilian Formation is interpreted to post-date the Bitter Springs Stage and be correlative with the Redstone River Formation of the Coates Lake Group in the Mackenzie Mountains, which contains bedded gypsum as well as gypsum-bearing siltstone \citep{Jefferson1989a, Jones2010a}. The Redstone River Formation is younger than the 777.7 $\pm$ 2.5 Ma volcanics and older than a 732.2 $\pm$ 4.7 Ma Re-Os date from the overlying Coppercap Formation \citep{Rooney2014a}. These units are also interpreted to be correlative with the Callison Lake Formation, which contains pseudomorphs after gypsum and is constrained to have been deposited between Re-Os dates of 752.7 $\pm$ 5.5 Ma and 739.9 $\pm$ 6.5 Ma \citep{Strauss2015a}. \end{itemize} In Figure \ref{fig:Laurentia_evaporites}, the paleolatitudes of these evaporite deposits are reconstructed using the simple Laurentia model developed in this work as well as with the \cite{Li2013a} model for the late Mesoproterozoic to Neoproterozoic. The positions of major Phanerozoic evaporite basins of North America are also shown, with their paleolatitudes reconstructed with the paleogeographic model of \cite{Torsvik2017a}. These paleogeographic models reconstruct evaporite deposition to have been within 30\textdegree\ of the equator in both the Phanerozoic and Proterozoic. In the Tonian Period, evaporite deposition may have occurred equatorward of 10\textdegree\ (Fig. \ref{fig:Laurentia_evaporites}). In interpreting these data, it is important to note that there is high evaporation in the subtropics and the tropics. Within the tropical rain belt (0 to $\sim$10\textdegree\ latitude) these high evaporation rates are typically overwhelmed by precipitation such that global zonal mean precipitation exceeds evaporation within $\sim$10\textdegree\ of the equator (within $\sim$8\textdegree\ of the equator over land). Evaporation typically exceeds precipitation from those latitudes ($\sim$10 to 15\textdegree) towards higher ones, with evaporation minus precipitation being at a maximum at $\sim$20-25\textdegree\ \citep{Park2021a}. However, continental interiors near the equator can also be arid due to regional precipitation patterns, leading to the formation of evaporites. For example, Lake Magadi in Kenya at a latitude of 1.9\textdegree S is a saline lake where thick bedded evaporites have accumulated \citep{Eugster1980a}. Caution is therefore needed when interpreting paleolatitude from evaporites in terrestrial and intracratonic settings, given that they could occur in both tropical and subtropical latitudes. Therefore, low-latitude evaporites ca. 750 Ma could reflect increased aridity in the tropical interior of the Rodinia supercontinent. Overall, the paleomagnetic data as synthesized in the paleogeographic models are in agreement with the Laurentia evaporite record from the Mesoproterozoic to the Cenozoic (Fig.
\ref{fig:Laurentia_evaporites}). \subsection{Evaluating Laurentia's Proterozoic paleogeographic neighbors} Many different paleogeographic connections between Laurentia and other Proterozoic cratons have been proposed and utilized in paleogeographic models both prior to and following the amalgamation of Laurentia's constituent Archean provinces. This section is not comprehensive in terms of proposed connections, but rather I seek to highlight and contextualize some of the more prominent and/or well-supported models. These connections are often discussed in the context of hypothesized supercontinents, given the interpretation that Laurentia was an important constituent of Nuna, following ca. 1.85 Ga Trans-Hudson orogenesis, and of Rodinia, following ca. 1.05 Ga Grenvillian orogenesis. \subsubsection{Paleogeographic connections prior to initial Laurentia assembly} Within this volume, \cite{Salminen2021a} describe proposed Neoarchean to Paleoproterozoic groupings of Archean provinces (``supercratons'' in the terminology of \citealp{Bleeker2003a}) prior to Laurentia assembly. In particular, they discuss the hypothesis of Superia (named after the Superior province) wherein the Superior province of Laurentia is central to a group of Archean lithospheric blocks \citep{Bleeker2006a}, including the Kola and Karelia provinces of Baltica and the Wyoming province of Laurentia, that broke up prior to 2.0 Ga. This hypothesis was based on proposed shared sources of mafic intrusive rocks emplaced from 2.45 to 2.11 Ga, with Kola and Karelia positioned along the southeastern margin of the Superior Province \citep{Davey2020a}. \cite{Salminen2021a} argue that the originally proposed Superia fit is not consistent with Baltica paleomagnetic poles. Instead, they propose a Superia (II) fit where there is a connection between the blocks between 2.68 and 2.05 Ga with some internal rotations. Another proposed connection evaluated by \cite{Salminen2021a}, also based on the correlation of mafic dikes, is one between the Slave province of Laurentia and the Dharwar province of India \citep{French2010a} as part of Sclavia (named after the Slave province; \citealp{Bleeker2003a}). This Slave-Dharwar connection is found to be consistent with ca. 2.23 Ga and ca. 2.19 Ga pairs of paleomagnetic poles if modified into a Sclavia (II) orientation \citep{Salminen2021a}. These blocks would have had a distinct drift history from Superia for most of the Paleoproterozoic \citep{Salminen2021a}. \begin{figure*} \centering \includegraphics[width=\textwidth]{../Figures/Fig8_Rodinia_Reconstruction.pdf} \caption{\textbf{Paleogeographic reconstructions of Laurentia and other select conjugate Proterozoic continents leading up to Rodinia assembly in the late Mesoproterozoic and to its initial break-up in the Neoproterozoic}. The hypothesized connection between Siberia and Laurentia is implemented following \cite{Evans2016b}, who interpret this relationship as persistent from 1.7 to 0.7 Ga. The reconstruction of North China to Laurentia follows \cite{Ding2021a}. The omission of South China from Rodinia follows \cite{Park2021b}. The reconstruction implements Kalahari and Amazonia cratons as conjugates with Laurentia in the Grenvillian orogeny \citep{Hoffman1991a}. The Australia-East Antarctica relationship with Laurentia follows \cite{Swanson-Hysell2012a} and is similar to the Neoproterozoic reconstruction between the continents of \cite{Li2011a}, implementing their rotation of North Australia relative to South and West Australia.
This configuration back to ca. 1140 Ma is consistent with a comparison between the Laurentia poles of that age and the coeval Mt. Isa dikes pole from North Australia and with the Keweenawan Track if the Nonesuch and Freda poles are interpreted to be ca. 1080 Ma (consistent with chronostratigraphic constraints; \citealp{Slotznick2018b}) with further motion by ca. 1070 Ma. The time slices show the rapid motion of Laurentia implied by the paleomagnetic poles, which is consistent with the timing of collisional orogenesis associated with the Grenvillian orogeny. The assembled Rodinia persisted until initial rifting ca. 775 Ma, with episodic rifting continuing until ca. 530 Ma.} \label{fig:Grenville_reconstructions} \end{figure*} \subsubsection{Amazonia} In the central and southern Appalachians there are inliers of rocks that were metamorphosed during the Ottawan phase of the Grenvillian orogeny \citep{McLelland2013a}. On the basis of whole-rock Pb-isotope data, \cite{Loewy2003a} and \cite{Fisher2010a} proposed that these inliers are fragments of lithosphere of another continent that were transferred to Laurentia during the Grenvillian orogeny and left behind when the Iapetus Ocean formed. In particular, \cite{Fisher2010a} suggested that the Suns\'as orogen of Amazonia is the best match for southern and central Appalachian inliers. This positioning is consistent with a paleogeographic model wherein Amazonia is a major portion of the conjugate continental lithosphere that collided with Laurentia during Rodinia assembly (Fig. \ref{fig:Grenville_reconstructions}; \citealp{Hoffman1991a, Evans2013b, Cawood2017a}). While the lack of ca. 1100 to 1000 Ma poles from Amazonia precludes a robust paleomagnetic test, this scenario is consistent with the available late Mesoproterozoic poles from Amazonia (the ca. 1200 Ma Nova Floresta pole and ca. 1150 Ma Fortuna Formation pole; \citealp{DAgrella-Filho2021a}) as shown in \cite{Evans2013b}. In this paleogeographic scenario, the basement inliers of the Appalachian Orogen in the Blue Ridge region are interpreted to be the leading edge of Amazonia, with initial collision ca. 1080 Ma initiating the Ottawan phase of the Grenvillian orogeny (Fig. \ref{fig:tectonic_history}). Subsequent separation of Amazonia in the Neoproterozoic would have led to the formation of the Iapetus Ocean as Rodinia rifted apart. Departure of Amazonia potentially occurred as early as ca. 700 Ma in the Paleo-Iapetus Ocean model of \citet{Robert2020a} in conjunction with rifting in eastern Laurentia. A significantly later separation ca. 560 Ma would be predicted if Amazonia were further north (present-day coordinates) along Laurentia's margin, given the lack of evidence for rifting until after ca. 620 Ma north of the New York promontory \citep{Allen2010a}. \subsubsection{Australia and East Antarctica} It has long been argued that there are shared aspects of the geologic history between Australia, East Antarctica and Laurentia \citep{Moores1991a}. The extent of Antarctic lithosphere that was conjoined with Australia prior to the assembly of Gondwana is uncertain, but there are strong connections between the Gawler province of the South Australia craton and the Terre Ad\'elie province of Antarctica, which is commonly interpreted to extend to the Miller ranges of the Trans-Antarctic Mountains as the Mawson craton \citep{Payne2009b}. Separation between these Antarctic provinces and Australia was largely accomplished in the Cenozoic Era with the opening of the Tasmanian seaway.
Correlations with Laurentia have led to a number of distinct proposed reconstructions of Australia + East Antarctica along the western margin of Laurentia at different times in the Proterozoic. Metamorphism ca. 1.6 Ga on the eastern margin of the North Australia craton associated with the Issan-Jana orogeny has been interpreted to be the result of collisional orogenesis with the western Laurentia margin \citep{Nordsvan2018a, Pourteau2018a, Gibson2020a}. That Laurentia was the conjugate continent for this orogeny is argued to be supported by detrital zircon date spectra from the Georgetown Inlier of the eastern North Australia craton that have similarities with possible Laurentia sources \citep{Nordsvan2018a}. In the model of \cite{Nordsvan2018a}, the inlier is a continental ribbon that rifted from Laurentia ca. 1.68 Ga and was then caught up in ca. 1.60 Ga collision between the North Australia craton and northwestern Laurentia, although others interpret it to be part of an extended Australian margin \citep{Gibson2020a}. The ca. 1.60 Ga Racklan-Forward orogeny in northwestern Laurentia records arc-continent collision that could have been followed by continent-continent collision between Laurentia and Australia \citep{Thorkelson2005a, Furlanetto2013a}. This timing of the conjoinment of the cratons would put Australia in a position that honors subsequent Mesoproterozoic correlations. Detrital zircon dates of ca. 1.61 to 1.50 Ga in the ca. 1.45 Ga lower Belt-Purcell Supergroup of Laurentia are interpreted to have been sourced from the North Australia craton \citep{Jones2015a}. The U-Pb-Hf signatures of detrital zircons from ca. 1.45 to 1.30 Ga sedimentary rocks of Tasmania have also been interpreted to indicate a Laurentia source, consistent with such a paleogeographic connection \citep{Mulder2015a}. Additionally, ca. 1.44 Ga granites in East Antarctica (recovered as glacial clasts and inferred from detrital zircons) have the same age and isotopic signatures as the distinctive `A-type' granites in Laurentia \citep{Goodge2008a}. The interpretation of \cite{Goodge2008a, Goodge2017a} is that there is an extension of the southwestern Laurentia magmatic belt into Antarctica (that is currently overlain by the East Antarctic ice sheet). The correlation of the eastern North Australia craton with northwestern Laurentia and that of \textbf{s}outh\textbf{w}est Laurentia with \textbf{E}ast \textbf{A}n\textbf{t}arctica leads to the SWEAT fit proposed to have initiated in the Paleoproterozoic by \cite{Moores1991a}. Given that this tight-fit configuration likely was not sustained into the Neoproterozoic, as discussed below, researchers have taken to referring to this configuration as the ``proto-SWEAT'' reconstruction \citep{Payne2009b, Kirscher2020a}. A comparison between paleomagnetic data from the ca. 1.32 Ga Derim Derim sills of the North Australia craton and the ca. 1.31 Ga Nain anorthosite of Laurentia is consistent with this SWEAT configuration, leading to the interpretation that it persisted from 1.6 Ga to at least 1.3 Ga \citep{Kirscher2020a}. If the North Australian craton was continuous with South Australia, this interpretation would have the ca. 1.47 to 1.40 Ga Belt-Purcell basin be an intracontinental rift and would make it more difficult to explain the ca. 1.35 Ga East Kootenay orogeny on that margin. A paleomagnetic pole from the ca. 1.21 Ga Gnowangerup-Fraser dike swarm is argued to be inconsistent with a conjoined relationship between Australia and Laurentia ca.
1.2 Ga as it implies a high latitude for Australia and a distinct shape of the apparent polar wander path \citep{Pisarevsky2014a}. There is not a coeval pole from Laurentia for comparison ca. 1.21 Ga, and data do indicate poleward motion for Laurentia between the ca. 1.24 and 1.18 Ga constraints. Nevertheless, comparisons between latest Mesoproterozoic (ca. 1070 Ma) and Neoproterozoic (ca. 750 Ma) paleomagnetic poles from Australia and Laurentia are inconsistent with a configuration where Australia+East Antarctica are both tightly positioned against western Laurentia as they are in the proto-SWEAT fit. Rather, while these pole comparisons could be consistent with East Antarctica against southwestern Laurentia, they require the eastern Australian margin to be rotated further away from Laurentia, as in Figure \ref{fig:Grenville_reconstructions}. There is a similarity between Australia's paleomagnetic pole database and that of Laurentia in that both pole paths occupy a similar position between their ca. 1070 Ma and ca. 770 Ma poles \citep{Swanson-Hysell2012a}. This similarity could support interpretations of a unified Rodinia containing both Australia and Laurentia throughout that time interval \citep{Swanson-Hysell2012a}, with subsequent break-up ca. 650 Ma \citep{Li2011a}. \subsubsection{Baltica} Based on correlation of Archean provinces and Paleoproterozoic orogenic belts, \cite{Gower1990a} reconstructed Baltica to Laurentia in a position known as the NENA (northern Europe and North America) configuration. This configuration had originally been proposed by \citet{Patchett1978a} and \citet{Piper1980a} largely on the basis of paleomagnetic pole comparisons. This connection proposes a tight fit of present-day northern Norway and Russia's Kola Peninsula against eastern Greenland \citep{Gower1990a, Salminen2021b}. In this position, Baltica and Laurentia are hypothesized to share a long-lived accretionary margin wherein the Gothian orogen of Baltica (where accretionary orogenesis was active ca. 1.66 to 1.52 Ga; \citealp{Bergstrom2020a}) is a continuation of the Mazatzal-Labradorian orogenic belts of Laurentia \citep{Karlstrom2001a}. A conjoined Baltica and Laurentia with a shared active margin features as a major component of Paleoproterozoic to Mesoproterozoic paleogeographic reconstructions \citep{Evans2011a, Zhang2012a, Elming2021a}. Rotating Baltica into the NENA connection position results in matched paleomagnetic pole pairs between ca. 1.78 and 1.26 Ga \citep{Buchan2000a, Evans2008a, Swanson-Hysell2021b} that could be extended to ca. 1.12 Ga if a virtual geomagnetic pole (that is, a pole position calculated from paleomagnetic data of a single cooling unit) from the Salla dike of northeastern Finland is considered to be a representative paleomagnetic pole for Baltica \citep{Salminen2009b}. These data constrain the NENA connection to have been maintained until at least 1.26 Ga and perhaps until 1.12 Ga \citep{Salminen2021b}. Many paleogeographic models consider there to still have been close proximity between Baltica and Laurentia from the latest Mesoproterozoic into the Neoproterozoic.
This continued connection has been hypothesized based on correlation of the Grenvillian orogeny with the Sveconorwegian orogeny of southwestern Baltica \citep{Gower1990b} as well as on proposed similarities between the apparent polar wander paths of Laurentia's Grenville loop and Baltica's Sveconorwegian loop, which led reconstructions to be based on the alignment of these paths and the resulting position of Baltica relative to Laurentia \citep{Piper1980a,Pisarevsky2003a}. However, as geochronology has improved (e.g. \citealp{Gong2018b}), the ages of poles in the Sveconorwegian loop are now constrained to be younger than the Grenville loop poles, falling in a gap in Laurentia's pole record \citep{Evans2015a,Fairchild2017a}, which makes it more challenging to test proposed post-NENA configurations between the cratons. In terms of the orogenic timing, the main phase of the Sveconorwegian orogeny from ca. 1.05 to 0.98 Ga corresponds temporally with the Grenvillian orogeny \citep{Stephens2020a}. There is currently an active debate in the literature as to whether the Sveconorwegian orogen is the result of collisional or accretionary orogenesis \citep{Stephens2020a}. \cite{Slagstad2019a} favor an accretionary orogeny and, by contrasting this setting with the collisional orogenesis of the Grenvillian orogen, argue that there should not be a link between Laurentia and Baltica ca. 1.0 Ga. However, in a recent critical review of the constraints, the high pressure nature of metamorphism and the cratonward propagation of orogenesis (in contrast to older accretionary orogenesis) led \cite{Stephens2020a} to favor the interpretation that the Sveconorwegian orogen is a record of prolonged continent-continent collision initiating ca. 1.06 Ga. In this model, Baltica could have been on the same plate as Laurentia during the rapid late Mesoproterozoic motion leading up to collisional Grenvillian orogenesis and the associated assembly of Rodinia (Fig. \ref{fig:Grenville_reconstructions}). Early Tonian intrusive granitoids and extrusive calc-alkaline volcanics in East Greenland and related Arctic terranes including northeast Svalbard, with ages between 975 and 915 Ma \citep{McClelland2019a}, are interpreted to have formed within a magmatic arc or in a syn-collisional setting \citep{Johansson1999a}. This magmatic activity is hypothesized to be the result of subduction and accretionary orogenesis termed the Valhalla orogeny \citep{Cawood2010a}. This geological evidence for an active margin in East Greenland in the earliest Neoproterozoic has motivated a tectonic model wherein Baltica rifted off East Greenland in the late Mesoproterozoic while staying proximal to Laurentia via a clockwise rotation that would have severed the NENA connection \citep{Cawood2010a}. Such a clockwise rotation of Baltica relative to Laurentia from the earlier Paleoproterozoic-Mesoproterozoic NENA configuration with a shared Labradorian to Gothian orogenic belt to one where the Sveconorwegian orogen is close to the Grenvillian orogen in Labrador (northeast Canada) is implemented in many paleogeographic models (e.g. \citealp{Evans2009a} and as shown in Fig. \ref{fig:Grenville_reconstructions}). This position relative to Laurentia results in a joint Laurentia-Baltica APWP with large oscillations implied by Baltica poles. These oscillations would imply that following the Grenville loop, Laurentia moved northward across the equator and then back south to a similar Grenville loop position prior to the ca.
780 Ma Laurentia poles \citep{Evans2015a,Fairchild2017a}. A challenge with this model, in terms of regional tectonics, is that it is unclear what conjugate lithosphere rifted from East Greenland, leading to the thick sedimentary succession interpreted to have been deposited on a thermally-subsiding passive margin from ca. 850 Ma into the Cryogenian (Fig. \ref{fig:tectonic_history}; \citealp{Maloof2006a}). \cite{Malone2014a} interpreted this Neoproterozoic basin development along the East Greenland margin to have been the result of extension in a back-arc setting. Magmatism ca. 615 Ma in both Laurentia and Baltica (in the relative configuration shown in Figure \ref{fig:Grenville_reconstructions}) is interpreted to be associated with the plume-related Central Iapetus Magmatic Province (CIMP; \citealp{Tegner2019a}). This ca. 615 to 590 Ma magmatism is hypothesized to have initiated the breakup between Baltica and Laurentia that led to the opening of the Iapetus Ocean (Figs. \ref{fig:tectonic_history} and \ref{fig:Grenville_reconstructions}; \citealp{Cawood2001a, Tegner2019a}). The cratons would once again be conjoined ca. 430 Ma during their collision that led to the Caledonian orogeny, resulting in a continent that is referred to as Laurussia \citep{Torsvik2017a}. \subsubsection{Kalahari} High-quality paleomagnetic poles constrain the coeval ca. 1109 Ma Umkondo large igneous province of the Kalahari craton and the early flood basalts of the Midcontinent Rift of Laurentia to have been separated by more than 50\textdegree\ of latitude at the time they were emplaced. These poles reconstruct the craton margins as having been separated by more than 30\textdegree\ of latitude \citep{Swanson-Hysell2015a}. These data make it difficult to envision a shared origin of magmatism and pose a challenge to approaches that seek to reconstruct paleogeography on the basis of the ages of LIPs alone. However, with the subsequent rapid motion of Laurentia to low latitudes (Figs. \ref{fig:Laurentia_paleolatitude} and \ref{fig:Grenville_reconstructions}), it is possible that the Kalahari was a conjugate craton to the (south)eastern margin of Laurentia during the time of the Grenvillian orogeny. This conjugate relationship, based on the interpretation that the Namaqua-Natal belt in southern Kalahari records late Mesoproterozoic collisional orogenesis, was proposed in \cite{Hoffman1991a} and is implemented in most reconstructions of Rodinia (e.g. \citealp{Li2008a}). Whether the Grenvillian margin of Laurentia and the Namaqua-Natal belt of Kalahari faced one another and could have been conjugates can be evaluated by paired paleomagnetic and geochronologic data sets from the Umkondo LIP and the Midcontinent Rift. The preferred interpretation of \cite{Swanson-Hysell2015a} and \cite{Kasbohm2015a} is that sites with northerly declinations from the Umkondo Province correspond to the reversed polarity directions from the early magmatic stage in the Midcontinent Rift (e.g. \citealp{Swanson-Hysell2014a}) such that the Namaqua-Natal margin faced the Grenvillian margin of Laurentia (Fig. \ref{fig:Grenville_reconstructions}). The late Mesoproterozoic apparent polar wander paths for Laurentia and Kalahari are consistent with them becoming conjoined as in Figure \ref{fig:Grenville_reconstructions} \citep{Swanson-Hysell2015a}. The record of the Namaqua belt wherein there is granitoid plutonism and arc accretion up to ca. 1090 Ma followed by peak granulite metamorphism ca.
1065 to 1045 Ma \citep{Diener2013a, Spencer2015a} is consistent with a scenario wherein Kalahari was on the upper plate that collided with Laurentia at the time of the Ottawan phase of Grenvillian orogenesis following subduction of oceanic lithosphere associated with an intervening ocean. If they indeed became conjoined in Rodinia as in Figure \ref{fig:Grenville_reconstructions}, the separation of Kalahari from Laurentia may have initiated ca. 795 Ma, heralded by the emplacement of the Gannakouriep diabase dike swarm \citep{Rioux2010a, DeKock2021a} and volcanic rocks in southeastern Laurentia (e.g., the ca. 750 Ma Mount Rogers volcanics and associated rift-related sediments; \citealp{Aleinikoff1995a, MacLennan2020a}). A ca. 750 Ma volcano-sedimentary sequence in northwest Kalahari is interpreted to be rift-related \citep{Borg2003a} and could be correlative with the Mount Rogers Formation. This late Tonian timing for separation could be consistent with a position along the southeastern Laurentia margin, although subsequent Ediacaran rifting and Cambrian thermal subsidence in the region would need to be attributed to the rifting of microcontinents such as Arequipa and Cuyania \citep{Escayola2011a, Martin2019a}.
\subsubsection{North China} The latest Mesoproterozoic to earliest Neoproterozoic pole path of the North China craton includes a swath of paleomagnetic poles with a similar arc length to the Keweenawan Track to Grenville Loop segment of Laurentia's APWP \citep{Zhao2019a, Ding2021a, Zhang2021a}. While the chronostratigraphic age constraints on these North China poles are much looser than those from Laurentia, \cite{Zhao2019a} proposed that the North China poles can be aligned with the Keweenawan Track to reconstruct North China as being conjoined to the northwestern margin of Laurentia from prior to ca. 1110 Ma into the early Neoproterozoic. In this model, North China would have been at polar latitudes ca. 1110 Ma and moved rapidly with Laurentia as it transited towards the equator. A challenge with interpreting these North China poles as primary is that they include data from limestones that would reconstruct them to have been deposited at very high latitude ($>$80\textdegree\ for the limestones of the lower member of the Nanfen Formation). Such high-latitude limestones would require non-uniformitarian depositional conditions for carbonate precipitation, which is more prevalent at low latitudes due to the temperature dependence of carbonate saturation. As additional support for a North China--Laurentia connection, \cite{Zhao2019a} pointed to similarities in the detrital zircon age spectra between early Neoproterozoic sediments in NW Laurentia and North China basins. In particular, sediment transport from Laurentia could provide a source for ca. 1.18 Ga zircons (from the Shawinigan orogen) and ca. 1.08 Ga zircons (from the Grenville orogen). If North China was in this position, the timing of its arrival adjacent to Laurentia is unclear. The ca. 1220 Ma dikes pole of the North China craton is not coincident with the 1237 $\pm$ 5 Ma Sudbury dikes pole in this reconstructed position, leading \cite{Zhao2019a} and \cite{Zhang2021a} to suggest that North China arrived on the Laurentian margin between ca. 1220 and 1110 Ma, although they note a lack of evidence for North China orogenesis at this time.
Laurentia's pole path has a gap from the 1237 $\pm$ 5 Ma Sudbury pole to the 1184 $\pm$ 5 Ma Narssaq Gabbro and Hvidaal dike poles, during which it ascends to the Logan Loop (Figs. \ref{fig:Laurentia_poles} and \ref{fig:Laurentia_paleolatitude}). In its reconstructed position, the ca. 1220 Ma North China pole is close to this gap such that it is possible that the APWPs are consistent and that North China arrived on the northwestern Laurentia margin prior to 1220 Ma. In terms of departure from this position, one possibility is that it is associated with latest Neoproterozoic extension in northwestern Laurentia.
\subsubsection{Siberia} Southern Siberia and northern Laurentia have been proposed to be connected on the basis of a similar history of Paleoproterozoic collision \citep{Rainbird1998a}, matches in the age of mafic intrusive rocks interpreted as shared large igneous provinces from the Paleoproterozoic to the Neoproterozoic \citep{Ernst2016a}, and comparisons of paleomagnetic poles \citep{Evans2011a, Evans2016b}. U-Pb dates from mafic intrusive rocks are interpreted by \cite{Ernst2016a} as resulting from shared large igneous provinces between southern Siberia and northern Laurentia from the time of Laurentia's amalgamation ca. 1.8 Ga all the way up to the time of the ca. 720 Ma Franklin large igneous province (LIP). For example, mafic intrusions and lavas in Siberia that are grouped as the Irkutsk LIP have been dated to be similar in age to dates developed from the Franklin LIP \citep{Denyszyn2009a, Ernst2016a}. Comparisons of paleomagnetic poles between Laurentia and Siberia support a tight and internally stable fit between southern Siberia and northern Laurentia from ca. 1.64 to at least ca. 0.76 Ga \citep{Evans2016b}. Overlap in Laurentia and Siberia pole positions with such a reconstruction is achieved in the Statherian Period of the Paleoproterozoic (the ca. 1.64 Ga Nersa complex of Siberia compared to the Melville Bugt dikes of Laurentia), the Calymmian Period of the Mesoproterozoic (the ca. 1.50 Ga Anabar intrusions with the ca. 1.48 Ga St. Francois Mountains igneous province of Laurentia), the Stenian Period of the Mesoproterozoic (a number of roughly chronostratigraphically constrained Siberia poles with the 1.11 to 1.08 Ga Keweenawan Track of Laurentia), and the Tonian Period of the Neoproterozoic (the ca. 758 Ma Kitoi pole of Siberia with Tonian Laurentia poles). The correlation of the Malgina pole of Siberia with the Keweenawan Track gains additional support in that it correlates the normal-polarity Maya superchron \citep{Gallet2012a} with the Portage Lake normal-polarity zone \citep{Swanson-Hysell2019a} that is interpreted as a normal-polarity superchron (termed the Keweenawan Normal Superchron in \citealp{Driscoll2016a}). This correlation both works with the tight fit and resolves the hemispheric ambiguity, putting both cratons together in the northern hemisphere ca. 1100 Ma (Fig. \ref{fig:Grenville_reconstructions}). Connections between Archean provinces may have also existed prior to Laurentia's assembly, such as the hypothesized connection between the Slave province and the Tungus province of Siberia, in which the Thelon orogen is correlated with Paleoproterozoic orogenesis in the Akitkan fold belt \citep{Condie1994a, Rainbird1998a, Evans2011a}. The interpretation that the Franklin LIP is recorded in both Laurentia and Siberia constrains the break-up between the continents to post-date 720 Ma.
Separation between the continents likely initiated in association with the Franklin LIP, given that the southern margin of Siberia became an active margin during the Cryogenian Period \citep{Powerman2015a}.
\subsection{The record implies plate tectonics throughout the Proterozoic} \label{sec:plate_tectonics}
\begin{figure*} \centering \includegraphics[width=\textwidth]{../Figures/Fig9_baked_contact_timeline_all.pdf} \caption{\textbf{Paleomagnetic poles with positive baked contact tests from Laurentia and other cratons}. This timeline shows the age of paleomagnetic poles with positive baked contact tests within the Nordic Paleomagnetism Workshop compilation \citep{Evans2021a}. Positive baked contact tests require the presence of an appreciable geomagnetic field. In turn, the presence of a geomagnetic field requires heat flow across the core-mantle boundary that is maintained by plate tectonics, but that would be stifled by a stagnant lid.} \label{fig:baked_contact} \end{figure*}
Even without considering other continents, there is strong evidence in both Laurentia's geological and paleomagnetic records for differential plate tectonic motion between 2.2 and 1.8 Ga. The continued history of accretionary orogenesis and the evaluation of Laurentia's pole path in comparison to other continents from 1.8 Ga onward supports the continual operation of plate tectonics throughout the rest of the Proterozoic and Phanerozoic as well. While this evidence fits with the majority of interpretations of the timing of initiation of modern-style plate tectonics (see summary in \citealp{Korenaga2013a}), there continue to be proponents of a stagnant lid throughout the Mesoproterozoic Era (1.6 to 1.0 Ga) and into the Neoproterozoic, with plate tectonics not initiating until ca. 0.8 Ga \citep{Hamilton2011a, Stern2018a}. These arguments rest on the relative lack of Proterozoic low-temperature/high-pressure metamorphic rocks such as blueschists that form in subduction zones \citep{Stern2013a}. Alternative interpretations for the lack of blueschists in the Proterozoic call upon either their low preservation potential or secular changes in mantle chemistry and/or temperature \citep{Brown2019a}. \cite{Palin2015a} proposed that such a shift in metamorphic regime is the predicted result of secular evolution of mantle chemistry and changing MgO composition of oceanic crust rather than a harbinger of the onset of plate tectonics. An alternative hypothesis is that earlier in the Proterozoic slab breakoff occurred at shallower depths, limiting the formation and preservation of low-temperature/high-pressure metamorphic rocks \citep{Brown2019a}. In this model, secular mantle cooling led to increasingly strong lithosphere that enabled deeper continental subduction and slab breakoff depths. While the secular change in the abundance of low-temperature/high-pressure metamorphic rocks is intriguing, to argue that there was not differential plate tectonic motion in the Paleoproterozoic and Mesoproterozoic is to ignore a vast breadth and depth of geological and paleomagnetic data. From a paleomagnetic perspective, there is strong support for independent and differential motion between the Slave and Superior provinces from 2.2 to 1.8 Ga as is illustrated in Figure \ref{fig:Superior_Slave_recons}.
From a geological perspective, the Trans-Hudson orogenic interval, the Grenville orogenic interval, and the Appalachian orogenic interval are all well-explained with a mobilistic interpretation that includes phases of accretionary orogenesis followed by collisional orogenesis (Fig. \ref{fig:tectonic_history}). One could counter that this perspective results from a plate-tectonic-centric viewpoint that lacks the creativity to see the record as resulting from processes other than modern-style plate tectonics. However, in addition to the broad geological record showing an amalgamation of terranes as would be expected to arise through plate tectonics, there are also obducted Proterozoic ophiolites that are well-explained through accretionary plate tectonics, as well as eclogites such as those preserved in the Trans-Hudson orogen \citep{Weller2017a}. These eclogites preserve evidence for high-pressure/low-temperature metamorphic conditions ca. 1.8 Ga. Similar to the Himalayan orogen, these rocks are interpreted to be the result of deep subduction and exhumation of continental crust during convergent tectonics \citep{Weller2017a}. Outside of Laurentia, there are examples of Paleoproterozoic eclogites with geochemical affinity to oceanic crust such as those documented in the ca. 1.9 Ga Ubendian belt of the Congo craton \citep{Boniface2012a}. Mesoproterozoic ophiolites were also obducted onto Laurentia such as the Pyrites ophiolite complex within the Shawinigan orogen and the Coal Creek Domain of the Llano uplift \citep{Chiarenzelli2011a, McLelland2013a}. Another perspective on Proterozoic tectonics is that the record is one of intermittent subduction \citep{Silver2008a, ONeill2013a}. In such a model, there are extended intervals with a stagnant lid alternating with intervals of differential plate motion. In particular, it has been argued that the Mesoproterozoic Era was an interval when Earth was in a stagnant regime without mobile plate tectonics \citep{Silver2008a, ONeill2013a}. The long-lived accretionary history of Laurentia following the amalgamation of the Archean provinces is difficult to reconcile with such an interpretation (Figs. \ref{fig:Laurentia_map} and \ref{fig:tectonic_history}). The record of paleomagnetic poles also shows that there was progressive motion of Laurentia through the Proterozoic (Figs. \ref{fig:Laurentia_paleolatitude} and \ref{fig:Laurentia_reconstructions}). Using data from Laurentia alone, however, it is difficult to ascertain whether this motion is due to plate tectonic motion or rotation of the entire solid Earth through true polar wander. True polar wander can lead to a changing position relative to the spin axis even with a stagnant lid. One interval when the Laurentian paleomagnetic record demands that some of the motion was through differential plate tectonics is the latest Mesoproterozoic. At that time, the pole path is very well-resolved, with many high-quality paleomagnetic poles between 1110 and 1070 Ma (Table 2; Fig. \ref{fig:Laurentia_poles}). The progression of the poles requires rotation about an Euler pole, following a path distinct from the great circle path that would result if the motion were solely due to true polar wander \citep{Swanson-Hysell2019a}. These poles constrain rapid motion of Laurentia leading up to collisional orogenesis associated with the Grenvillian orogeny, as illustrated in Figure \ref{fig:Grenville_reconstructions}. These data provide strong evidence for differential plate motion at the time and are inconsistent with a stagnant lid.
Rather, the orogenic interval within the Mesoproterozoic bears similarity to that of the Paleozoic (0.54 to 0.25 Ga) and reveals Laurentia to have been a central player in the amalgamation of continents associated with the supercontinents Rodinia and Pangea. An additional constraint supporting ongoing plate tectonics throughout the Proterozoic comes from paleomagnetic evidence of a sustained geomagnetic field. In a prolonged stagnant lid regime, there would not be sufficient heat flow across the core-mantle boundary to sustain a geodynamo \citep{Nimmo2000a, Buffett2000b}. One way to get insight into the ancient geomagnetic field is through paleointensity experiments on igneous rocks that enable estimates of ancient field strength to be developed. Paleointensity data developed from units that are included within the Laurentia paleomagnetic poles database indicate a significant geomagnetic field in the Neoarchean \citep{Selkin2000a} through to the Mesoproterozoic \citep{Macouin2006a, Sprain2018a}. However, paleointensity experiments are challenging and prone to failure due to alteration during laboratory heating or non-ideal rock magnetic behavior. As a result, it is significantly more challenging to develop reliable paleointensity data than to develop the reliable paleodirectional data used to calculate paleomagnetic poles. Therefore, the paleointensity database is sparser than the compilation of reliable paleomagnetic poles. Paleodirectional data themselves can give insight into the presence of a significant geomagnetic field --- particularly those with positive baked contact tests (Fig. \ref{fig:baked_contact}). Baked contact tests indicate that, at the time of dike emplacement, there was an appreciable field such that both the cooling magma and the heated country rock in the vicinity of a dike were able to acquire a primary coherent magnetization direction. Additionally, since paleomagnetic poles are typically developed from many individual cooling units across a region, the similarity of the directions across an igneous province indicates that the magnetizations were dominantly acquired from the geomagnetic field rather than being influenced by local variable crustal magnetizations. The record of abundant positive baked contact tests and coherent paleomagnetic poles (Fig. \ref{fig:baked_contact}; Table 2) supports the persistence of a geomagnetic field through the Paleoproterozoic and Mesoproterozoic. This record implies ongoing active plate tectonics that enabled sufficient core-mantle boundary heat flow to power the geodynamo.
\subsection{Conclusion} The paleogeographic record of Laurentia is rich in constraints through the Precambrian both in terms of the geological and geochronological data on tectonism and the record of paleomagnetic poles. Data from the Slave and Superior provinces of Laurentia provide what is arguably the strongest evidence of differential plate tectonics in the Rhyacian and Orosirian Periods of the Paleoproterozoic Era (2.3 to 1.8 Ga) leading up to the collision of these microcontinents during the Trans-Hudson orogeny. The collisions of these and other Archean provinces led to the formation of the core of Laurentia. Subsequent crustal growth occurred through multiple intervals of accretionary orogenesis through the late Paleoproterozoic and Mesoproterozoic until the continent-continent collision of the Grenvillian orogeny that was ongoing at the Mesoproterozoic-Neoproterozoic boundary (1.0 Ga).
The lead-up to this orogeny was associated with rapid plate motion of Laurentia from high latitudes towards the equator recorded by the Logan Loop and Keweenawan Track of paleomagnetic poles. Following a return to high latitudes, as constrained by paleomagnetic poles of the Grenville Loop, Laurentia straddled the equator at the time of Cryogenian Snowball Earth glaciation as part of the Rodinia supercontinent. Rifting and passive margin development then isolated Laurentia in the late Ediacaran Period and into the early Paleozoic Era. Subsequent accretionary and collisional orogenesis occurred in association with the Appalachian orogenic interval, with Laurentia first colliding with Avalonia-Baltica to become Laurussia and Laurussia then uniting with Gondwana to form the supercontinent Pangea. While the details of the conjugate continents to Laurentia are better reconstructed for this last Wilson cycle, the broad features of the Trans-Hudson, Grenvillian and Appalachian orogenic intervals bear similarities. In each case, accretionary collision of arc terranes was followed by continent-continent collision. The major difference is that the collisions of the Grenvillian and Appalachian orogenic intervals resulted in relatively minor growth of Laurentia compared to the Trans-Hudson orogeny. This difference is the result of break-up following the Grenvillian and Appalachian orogenic intervals having occurred along the same margin as collision, while the major orogens of the Trans-Hudson orogenic interval have remained sutured. As a result, Laurentia has been a formidable continent for the past 1.8 billion years. As can be seen in the Chapters on Archean paleogeography \citep{Salminen2021b}, Nuna \citep{Elming2021a}, and Rodinia \citep{Evans2021b}, the constraints from Laurentia are at the center of paleogeographic models through the Precambrian and will continue to be as the next generation of paleogeographic models is developed.
%%%%%%%%%%%% Supplementary Methods %%%%%%%%%%%%
%\footnotesize
%\section*{Methods}
%%%%%%%%%%%%% Acknowledgements %%%%%%%%%%%%%
\footnotesize \subsection*{Acknowledgements} This work was supported by NSF CAREER Grant EAR-1847277. The manuscript benefited from reviews from Athena Eyster, David Evans, and Lauri Pesonen as well as discussions and manuscript feedback from Francis Macdonald, Yuem Park, Toby Rivers, Sarah Slotznick, Justin Strauss, and Yiming Zhang. Bruce Buffett provided insights on the implications of a stagnant lid regime for the generation of the geomagnetic field. Many participants in the Nordic Paleomagnetism Workshop have contributed to the compilation and evaluation of the pole list utilized herein. Particular acknowledgement goes to David Evans for maintaining and distributing the compiled pole lists. GPlates, and in particular the pyGPlates API, was utilized in this work \citep{Muller2018b}. Figures were made using Matplotlib \citep{Hunter2007a} in conjunction with cartopy \citep{Met-Office2010a} and pmagpy \citep{Tauxe2016a} within an interactive Python environment \citep{Perez2007a}. The chapter text as well as code, data, and reconstructions used in this paper are openly available and licensed for any form of reuse with attribution (CC BY 4.0) in this repository: \url{https://github.com/Swanson-Hysell-Group/Laurentia_Paleogeography} which is also archived on Zenodo (\url{https://doi.org/10.5281/zenodo.5129927}).
\printendnotes
\subsection*{Glossary}
\noindent\textbf{accretionary orogeny } Lithospheric deformation associated with the subduction of oceanic lithosphere and the addition of material from the downgoing plate, such as island arcs.
\noindent\textbf{allochthonous } An adjective denoting that a rock or terrane originated in a position at a significant distance from the lithospheric block where it currently resides.
\noindent\textbf{Archean } A geologic eon spanning from 4,000 to 2,500 million years ago (4 to 2.5 Ga).
\noindent\textbf{Archean province } A contiguous area of Archean continental lithosphere typically surrounded by Proterozoic orogens inferred to be suture zones (e.g. Superior province).
\noindent\textbf{Canadian shield } The large area of Canada with exposed Precambrian rock, or rock covered by thin soil, that is well-exposed due to Pleistocene glacial erosion.
\noindent\textbf{collisional orogeny } Lithospheric deformation resulting from the collision of two significant provinces of continental lithosphere.
\noindent\textbf{conjugate } An adjective referring to continents or continental margins that were previously conjoined.
\noindent\textbf{craton } The stable and relatively immobile continental lithosphere in the interior of continents. In this chapter, craton is predominantly used in reference to Laurentia, which formed through the collision of Archean provinces and grew further through subsequent accretionary and collisional orogenesis. Note that in other usages the term can be focused on stable Archean lithosphere such as the individual Archean provinces of Laurentia.
\noindent\textbf{Cryogenian Period } The geologic period that lasted from ca. 717 to 635 million years ago, during which there were two global glaciations. The start of the period is provisionally defined as the first evidence of low-latitude glaciation. It is the second geologic period of the Neoproterozoic Era, being preceded by the Tonian Period and followed by the Ediacaran Period.
\noindent\textbf{Ediacaran Period } The third geologic period of the Neoproterozoic Era from ca. 635 to 539 million years ago. It is the final period of the Proterozoic Eon and is followed by the Cambrian Period.
\noindent\textbf{Elzevirian orogen } The orogen resulting from the Mesoproterozoic Elzevirian orogeny when there was accretion of arc terranes to eastern Laurentia.
\noindent\textbf{evaporite } A chemical sedimentary deposit consisting of minerals that crystallize from water that was supersaturated in salts due to evaporation.
\noindent\textbf{Ga } Giga-annum, one billion (10$^9$) years. This term is used as an abbreviation for ``billions of years before present.''
\noindent\textbf{geocentric axial dipole hypothesis } The hypothesis that, when it is time-averaged, Earth's magnetic field is dominantly a dipole aligned with the spin axis.
\noindent\textbf{geodynamo } The mechanism whereby convective flow in Earth's fluid outer core generates Earth's magnetic field.
\noindent\textbf{Granite-rhyolite province } A geologic province that comprises widespread Mesoproterozoic rhyolite and granite extending from Labrador, Canada to west Texas, USA.
\noindent\textbf{Grenville orogen } An orogen resulting from the collisional Grenvillian orogeny between Laurentia and conjugate continent(s) near the end of the Mesoproterozoic.
\noindent\textbf{Hadley cell } Large-scale atmospheric circulation where air rises near the equator, flows poleward, and descends in the subtropics.
This circulation drives convective tropical precipitation, and the dry downwelling air leads to aridity in the subtropics.
\noindent\textbf{Hearne province } An Archean province of Laurentia extending from southern Alberta, Canada to Hudson Bay. It is framed by the Rae province to the northwest and the Trans-Hudson orogen to the southeast. It is also referred to as the Hearne craton.
\noindent\textbf{hematite } An iron oxide mineral with a formula of Fe\textsubscript{2}O\textsubscript{3} that commonly holds magnetization in geologic materials, particularly oxidized sedimentary rocks.
\noindent\textbf{juvenile } An adjective referring to rocks formed from melt recently extracted from the mantle.
\noindent\textbf{Laurentia } The Precambrian cratonic core of the North American continent and Greenland that formed through the amalgamation of Archean provinces in the Paleoproterozoic and subsequent accretion.
\noindent\textbf{large igneous province (LIP) } A region of voluminous and rapidly emplaced volcanics and intrusions that are typically of mafic composition. These provinces are often interpreted to result from decompression melting of an upwelling mantle plume.
\noindent\textbf{lithosphere } The rigid outermost layer of the Earth that is broken into tectonic plates and responds to the emplacement of a load by flexural bending.
\noindent\textbf{Ma } Mega-annum, one million (10$^6$) years. This term is used as an abbreviation for ``millions of years before present.''
\noindent\textbf{Manikewan Ocean } An ocean basin interpreted to have existed between the Slave+Rae+Hearne+North Atlantic provinces and the Superior province that closed leading up to the Trans-Hudson orogeny.
\noindent\textbf{Mazatzal orogen } An orogen resulting from latest Paleoproterozoic accretion of volcanic arc and back-arc terranes to southern Laurentia.
\noindent\textbf{Medicine Hat province } An Archean province of Laurentia extending from northern Montana, USA into southern Alberta and Saskatchewan, Canada. It is framed by a suture with the Hearne province to the north, the Trans-Hudson orogen to the east, and the Great Falls tectonic zone to the south. It is also referred to as the Medicine Hat block.
\noindent\textbf{Mesoproterozoic } A geologic era spanning from 1,600 to 1,000 million years ago.
\noindent\textbf{Meta Incognita province } A province of Archean basement rocks that comprises most of southern Baffin Island. It is also referred to as the Meta Incognita microcontinent.
\noindent\textbf{Midcontinent Rift } A major Mesoproterozoic intracratonic rift where there was co-location of large igneous province magmatism and extension in Laurentia's interior centered on the Lake Superior region.
\noindent\textbf{monazite } A phosphate mineral (Ce,La,Nd,Th)(PO\textsubscript{4},SiO\textsubscript{4}) found as an accessory phase in metamorphic rocks that can be targeted by U-Pb geochronology to date metamorphic events.
\noindent\textbf{Nagssugtoqidian orogen } An orogen resulting from the Paleoproterozoic collision between the Rae and North Atlantic provinces.
\noindent\textbf{Neoproterozoic } A geologic era spanning from 1,000 to 539 million years ago.
\noindent\textbf{North Atlantic province } An Archean province of Laurentia in southernmost Greenland and northeastern Labrador, Canada. It is also referred to as the North Atlantic craton.
\noindent\textbf{Nuna } A hypothesized supercontinent interpreted to have formed late in the Paleoproterozoic Era and to have broken apart during the Mesoproterozoic.
\noindent\textbf{ophiolite } Oceanic lithosphere that has been accreted onto continental lithosphere.
\noindent\textbf{orogen } A region of lithosphere that has undergone deformation during a mountain-building event (an \textbf{orogeny}).
\noindent\textbf{paleolatitude } The past latitude of a given point on Earth's surface at a given time, typically calculated from paleomagnetic data using the geocentric axial dipole hypothesis (see the computational sketch following this glossary).
\noindent\textbf{paleomagnetic pole } A calculated position from paleomagnetic data that is interpreted to correspond to the ancient position of Earth's spin axis (the north pole) through application of the geocentric axial dipole hypothesis. The uncertainty on the pole position is given as a circle with a radius of a given angle ($A_{95}$).
\noindent\textbf{Paleoproterozoic } A geologic era spanning from 2,500 to 1,600 million years ago.
\noindent\textbf{Penokean orogen } An orogen resulting from Paleoproterozoic accretion of an oceanic arc and the Marshfield terrane continental block along the southern margin of the Superior province.
\noindent\textbf{Phanerozoic } A geologic eon spanning from 539 million years ago to the present day.
\noindent\textbf{Picuris orogen } An orogen resulting from a Mesoproterozoic orogeny interpreted from metamorphic rocks with Mesoproterozoic-aged protoliths in northern New Mexico, USA.
\noindent\textbf{plate tectonics } A process whereby the lithosphere is divided into distinct plates that move relative to one another.
\noindent\textbf{Precambrian } A commonly used informal term to refer to geologic time prior to the Cambrian Period, which started 539 million years ago.
\noindent\textbf{Proterozoic } A geologic eon spanning from 2,500 to 539 million years ago.
\noindent\textbf{province } A spatial entity with a shared geologic history. The term is used in this chapter to refer to Archean provinces that moved as independent cratonic blocks prior to Laurentia's amalgamation (e.g. the Superior province). It is also used to refer to zones of crustal growth associated with orogens and the products of contemporaneous magmatic activity (large igneous provinces).
\noindent\textbf{Rae province } An Archean province of Laurentia extending from the region of Lake Athabasca northeast to northern Baffin Island in Arctic Canada. It is framed by the Thelon orogen to the west, the Taltson orogen to the southwest and the Hearne province to the east. It is also referred to as the Rae craton.
\noindent\textbf{Rodinia } A hypothesized supercontinent interpreted to have formed late in the Mesoproterozoic Era at the time of the Grenvillian orogeny and to have broken apart during the Neoproterozoic.
\noindent\textbf{Slave province } An Archean province of Laurentia extending to the north from the region of Great Slave Lake in northern Canada. It is framed by the Thelon orogen to the east and the Great Bear Arc to the west. It is also referred to as the Slave craton.
\noindent\textbf{Shawinigan orogen } An orogen resulting from the Mesoproterozoic Shawinigan orogeny when there was accretion of terranes to eastern Laurentia.
\noindent\textbf{Snowbird orogen } An orogen resulting from Paleoproterozoic collision between the Rae and Hearne provinces prior to the Trans-Hudson orogeny. The Snowbird tectonic zone is part of the orogen.
\noindent\textbf{supercontinent } A large continent where most of Earth's continental lithosphere has been concentrated into a single landmass. The supercontinent Pangea that existed ca. 200 Ma is the archetypal supercontinent.
A threshold of 75$\%$ of extant continental crust has been proposed for a continent to be considered a supercontinent \citep{Meert2012a}. Gondwana (Australia + India + Africa + South America) constituted $\sim$60$\%$ of continental lithosphere and was a constituent of Pangea such that \cite{Evans2016a} proposed that it and similar landmasses should be called \textbf{semi-supercontinents}.
\noindent\textbf{supercraton } A landmass that subsequently split into constituent crustal provinces. The term is typically applied to groupings of Archean provinces (cratons).
\noindent\textbf{Superior province } The largest Archean province of Laurentia framed by the Trans-Hudson orogen to the west, the Grenvillian orogen to the east and the Penokean orogen to the south. It is also referred to as the Superior craton.
\noindent\textbf{stagnant lid } A planetary state where there is a single lithospheric plate (`lid'). The lithosphere is relatively stable and immobile in comparison to a planet with active plate tectonics, where there is motion between multiple lithospheric plates.
\noindent\textbf{Thelon orogen } An orogen resulting from the Paleoproterozoic collisional orogeny between the Slave and Rae provinces.
\noindent\textbf{thermal remanent magnetization } Magnetization acquired by magnetic minerals in rocks as they cool, typically following crystallization from magma.
\noindent\textbf{Tonian Period } The first geologic period of the Neoproterozoic Era from ca. 1000 to 717 million years ago. It is followed by the Cryogenian Period.
\noindent\textbf{Torngat orogen } An orogen resulting from the Paleoproterozoic collisional orogeny between the Meta Incognita and North Atlantic provinces.
\noindent\textbf{true polar wander } The rotation of the solid Earth about the liquid outer core to maintain rotational equilibrium. This process results in Earth's lithosphere undergoing a single coherent rotation relative to the spin axis.
\noindent\textbf{Trans-Hudson orogen } An orogen resulting from the Paleoproterozoic collision between the composite Slave+Rae+Hearne provinces and the Superior province.
\noindent\textbf{Wopmay orogen } An orogen resulting from Paleoproterozoic collision between the Hottah terrane, a continental magmatic arc, and the western margin of the Slave province.
\noindent\textbf{Wyoming province } An Archean province of Laurentia underlying much of Wyoming, USA and southeast Montana, USA. It is framed by the Trans-Hudson orogen to the east (sometimes referred to as the Black Hills orogen within the USA) and the Great Falls tectonic zone to the north. It is also referred to as the Wyoming craton.
\noindent\textbf{Yavapai orogen } An orogen resulting from Paleoproterozoic collision and accretion of oceanic arc terranes to southern Laurentia.
\noindent\textbf{zircon } A nesosilicate mineral with the chemical name of zirconium silicate and a chemical formula of ZrSiO\textsubscript{4}.
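As a minimal computational sketch (not the pmagpy-based workflow used for this chapter), the following Python snippet illustrates how an implied paleolatitude, such as the ``Duluth lat'' column in Table 2, follows from a paleomagnetic pole under the geocentric axial dipole hypothesis: the paleolatitude of a site is 90\textdegree\ minus the great-circle distance between the site and the pole. The function name and the present-day coordinates assumed here for Duluth, MN ($\sim$46.8\textdegree N, 267.9\textdegree E) are illustrative assumptions rather than values taken from the compilation.
\begin{verbatim}
import numpy as np

def paleolatitude(site_lat, site_lon, pole_lat, pole_lon):
    # Paleolatitude implied by a paleomagnetic pole under the geocentric
    # axial dipole hypothesis: 90 degrees minus the great-circle distance
    # between the site and the pole position.
    slat, slon, plat, plon = map(np.radians,
                                 (site_lat, site_lon, pole_lat, pole_lon))
    cos_dist = (np.sin(slat) * np.sin(plat) +
                np.cos(slat) * np.cos(plat) * np.cos(slon - plon))
    return 90.0 - np.degrees(np.arccos(cos_dist))

# Portage Lake Volcanics pole from Table 2 (plat 27.5, plon 182.5) and an
# assumed present-day location for Duluth, MN (~46.8 N, 267.9 E)
print(round(paleolatitude(46.8, 267.9, 27.5, 182.5), 1))  # ~22.7
\end{verbatim}
For the Portage Lake Volcanics pole this sketch returns a paleolatitude of $\sim$22.7\textdegree, matching the corresponding ``Duluth lat'' entry in Table 2.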
%%%%%%%%%%%%%% Bibliography %%%%%%%%%%%%%% \bibliographystyle{gsabull} \footnotesize\bibliography{../../references/allrefs} \newpage {\scriptsize \begin{landscape} \textbf{Table 2}: Compilation of paleomagnetic poles from Laurentia \begin{ThreePartTable} \begin{TableNotes} \footnotesize site lon -- longitude of paleomagnetic locality; site lat -- latitude of paleomagnetic locality; plon -- longitude of the paleomagnetic pole position; plat -- latitude of the paleomagnetic pole position; A$_{95}$ -- angle of 95$\%$ confidence on the pole position; Duluth lat -- latitude of Duluth, MN implied by the paleomagnetic pole \end{TableNotes} \begin{longtable}{p{1.4 in}p{1.2 in}rrrrrrrrp{1.2 in}} \toprule terrane & unit name & age (Ma) & rating & site lon & site lat & plon & plat & A$_{95}$ & Duluth lat & pole reference \\ \hline \midrule \endhead \midrule \multicolumn{11}{r}{{Continued on next page}} \\ \hline \midrule \endfoot \bottomrule \insertTableNotes \endlastfoot Laurentia-Wyoming & Stillwater Complex - C2 & 2705$^{+4}_{-4}$ & A & 249.2 & 45.2 & 335.8 & -83.6 & 4.0 & & \cite{Selkin2008a} \\ \hline Laurentia-Superior(East) & Otto Stock dikes and aureole & 2676$^{+5}_{-5}$ & B & 279.9 & 48.0 & 227.0 & 69.0 & 4.8 & & \cite{Pullaiah1975b} \\ \hline Laurentia-Slave & Defeat Suite & 2625$^{+5}_{-5}$ & B & 245.5 & 62.5 & 64.0 & -1.0 & 15.0 & & \cite{Mitchell2014a} \\ \hline Laurentia-Superior(East) & Ptarmigan-Mistassini dikes & 2505$^{+2}_{-2}$ & B & 287.0 & 54.0 & 213.0 & -45.3 & 13.8 & & \cite{Evans2010a} \\ \hline Laurentia-Superior(East) & Matachewan dikes R & 2466$^{+23}_{-23}$ & A & 278.0 & 48.0 & 238.3 & -44.1 & 1.6 & & \cite{Evans2010a} \\ \hline Laurentia-Superior(East) & Matachewan dikes N & 2446$^{+3}_{-3}$ & A & 278.0 & 48.0 & 239.5 & -52.3 & 2.4 & & \cite{Evans2010a} \\ \hline Laurentia-Slave & Malley dikes & 2231$^{+2}_{-2}$ & A & 249.8 & 64.2 & 310.0 & -50.8 & 6.7 & & \cite{Buchan2012a} \\ \hline Laurentia-Superior(East) & Senneterre dikes & 2218$^{+6}_{-6}$ & A & 283.0 & 49.0 & 284.3 & -15.3 & 5.5 & & \cite{Buchan1993a} \\ \hline Laurentia-Superior(East) & Nipissing N1 sills & 2217$^{+4}_{-4}$ & A & 279.0 & 47.0 & 272.0 & -17.0 & 10.0 & & \cite{Buchan2000a} \\ \hline Laurentia-Slave & Dogrib dikes & 2193$^{+2}_{-2}$ & A & 245.5 & 62.5 & 315.0 & -31.0 & 7.0 & & \cite{Mitchell2014a} \\ \hline Laurentia-Superior(East) & Biscotasing dikes & 2170$^{+3}_{-3}$ & A & 280.0 & 48.0 & 223.9 & 26.0 & 7.0 & & \cite{Evans2010a} \\ \hline Laurentia-Wyoming & Rabbit Creek, Powder River and South Path dikes & 2160$^{+11}_{-8}$ & A & 252.8 & 43.9 & 339.2 & 65.5 & 7.6 & & \cite{Kilian2015a} \\ \hline Laurentia-Slave & Indin dikes & 2126$^{+3}_{-18}$ & A & 245.6 & 62.5 & 256.0 & -36.0 & 7.0 & & \cite{Buchan2016a} \\ \hline Laurentia-Superior(West) & Marathon dikes N & 2124$^{+3}_{-3}$ & A & 275.0 & 49.0 & 198.2 & 45.4 & 7.7 & 43.3 & \cite{Halls2008a} \\ \hline Laurentia-Superior(West) & Marathon dikes R & 2104$^{+3}_{-3}$ & A & 275.0 & 49.0 & 182.2 & 55.1 & 7.5 & 38.8 & \cite{Halls2008a} \\ \hline Laurentia-Superior(West) & Cauchon Lake dikes & 2091$^{+2}_{-2}$ & A & 263.0 & 56.0 & 180.9 & 53.8 & 7.7 & 37.5 & \cite{Evans2010a} \\ \hline Laurentia-Superior(West) & Fort Frances dikes & 2077$^{+5}_{-5}$ & A & 266.0 & 48.0 & 184.6 & 42.8 & 6.1 & 33.6 & \cite{Evans2010a} \\ \hline Laurentia-Superior(East) & Lac Esprit dikes & 2069$^{+1}_{-1}$ & A & 282.0 & 53.0 & 170.5 & 62.0 & 6.4 & & \cite{Evans2010a} \\ \hline Laurentia-Greenland-Nain & Kangamiut dikes & 2042$^{+12}_{-12}$ & B & 307.0 & 66.0 & 273.8 & 
17.1 & 2.7 & & \cite{Fahrig1976b} \\ \hline Laurentia-Slave & Lac de Gras dikes & 2026$^{+5}_{-5}$ & A & 249.6 & 64.4 & 267.9 & 11.8 & 7.1 & & \cite{Buchan2009a} \\ \hline Laurentia-Superior(East) & Minto dikes & 1998$^{+2}_{-2}$ & A & 285.0 & 57.0 & 171.5 & 38.7 & 13.1 & & \cite{Evans2010a} \\ \hline Laurentia-Slave & Rifle Formation & 1963$^{+6}_{-6}$ & B & 252.9 & 65.9 & 341.0 & 14.0 & 7.7 & & \cite{Evans1981a} \\ \hline Laurentia-Rae & Clearwater Anorthosite & 1917$^{+7}_{-7}$ & B & 251.6 & 57.1 & 311.8 & 6.5 & 2.9 & & \cite{Halls1999a} \\ \hline Laurentia-Wyoming & Sourdough mafic dike swarm & 1899$^{+5}_{-5}$ & A & -108.3 & 44.7 & 292.0 & 49.2 & 8.1 & & \cite{Kilian2016b} \\ \hline Laurentia-Slave & Ghost Dike Swarm & 1887$^{+5}_{-9}$ & A & 244.6 & 62.6 & 286.0 & -2.0 & 6.0 & & \cite{Buchan2016a} \\ \hline Laurentia-Slave & Mean Seton/Akaitcho/Mara & 1885$^{+5}_{-5}$ & B & 250.0 & 65.0 & 260.0 & -6.0 & 4.0 & & \cite{Mitchell2010c} \\ \hline Laurentia-Slave & Mean Kahochella, Peacock Hills & 1882$^{+4}_{-4}$ & B & 250.0 & 65.0 & 285.0 & -12.0 & 7.0 & & \cite{Mitchell2010c} \\ \hline Laurentia-Superior(West) & Molson (B+C2) dikes & 1879$^{+6}_{-6}$ & A & 262.0 & 55.0 & 218.0 & 28.9 & 3.8 & 47.6 & \cite{Evans2010a} \\ \hline Laurentia-Slave & Takiyuak Formation & 1876$^{+10}_{-10}$ & B & 246.9 & 66.1 & 249.0 & -13.0 & 8.0 & & \cite{Irving1979a} \\ \hline Laurentia-Slave & Douglas Peninsula Formation, Pethei Group & 1876$^{+10}_{-10}$ & B & 249.7 & 62.8 & 258.0 & -18.0 & 14.2 & & \cite{Irving1979a} \\ \hline Laurentia-Slave & Pearson A/Peninsular/Kilohigok sills & 1870$^{+4}_{-4}$ & A & 250.0 & 65.0 & 269.0 & -22.0 & 6.0 & & \cite{Mitchell2010c} \\ \hline Laurentia-Superior & Haig/Flaherty/Sutton Mean & 1870$^{+1}_{-1}$ & B & 281.0 & 56.2 & 245.8 & 1.0 & 3.9 & & Nordic workshop calculation based on data of \cite{Schmidt1980a, Schwarz1982a} \\ \hline Laurentia-Trans-Hudson orogen & Boot-Phantom Pluton & 1838$^{+1}_{-1}$ & B & 258.1 & 54.7 & 275.4 & 62.4 & 7.9 & 73.8 & \cite{Symons1999a} \\ \hline Laurentia-Rae & Sparrow dikes & 1827$^{+4}_{-4}$ & B & 250.2 & 61.6 & 291.0 & 12.0 & 7.9 & & \cite{McGlynn1974a} \\ \hline Laurentia-Rae & Martin Formation & 1818$^{+4}_{-4}$ & A & 251.4 & 59.6 & 288.0 & -9.0 & 8.5 & & \cite{Evans1973a} \\ \hline Laurentia & East Central Minnesota Batholith & 1779$^{+2}_{-2}$ & NR & 265.8 & 45.5 & 265.8 & 20.4 & 4.5 & 63.5 & \cite{Swanson-Hysell2021b} \\ \hline Laurentia & Dubawnt Group & 1785$^{+35}_{-35}$ & B & 265.6 & 64.1 & 277.0 & 7.0 & 8.0 & 49.4 & \cite{Park1973a} \\ \hline Laurentia-Trans-Hudson orogen & Deschambault Pegmatites & 1766$^{+5}_{-5}$ & B & 256.7 & 54.9 & 276.0 & 67.5 & 7.7 & 68.9 & \cite{Symons2000a} \\ \hline Laurentia-Trans-Hudson orogen & Jan Lake Granite & 1758$^{+1}_{-1}$ & B & 257.2 & 54.9 & 264.3 & 24.3 & 16.9 & 67.3 & \cite{Gala1995a} \\ \hline Laurentia & Cleaver dikes & 1741$^{+5}_{-5}$ & A & 242.0 & 67.5 & 276.7 & 19.4 & 6.1 & 61.7 & \cite{Irving2004a} \\ \hline Laurentia-Greenland & Melville Bugt diabase dikes & 1633$^{+5}_{-5}$ & B & 303.0 & 74.6 & 273.8 & 5.0 & 8.7 & 45.5 & \cite{Halls2011a} \\ \hline Laurentia & Western Channel Diabase & 1590$^{+3}_{-3}$ & A & 242.2 & 66.4 & 245.0 & 9.0 & 6.6 & 47.5 & \cite{Irving1972a} \\ \hline Laurentia & St.Francois Mountains Acidic Rocks & 1476$^{+16}_{-16}$ & A & 269.5 & 37.5 & 219.0 & -13.2 & 6.1 & 15.8 & \cite{Meert2002b} \\ \hline Laurentia & Michikamau Intrusion & 1460$^{+5}_{-5}$ & A & 296.0 & 54.5 & 217.5 & -1.5 & 4.7 & 24.7 & \cite{Emslie1976a} \\ \hline Laurentia & Spokane 
Formation & 1458$^{+13}_{-13}$ & A & 246.8 & 48.2 & 215.5 & -24.8 & 4.7 & 4.2 & \cite{Elston2002a} \\ \hline Laurentia & Snowslip Formation & 1450$^{+14}_{-14}$ & A & 245.9 & 47.9 & 210.2 & -24.9 & 3.5 & 1.4 & \cite{Elston2002a} \\ \hline Laurentia & Tobacco Root dikes & 1448$^{+49}_{-49}$ & B & 247.6 & 47.4 & 216.1 & 8.7 & 10.5 & 31.9 & \cite{Harlan2008a} \\ \hline Laurentia & Purcell Lava & 1443$^{+7}_{-7}$ & A & 245.1 & 49.4 & 215.6 & -23.6 & 4.8 & 5.3 & \cite{Elston2002a} \\ \hline Laurentia & Rocky Mountain intrusions & 1430$^{+15}_{-15}$ & B & 253.8 & 40.3 & 217.4 & -11.9 & 9.7 & 16.0 & Nordic workshop calculation based on data of \cite{Harlan1994a,Harlan1998a} \\ \hline Laurentia & Mistastin Pluton & 1425$^{+25}_{-25}$ & B & 296.3 & 55.6 & 201.5 & -1.0 & 7.6 & 15.1 & \cite{Fahrig1976a} \\ \hline Laurentia & McNamara Formation & 1401$^{+6}_{-6}$ & A & 246.4 & 46.9 & 208.3 & -13.5 & 6.7 & 9.6 & \cite{Elston2002a} \\ \hline Laurentia & Pilcher, Garnet Range and Libby Formations & 1385$^{+23}_{-23}$ & A & 246.4 & 46.7 & 215.3 & -19.2 & 7.7 & 8.8 & \cite{Elston2002a} \\ \hline Laurentia-Greenland & Zig-Zag Dal Basalts & 1382$^{+2}_{-2}$ & B & 334.8 & 81.2 & 242.8 & 12.0 & 3.8 & 43.8 & \cite{Marcussen1983a} \\ \hline Laurentia-Greenland & Victoria Fjord dolerite dikes & 1382$^{+2}_{-2}$ & B & 315.3 & 81.5 & 231.7 & 10.3 & 4.3 & 36.6 & \cite{Abrahamsen1987a} \\ \hline Laurentia-Greenland & Midsommersoe Dolerite & 1382$^{+2}_{-2}$ & B & 333.4 & 81.6 & 242.0 & 6.9 & 5.1 & 39.0 & \cite{Marcussen1983a} \\ \hline Laurentia & Nain Anorthosite & 1305$^{+15}_{-15}$ & B & 298.2 & 56.5 & 206.7 & 11.7 & 2.2 & 28.1 & \cite{Murthy1978a} \\ \hline Laurentia-Greenland & North Qoroq intrusives & 1275$^{+1}_{-1}$ & B & 314.6 & 61.1 & 202.6 & 13.2 & 8.3 & 21.0 & \cite{Piper1992a} \\ \hline Laurentia-Greenland & Kungnat Ring dike & 1275$^{+2}_{-2}$ & B & 311.7 & 61.2 & 198.7 & 3.4 & 3.2 & 11.1 & \cite{Piper1977b} \\ \hline Laurentia & Mackenzie dikes grand mean & 1267$^{+2}_{-2}$ & A & 250.0 & 65.0 & 190.0 & 4.0 & 5.0 & 11.2 & \cite{Buchan2000a} \\ \hline Laurentia-Greenland & West Gardar Dolerite dikes & 1244$^{+8}_{-8}$ & B & 311.7 & 61.2 & 201.7 & 8.7 & 6.6 & 17.1 & \cite{Piper1977b} \\ \hline Laurentia-Greenland & West Gardar Lamprophyre dikes & 1238$^{+11}_{-11}$ & B & 311.7 & 61.2 & 206.4 & 3.2 & 7.2 & 15.9 & \cite{Piper1977b} \\ \hline Laurentia & Sudbury dikes Combined & 1237$^{+5}_{-5}$ & A & 278.6 & 46.3 & 192.8 & -2.5 & 2.5 & 8.3 & \cite{Palmer1977a} \\ \hline Laurentia-Scotland & Stoer Group & 1199$^{+70}_{-70}$ & B & 354.5 & 58.0 & 238.4 & 37.2 & 7.7 & 43.9 & Nordic workshop calculation \\ \hline Laurentia-Greenland & Hviddal Giant dike & 1184$^{+5}_{-5}$ & B & 313.7 & 60.9 & 215.3 & 33.2 & 9.6 & 43.3 & \cite{Piper1977a} \\ \hline Laurentia-Greenland & Narssaq Gabbro & 1184$^{+5}_{-5}$ & B & 313.8 & 60.9 & 225.4 & 31.6 & 9.7 & 48.8 & \cite{Piper1977a} \\ \hline Laurentia-Greenland & South Qoroq Intr. 
& 1163$^{+2}_{-2}$ & A & 314.6 & 61.1 & 215.9 & 41.8 & 13.1 & 48.7 & \cite{Piper1992a} \\ \hline Laurentia-Greenland & Giant Gabbro dikes & 1163$^{+2}_{-2}$ & B & 313.7 & 60.9 & 226.1 & 42.3 & 9.4 & 55.5 & \cite{Piper1977a} \\ \hline Laurentia-Greenland & NE-SW Trending dikes & 1160$^{+5}_{-5}$ & B & 314.6 & 61.1 & 230.8 & 33.4 & 5.7 & 53.5 & \cite{Piper1992a} \\ \hline Laurentia & Ontario lamprophyre dikes & 1143$^{+12}_{-12}$ & NR & 273.3 & 48.8 & 223.3 & 58.0 & 9.2 & 61.2 & \cite{Piispa2018a} \\ \hline Laurentia & Abitibi dikes & 1141$^{+2}_{-2}$ & A & 279.0 & 48.0 & 215.5 & 48.8 & 14.1 & 55.4 & \cite{Ernst1993a} \\ \hline Laurentia & Nipigon sills and lavas & 1109$^{+2}_{-2}$ & A & 270.9 & 49.1 & 217.8 & 47.2 & 4.0 & 56.4 & Nordic workshop calculation based on data of \cite{Palmer1970a, Robertson1971a, Pesonen1979a, Pesonen1979b, Middleton2004a, Borradaile2006a} \\ \hline Laurentia & Lowermost Mamainse Point volcanics -R1 & 1109$^{+2}_{-3}$ & A & 275.3 & 47.1 & 227.0 & 49.5 & 5.3 & 62.9 & \cite{Swanson-Hysell2014a} \\ \hline Laurentia & Lower Osler volcanics -R & 1108$^{+3}_{-3}$ & A & 272.3 & 48.8 & 218.6 & 40.9 & 4.8 & 54.6 & \cite{Swanson-Hysell2014b} \\ \hline Laurentia & Middle Osler volcanics -R & 1107$^{+4}_{-4}$ & A & 272.4 & 48.8 & 211.3 & 42.7 & 8.2 & 50.5 & \cite{Swanson-Hysell2014b} \\ \hline Laurentia & Upper Osler volcanics -R & 1105$^{+1}_{-1}$ & A & 272.4 & 48.7 & 203.4 & 42.3 & 3.7 & 45.1 & \cite{Halls1974a, Swanson-Hysell2014b, Swanson-Hysell2019a} \\ \hline Laurentia & Lower Mamainse Point volcanics -R2 & 1105$^{+3}_{-4}$ & A & 275.3 & 47.1 & 205.2 & 37.5 & 4.5 & 43.9 & \cite{Swanson-Hysell2014a} \\ \hline Laurentia & Mamainse Point volcanics -C (lower N, upper R) & 1101$^{+1}_{-1}$ & A & 275.3 & 47.1 & 189.7 & 36.1 & 4.9 & 32.9 & \cite{Swanson-Hysell2014a} \\ \hline Laurentia & North Shore lavas -N & 1097$^{+3}_{-3}$ & A & 268.7 & 46.3 & 181.7 & 31.1 & 2.1 & 24.5 & \cite{Tauxe2009a,Swanson-Hysell2019a} \\ \hline Laurentia & Chengwatana Volcanics & 1095$^{+2}_{-2}$ & B & 267.3 & 45.4 & 186.1 & 30.9 & 8.2 & 27.3 & \cite{Kean1997a} \\ \hline Laurentia & Portage Lake Volcanics & 1095$^{+3}_{-3}$ & A & 271.2 & 47.0 & 182.5 & 27.5 & 2.3 & 22.7 & \cite{Books1972a, Hnat2006a} as calculated in \cite{Swanson-Hysell2019a} \\ \hline Laurentia & Uppermost Mamainse Point volcanics -N & 1094$^{+6}_{-4}$ & A & 275.3 & 47.1 & 183.2 & 31.2 & 2.5 & 25.6 & \cite{Swanson-Hysell2014a} \\ \hline Laurentia & Cardenas Basalts and Intrusions & 1091$^{+5}_{-5}$ & B & 248.1 & 36.1 & 185.0 & 32.0 & 8.0 & 27.3 & \cite{Weil2003a} \\ \hline Laurentia & Schroeder Lutsen Basalts & 1090$^{+2}_{-7}$ & A & 269.1 & 47.5 & 187.8 & 27.1 & 3.0 & 25.9 & \cite{Fairchild2017a} \\ \hline Laurentia & Central Arizona diabases -N & 1088$^{+11}_{-11}$ & A & 249.2 & 33.7 & 175.3 & 15.7 & 7.0 & 9.6 & \cite{Donadini2011b} \\ \hline Laurentia & Lake Shore Traps & 1086$^{+1}_{-1}$ & A & 271.9 & 47.6 & 186.4 & 23.1 & 4.0 & 22.3 & \cite{Kulakov2013a} \\ \hline Laurentia & Michipicoten Island Formation & 1084$^{+1}_{-1}$ & A & 274.3 & 47.7 & 174.7 & 17.0 & 4.4 & 10.2 & \cite{Fairchild2017a} \\ \hline Laurentia & Nonesuch Shale & 1080$^{+4}_{-10}$ & B & 271.5 & 47.0 & 178.1 & 7.6 & 5.5 & 5.7 & \cite{Henry1977a} \\ \hline Laurentia & Freda Sandstone & 1070$^{+14}_{-10}$ & B & 271.5 & 47.0 & 179.0 & 2.2 & 4.2 & 2.4 & \cite{Henry1977a} \\ \hline Laurentia & Haliburton Intrusions & 1015$^{+15}_{-15}$ & B & 281.4 & 45.0 & 141.9 & -32.6 & 6.3 & -47.0 & \cite{Warnock2000a} \\ \hline Laurentia & Adirondack fayalite granite & 
990$^{+20}_{-20}$ & NR & 285.5 & 44.0 & 132.7 & -28.4 & 6.9 & -50.7 & \cite{Brown2012a} \\ \hline Laurentia & Adirondack metamorphic anorthosites & 970$^{+20}_{-20}$ & NR & 286.0 & 44.0 & 149.0 & -25.1 & 11.6 & -37.5 & \cite{Brown2012a} \\ \hline Laurentia & Adirondack Microcline gneiss & 960$^{+20}_{-20}$ & NR & 285.0 & 44.0 & 151.1 & -18.4 & 10.5 & -31.5 & \cite{Brown2012a} \\ \hline Laurentia-Scotland & Torridon Group & 925$^{+145}_{-145}$ & B & 354.3 & 57.9 & 220.9 & -17.7 & 7.1 & -8.6 & Nordic workshop calculation \\ \hline Laurentia-Svalbard & Lower Grusdievbreen Formation & 831$^{+20}_{-20}$ & B & 18.0 & 79.0 & 204.9 & 19.6 & 10.9 & -5.3 & \cite{Maloof2006a} \\ \hline Laurentia-Svalbard & Upper Grusdievbreen Formation & 800$^{+11}_{-11}$ & B & 18.2 & 78.9 & 252.6 & -1.1 & 6.2 & 11.5 & \cite{Maloof2006a} \\ \hline Laurentia & Gunbarrel dikes & 778$^{+2}_{-2}$ & B & 248.7 & 44.8 & 138.2 & 9.1 & 12.0 & -18.4 & Calculation from \cite{Eyster2020a} based on data of \cite{Harlan1993a, Harlan1997a} \\ \hline Laurentia-Svalbard & Svanbergfjellet Formation & 770$^{+19}_{-40}$ & B & 18.0 & 78.5 & 226.8 & 25.9 & 5.8 & 12.8 & \cite{Maloof2006a} \\ \hline Laurentia & Uinta Mountain Group & 760$^{+6}_{-10}$ & B & 250.7 & 40.8 & 161.3 & 0.8 & 4.7 & -10.7 & \cite{Weil2006b} \\ \hline Laurentia & Carbon Canyon & 757$^{+7}_{-7}$ & NR & 248.2 & 36.1 & 166.0 & -0.5 & 9.7 & -8.5 & \cite{Weil2004a} as calculated in \cite{Eyster2020a} \\ \hline Laurentia & Carbon Butte/Awatubi & 751$^{+8}_{-8}$ & NR & 248.5 & 35.2 & 163.8 & 14.2 & 3.5 & 1.0 & \cite{Eyster2020a} \\ \hline Laurentia & Franklin event grand mean & 718$^{+2}_{-2}$ & A & 275.4 & 73.0 & 162.1 & 6.7 & 3.0 & -5.7 & \cite{Denyszyn2009b} \\ \hline Laurentia & Long Range dikes & 615$^{+2}_{-2}$ & B & 303.3 & 53.7 & 175.3 & -19.0 & 17.4 & -15.5 & \cite{Murthy1992a} \\ \hline Laurentia & Baie des Moutons complex & 583$^{+2}_{-2}$ & B & 301.0 & 50.8 & 152.7 & -42.6 & 12.0 & -45.1 & \cite{McCausland2011a} \\ \hline Laurentia & Baie des Moutons complex & 583$^{+2}_{-2}$ & B & 301.0 & 50.8 & 141.5 & 34.2 & 15.4 & 4.2 & \cite{McCausland2011a} \\ \hline Laurentia & Callander Alkaline Complex & 575$^{+5}_{-5}$ & B & 280.6 & 46.2 & 121.4 & -46.3 & 6.0 & -67.1 & \cite{Symons1991a} \\ \hline Laurentia & Catoctin Basalts & 572$^{+5}_{-5}$ & B & 281.8 & 38.5 & 116.7 & -42.0 & 17.5 & -69.0 & \cite{Meert1994a} \\ \hline Laurentia & Sept-\^Iles layered intrusion & 565$^{+4}_{-4}$ & B & 293.5 & 50.2 & 141.0 & 20.0 & 6.7 & -7.9 & \cite{Tanczyk1987a} \\ \hline \end{longtable} \end{ThreePartTable} \end{landscape} }
%%%%%%%%%%%% Supplementary Figures %%%%%%%%%%%%
%\clearpage
%%%%%%%%%%%%%%%% End %%%%%%%%%%%%%%%%
%\end{multicols} % Method B for two-column formatting (doesn't play well with line numbers), comment out if using method A
\end{document}
{ "alphanum_fraction": 0.7423798016, "avg_line_length": 251.3554868624, "ext": "tex", "hexsha": "823b756a82933b747edb483dee232c838f3b6af8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-04-15T06:28:43.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-15T06:28:43.000Z", "max_forks_repo_head_hexsha": "7e8685d3a262c47d6bda0913686f84dd794cb2aa", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Swanson-Hysell-Group/Laurentia_Paleogeography", "max_forks_repo_path": "Manuscript/Laurentia_Paleogeo_Manuscript.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "7e8685d3a262c47d6bda0913686f84dd794cb2aa", "max_issues_repo_issues_event_max_datetime": "2021-07-23T18:58:54.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-09T17:40:07.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Swanson-Hysell-Group/Laurentia_Paleogeography", "max_issues_repo_path": "Manuscript/Laurentia_Paleogeo_Manuscript.tex", "max_line_length": 4609, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7e8685d3a262c47d6bda0913686f84dd794cb2aa", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Swanson-Hysell-Group/Laurentia_Paleogeography", "max_stars_repo_path": "Manuscript/Laurentia_Paleogeo_Manuscript.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-15T06:28:32.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-15T06:28:32.000Z", "num_tokens": 41770, "size": 162627 }
% ****** Start of file langevin.tex ******
%%
\documentclass[%
 reprint,
%superscriptaddress,
%groupedaddress,
%unsortedaddress,
%runinaddress,
%frontmatterverbose,
%preprint,
%showpacs,preprintnumbers,
%nofootinbib,
%nobibnotes,
%bibnotes,
 amsmath,amssymb,
 aps,
%pra,
%prb,
%rmp,
%prstab,
%prstper,
%floatfix,
]{revtex4-1}
\usepackage{graphicx}% Include figure files
\usepackage{dcolumn}% Align table columns on decimal point
\usepackage{bm}% bold math
\usepackage{subcaption}
\usepackage{float}
%\usepackage{hyperref}% add hypertext capabilities
%\usepackage[mathlines]{lineno}% Enable numbering of text and display math
%\linenumbers\relax % Commence numbering lines
%\usepackage[showframe,%Uncomment any one of the following lines to test
%%scale=0.7, marginratio={1:1, 2:3}, ignoreall,% default settings
%%text={7in,10in},centering,
%%margin=1.5in,
%%total={6.5in,8.75in}, top=1.2in, left=0.9in, includefoot,
%%height=10in,a5paper,hmargin={3cm,0.8in},
%]{geometry}
\DeclareMathOperator\erf{erf}
\DeclareMathOperator\erfc{erfc}
\begin{document}
\preprint{APS/123-QED}
\title{Intensity Distribution of a Dilute Solution of Point Emitters under Gaussian Detection}
\author{Helmut H. Strey}
\affiliation{Biomedical Engineering Department and Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook NY 11794-5281.}%Lines break automatically or can be forced with \\
\date{\today}% It is always \today, today, but any date may be explicitly specified
\begin{abstract}
\begin{description}
\item[PACS numbers] May be entered using the \verb+\pacs{#1}+ command.
\end{description}
\end{abstract}
\pacs{Valid PACS appear here}% PACS, the Physics and Astronomy Classification Scheme.
%\keywords{Suggested keywords}%Use showkeys class option if keyword display desired
\maketitle
%\tableofcontents
\onecolumngrid
\subsection{Introduction}
One strategy to accurately measure the concentration of a dilute solution of fluorescent molecules is to employ the properties of the Poisson distribution. The basic idea of this method is illustrated in Fig. \ref{fig:poissonconc}.
\begin{figure}[H]
\begin{center}
\resizebox{.9\textwidth}{!}{%
\includegraphics[height=3cm]{Gaussian_vs_box.png}%
\quad
\includegraphics[height=3cm]{Poisson.png}%
}
\caption{The properties of a Poisson distribution can be employed to measure the concentration of dilute solutions of fluorescent molecules. The left panel shows a box illumination profile (solid line) with four point emitters. The right panel shows the probability distribution of the total intensity for box illumination with an average concentration of one point particle inside the box, where each particle has an emitting intensity of 1. In the left panel we also show a Gaussian profile (dashed line) with the same total area as the box profile to illustrate that, in this case, the total measured intensity is the sum of the intensity contributions of all particles.}\label{fig:poissonconc}
\end{center}
\end{figure}
In single molecule techniques one often employs a Gaussian illumination profile to measure fluorescence intensities from a dilute solution of molecules. Typically such measurements are performed by single photon detection and result in a sequence of single photon arrival times.
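To make the idea in Fig. \ref{fig:poissonconc} concrete, the following minimal Python sketch (with illustrative parameter values, not data from an experiment) draws instantaneous snapshots under box illumination: the occupancy of the box is Poisson distributed, each emitter contributes the same intensity, and the concentration follows directly from the mean (or variance) of the snapshot intensities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

c_true = 1.0        # emitters per unit length (illustrative)
L = 1.0             # width of the box illumination profile
brightness = 1.0    # intensity contributed by each emitter inside the box

# Each snapshot: the occupancy of the box is Poisson(c*L); under box
# illumination the total intensity is brightness times the occupancy.
counts = rng.poisson(c_true * L, size=100_000)
intensity = brightness * counts

# Both estimators recover the concentration from snapshot statistics.
print(intensity.mean() / (brightness * L))      # ~ c_true
print(intensity.var() / (brightness**2 * L))    # ~ c_true
\end{verbatim}
Under a Gaussian detection profile, by contrast, each emitter's contribution depends on its position, which is the situation analyzed in the remainder of this article.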
Several techniques have emerged from this approach: (1) Fluorescence correlation spectroscopy measures the concentration and diffusion coefficient of fluorescent molecules by analyzing the intensity autocorrelation function; (2) the Photon Counting Histogram (PCH) method analyzes the distribution function of time-binned photon counts to measure the concentration and brightness of a dilute solution of fluorescent molecules. In this article we develop a general framework for calculating the intensity distribution of a dilute solution of point emitters under Gaussian detection. As compared to the Photon Counting Histogram method, we do not assume that the intensities are averaged over a time window; instead, they are taken as instantaneous snapshots of individual spatial emitter distributions. We will show that the resulting intensity distributions dramatically change character at low emitter concentrations when considering different dimensionalities of the Gaussian detection. In particular, the one- and two-dimensional solutions are strongly structured, exhibiting discontinuities in their derivative at integer multiples of the brightness. This makes these distributions strong candidates to distinguish and measure concentrations of mixtures of emitters of different brightnesses. Finally, we will discuss a maximum likelihood method to determine concentration and brightness from a sequence of single photon arrival times.
\subsection{Intensity Probability Distribution}
In this section we will calculate the probability distribution of intensities from a single fluorescent particle confined to a length $L$. We write the probability of finding a particle at $x$ as:
\begin{equation}
 p(x) = \left\{ \begin{array}{l@{\quad : \quad}l} \frac{1}{L} & -\frac{L}{2} \le x \le \frac{L}{2} \\ 0 & other \quad x \end{array} \right.
\end{equation}
The fluorescent intensity for a particle at position $x$ is proportional to the illumination profile given by:
\begin{equation}
 \Phi(x) = \Phi_{0}\exp{\left(-\frac{2x^{2}}{w^2}\right)}
\end{equation}
%In order to calculate $p(I)$ we need to evaluate the following integral
%\begin{equation}
% p(\Phi)d\Phi = \int_{x:\Phi<\Phi(x)<\Phi+d\Phi} p(x)dx
%\end{equation}
%for all x for which the condition is fullfilled. The integral can be evaluated by inverting $\Phi(x)$:
where $\Phi_{0}$ is specific for each fluorescent species and includes all the contributions of quantum and detection efficiencies. We can calculate some of the properties of this intensity distribution function. The expectation value of $\Phi^{n}$ is
\begin{equation}
 E[\Phi^{n}] = \sqrt{\frac{\pi}{2}}\frac{w}{L\sqrt{n}}\Phi_{0}^{n} \erf{\left(\sqrt{\frac{n}{2}}\frac{L}{w}\right)}
\end{equation}
All moments $E[\Phi^{n}]$ are inversely proportional to $L$. When applied to a situation with fixed particle concentration $c=N/L$, the summed contribution $N\,E[\Phi^{n}]$ remains constant for large $L$ since $N$ grows proportionally to $L$.\\
In order to find the intensity probability distribution at a certain concentration in an infinite volume (or length), we need to find the characteristic function of $p(\Phi)$.
\begin{equation}
c(k) = \int_{\Phi(L/2)}^{\Phi_{0}}\exp(ik\Phi)p(\Phi)d\Phi
\end{equation}
By expanding the exponential function we see that
\begin{equation}
\begin{aligned}
c(k) &= \int_{\Phi(L/2)}^{\Phi_{0}}\sum_{n=0}^{\infty}\frac{(ik)^{n}\Phi^{n}}{n!}p(\Phi)\,d\Phi\\
&=1+\sum_{n=1}^{\infty}\frac{(ik)^{n}E[\Phi^{n}]}{n!}\\
&=1+\sqrt{\frac{\pi}{2}}\frac{w}{L}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!\sqrt{n}}\Phi_{0}^{n} \erf{\left(\sqrt{\frac{n}{2}}\frac{L}{w}\right)}
\end{aligned}
\end{equation}
This characteristic function is for one particle in a length $L$. For more particles, we need to convolve the probability distribution functions $N$ times. Alternatively, we can take $c(k)$ to the power of $N$. We can accomplish this by recognizing that the particle concentration is given by $c=N/L$ or $L=N/c$. Inserting this into the previous equation, we get:
\begin{equation}
c(k) = 1+\sqrt{\frac{\pi}{2}}\frac{wc}{N}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!\sqrt{n}}\Phi_{0}^{n} \erf{\left(\sqrt{\frac{n}{2}}\frac{N}{wc}\right)}
\end{equation}
In order to take $c(k)$ to the power of $N$, we will take advantage of two properties. First,
\begin{equation}
\lim_{N\rightarrow \infty}\erf{\left(\sqrt{\frac{n}{2}}\frac{N}{wc}\right)} = 1
\end{equation}
and second,
\begin{equation}
\lim_{N\rightarrow \infty}\left(1+\frac{x}{N}\right)^{N} = \exp(x)
\end{equation}
Combining both results in
\begin{equation}
\lim_{N\rightarrow \infty}c(k) = C(k) = \exp\left(\sqrt{\frac{\pi}{2}}wc\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!\sqrt{n}}\Phi_{0}^{n} \right)
\end{equation}
Now we explore how the intensity probability distribution changes when considering higher dimensions. For example, in a realistic experiment a confocal laser spot is often described as a three-dimensional Gaussian intensity distribution:
\begin{equation}
\Phi(x,y,z) = \Phi_{0}\exp{\left(-\frac{2(x^{2}+y^{2})}{w_{xy}^2}\right)}\exp{\left(-\frac{2z^{2}}{w_{z}^2}\right)}
\end{equation}
Assuming that we calculate the expectation value of $\Phi^{n}$ in a box of volume $L^{3}$ for one fluorescent particle, we get
\begin{equation}
E[\Phi^{n}] = \left(\frac{\pi}{2}\right)^{\frac{3}{2}}\frac{w_{xy}^{2}w_{z}}{L^{3}n^{\frac{3}{2}}}\Phi_{0}^{n} \erf{\left(\sqrt{\frac{n}{2}}\frac{L}{w_{xy}}\right)}^{2}\erf{\left(\sqrt{\frac{n}{2}}\frac{L}{w_{z}}\right)}
\end{equation}
which allows us to express the characteristic function as (using $c=N/L^{3}$)
\begin{equation}
c(k) = 1+\left(\frac{\pi}{2}\right)^{\frac{3}{2}}\frac{cw_{xy}^{2}w_{z}}{N}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!n^{\frac{3}{2}}}\Phi_{0}^{n} \erf{\left(\sqrt{\frac{n}{2}}\frac{1}{w_{xy}}\left(\frac{N}{c}\right)^{\frac{1}{3}}\right)}^{2}\erf{\left(\sqrt{\frac{n}{2}}\frac{1}{w_{z}}\left(\frac{N}{c}\right)^{\frac{1}{3}}\right)}
\end{equation}
As before, we take the limit $N\rightarrow \infty$:
\begin{equation}
\lim_{N\rightarrow \infty}c(k) = C(k) = \exp\left(\left(\frac{\pi}{2}\right)^{\frac{3}{2}}cw_{xy}^{2}w_{z}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!n^{\frac{3}{2}}}\Phi_{0}^{n}\right)
\end{equation}
Similarly, for 2D we get
\begin{equation}
\lim_{N\rightarrow \infty}c(k) = C(k) = \exp\left(\frac{\pi}{2}cw_{xy}^{2}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!n}\Phi_{0}^{n}\right)
\end{equation}
This form can be expressed in terms of the special functions $Si$ and $Ci$, defined as
\begin{equation}
\begin{aligned}
Si(k)&=\int_{0}^{k}\frac{\sin t}{t}dt = \sum_{n=0}^{\infty}\frac{(-1)^{n}k^{2n+1}}{(2n+1)!(2n+1)}\\
Ci(k)&=\gamma+\ln k +\int_{0}^{k}\frac{\cos t - 1}{t}dt=\gamma + \ln k + \sum_{n=1}^{\infty}\frac{(-1)^{n}k^{2n}}{(2n)!(2n)}
\end{aligned}
\end{equation}
resulting in
\begin{equation}
\begin{aligned}
C(k) &= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\left(Ci(k\Phi_{0})-\gamma-\ln(k\Phi_{0})+iSi(k\Phi_{0})\right)\right)\\
&=(k\Phi_{0})^{-\frac{\pi}{2}cw_{xy}^{2}}\exp\left(-\frac{\pi}{2}cw_{xy}^{2}\gamma\right)\exp\left(\frac{\pi}{2}cw_{xy}^{2}\left(Ci(k\Phi_{0})+iSi(k\Phi_{0})\right)\right)
\end{aligned}
\end{equation}
For large $k$, $C(k)$ decays like a power law,
\begin{equation}
C(k) \rightarrow (k\Phi_{0})^{-\frac{\pi}{2}cw_{xy}^{2}}\exp\left(-\frac{\pi}{2}cw_{xy}^{2}\gamma\right)\exp\left(i\frac{\pi^{2}}{4}cw_{xy}^{2}\right) \quad (k\rightarrow\infty),
\end{equation}
since the limiting values for $k\rightarrow \infty$ of $Ci(k\Phi_{0})$ and $Si(k\Phi_{0})$ are $0$ and $\pi/2$, respectively.
%\begin{equation}
% C(k) = \exp\left(\frac{\pi}{2}cw_{xy}^{2}w_{z}\int_{0}^{1}\frac{\exp(ik\Phi_{0}y)-1}{y}dy \right)
%\end{equation}
\subsection{Integral representations of the exponent of $C(k)$}
Here we can take advantage of the integral representation of the gamma function,
\begin{equation}
\int_{0}^{\infty}t^{b}\exp(-nt)dt = \frac{\Gamma (b+1)}{n^{b+1}}
\end{equation}
For the 1-dimensional case, we are going to use
\begin{equation}
\frac{1}{\sqrt{n}}=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}dt\frac{\exp(-tn)}{\sqrt{t}}
\end{equation}
so that
\begin{equation}
\begin{aligned}
C(k) &= \exp\left(\frac{1}{\sqrt{2}}wc\int_{0}^{\infty}dt\frac{1}{\sqrt{t}}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!}\Phi_{0}^{n}\exp(-tn) \right)\\
&= \exp\left(\frac{1}{\sqrt{2}}wc\int_{0}^{\infty}dt\frac{1}{\sqrt{t}}(\exp(ik\Phi_{0}\exp(-t))-1) \right)\\
&= \exp\left(\frac{1}{\sqrt{2}}wc\int_{0}^{\infty}dt\frac{\cos(k\Phi_{0}\exp(-t))-1+i\sin(k\Phi_{0}\exp(-t))}{\sqrt{t}} \right)
\end{aligned}
\end{equation}
Using the variable transform $y=\exp(-t)$ we find
\begin{equation}
C(k) = \exp\left(\frac{1}{\sqrt{2}}wc\int_{0}^{1}dy\frac{\exp(ik\Phi_{0}y)-1}{y\sqrt{-\ln y}}\right)
\end{equation}
Similarly, in the 2-dimensional case, we find that
\begin{equation}
\begin{aligned}
C(k) &= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!n}\Phi_{0}^{n}\right)\\
&= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\int_{0}^{\infty}dt(\exp(ik\Phi_{0}\exp(-t))-1) \right)\\
&= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\int_{0}^{\infty}dt\left(\cos(k\Phi_{0}\exp(-t))-1+i\sin(k\Phi_{0}\exp(-t))\right) \right)
\end{aligned}
\end{equation}
After the variable transform $y=\exp(-t)$ we find
\begin{equation}
\begin{aligned}
C(k) &= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\int_{0}^{1}dy\frac{\exp(ik\Phi_{0}y)-1}{y}\right)\\
&= \exp\left(\frac{\pi}{2}cw_{xy}^{2}\int_{0}^{k\Phi_{0}}dx\frac{\exp(ix)-1}{x}\right)\\
&= \exp\left(-\frac{\pi}{2}cw_{xy}^{2}\,Ein(-ik\Phi_{0})\right)
\end{aligned}
\end{equation}
where $Ein(x)$ is related to the exponential integral through $E_{1}(x) = -\gamma -\ln x +Ein(x)$, which clarifies the connection to the trigonometric integrals in the previous section.
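As a numerical aside (not part of the derivation above), the two-dimensional result can be checked by Monte Carlo sampling: Poisson-distributed emitters are placed uniformly in a box much larger than the detection profile, the instantaneous total intensity is summed, and the empirical characteristic function is compared with $C(k)$. The Python sketch below uses arbitrary illustrative values for $c$, $w_{xy}$, and $\Phi_{0}$.
\begin{verbatim}
# Hedged sketch: Monte Carlo check of the 2-D characteristic function
# C(k) = exp((pi/2) c w^2 sum_n (i k Phi0)^n / (n! n)).
# All parameter values are illustrative, not taken from the text.
import numpy as np
from scipy.special import factorial

rng = np.random.default_rng(1)
c, w, Phi0, L_box = 0.5, 1.0, 1.0, 20.0  # concentration, waist, brightness, box side
n_snapshots = 50_000

intensities = np.empty(n_snapshots)
for i in range(n_snapshots):
    n = rng.poisson(c * L_box**2)                # number of emitters in the box
    x = rng.uniform(-L_box / 2, L_box / 2, n)
    y = rng.uniform(-L_box / 2, L_box / 2, n)
    # instantaneous total intensity: sum of single-emitter contributions
    intensities[i] = Phi0 * np.exp(-2 * (x**2 + y**2) / w**2).sum()

k_values = np.array([0.5, 1.0, 2.0])
empirical = np.exp(1j * np.outer(k_values, intensities)).mean(axis=1)

n = np.arange(1, 80)
analytic = np.exp([(np.pi / 2) * c * w**2
                   * np.sum((1j * k * Phi0)**n / (factorial(n) * n))
                   for k in k_values])
print(np.round(empirical, 2))   # should agree with the line below
print(np.round(analytic, 2))
\end{verbatim}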
Similarly, in the 3-dimensional case, we can get \begin{equation} \begin{aligned} C(k) &= \exp\left(\frac{\pi}{\sqrt{2}}cw_{xy}^{2}w_{z}\int_{0}^{\infty}dt\sqrt{t}\sum_{n=1}^{\infty}\frac{(ik)^{n}}{n!}\Phi_{0}^{n}\exp(-tn)\right)\\ &= \exp\left(\frac{\pi}{\sqrt{2}}cw_{xy}^{2}w_{z}\int_{0}^{\infty}dt\sqrt{t}(\exp(ik\Phi_{0}\exp(-t))-1) \right) \end{aligned} \end{equation} and after a variable transform $y=\exp(-t)$ we find \begin{equation} C(k) = \exp\left(\frac{\pi}{\sqrt{2}}cw_{xy}^{2}w_{z}\int_{0}^{1}\frac{\sqrt{-\ln{y}}}{y}(\exp(ik\Phi_{0}y)-1)dy \right) \end{equation} \begin{acknowledgments} We wish to acknowledge funding by the NSF (DMR Award 1106044), the NIH (5R21DA03846702), and discussions with Alexei Borodin, Nikita Nikrasov, Eugene and Joachim R\"adler. \end{acknowledgments} \end{document}
\chapter{Used Components}\label{ch:usedComponents}
\section{Sensors}\label{sec:sensors}
The sensors are only described as far as necessary to reproduce the results.
\subsection{Thermal Camera}\label{ssec:HWthermalCamera}
The thermal camera used is a PI400 from Optris. The PI400 has an uncooled sensor with a pixel pitch of $17\micro\meter \times 17\micro\meter$ and an optical resolution of $382 \times 288$ pixels. In the spectral range of \SI{8}{\micro\meter} $-$ \SI{14}{\micro\meter} it covers a temperature range of either \SI{-20}{\celsius} $-$ \SI{100}{\celsius}, \SI{0}{\celsius} $-$ \SI{250}{\celsius}, or \SI{150}{\celsius} $-$ \SI{900}{\celsius}, depending on the temperature in the field of view~\cite{PI400}. Besides the camera, Optris also delivers a ROS node and provides the source code on GitHub~\cite{OptrisROSNode}. The driver and \ac{SDK} can be downloaded from the Optris homepage.
\subsection{Stereo Camera}\label{ssec:HWstereoCamera}
The stereo camera is the ZED from StereoLabs. The camera has a baseline of $120\milli\meter$ and provides up to $100$ depth images per second. Each depth image can have a resolution of up to $1920 \times 1080$ pixels with an accuracy of $2\percent$ at distances smaller than $3\meter$ and $4\percent$ for distances smaller than $15\meter$~\cite{ZED}. The \ac{SDK} is provided on the StereoLabs homepage and the ROS node can be found on GitHub~\cite{ZEDROSNode}.
\subsection{LiDAR}\label{ssec:HWLiDAR}
The \ac{LiDAR} used is a Hokuyo UTM-30LX. The sensor provides a field of view of $270\degree$ and a detection range from $0.1\meter$ to $30\meter$. The specified angular resolution is $0.25\degree$~\cite{UTM-30LX}. The software is provided in the same manner as for the previous sensors: the \ac{SDK} is provided as a download on the producer's homepage and the ROS node is published on GitHub~\cite{URG_node}.
\subsection{IMU}\label{ssec:HWIMU}
The IMU is an Xsens MTi-G-710. Xsens equips the MTi-G-710 with several sensors: a gyroscope, a magnetometer, a barometer, a GNSS receiver, and an accelerometer. All measurements are published at a frequency of up to $2\kilo\hertz$~\cite{IMU}. The ROS node is not published in the usual way; its source code is integrated into the \ac{SDK}, which can be downloaded from the Xsens homepage.
\section{ROS}\label{sec:ros}
\ac{ROS} was initially developed as part of the Stanford AI Robot project as a prototype of a flexible and dynamic framework for personal robots. It was then extended by Willow Garage, a robotics incubator. The open-source license enabled a wide range of developers and researchers to contribute, which boosted the rise of \ac{ROS}. Another milestone was the handover from Willow Garage to the newly founded \ac{OSRF} in 2012~\cite{rosHistory}. Nowadays it is used in a wide range of projects, from hobbyist to scientific, and industry also uses \ac{ROS} to develop different kinds of robotic solutions. Even though \ac{ROS} has the term ``Operating System'' in its name, it acts on top of a classic operating system such as GNU/Linux as middleware between sensors and high-level applications. The supported, most used, and best tested distribution for \ac{ROS} is Ubuntu~\cite{rosInstallationOS}. The framework can be installed with different sets of packages, where the base installation only provides the necessary parts for building, packaging, and communication.
The desktop installation includes the base packages plus individual tools to visualize the system, and the desktop-full installation provides additional software for simulation and perception. The \ac{OSRF} also provides additional software which can be installed on demand. The different installation options make it possible to keep the installation relatively small and to run \ac{ROS} on small systems with low capacity~\cite{rosInstallations}. In the following, \ac{ROS} refers to the desktop-full installation; if additional packages are needed, they will be named.
\subsection{Communication}\label{ssec:communication}
The individual nodes work together as a peer-to-peer network which is managed by one special node called the ROS master. The communication is organized in topics, each of which carries a specific type of information. A node can publish to a topic, which means it provides information, and subscribe to a topic to receive information. The number of subscribed and published topics is not limited, so it is possible to receive information from, e.g., multiple sensors and process it.
\begin{figure}[ht]
\centering
\includegraphics[width=0.30\textwidth]{img/ros_master/ros_master1.png}
\includegraphics[width=0.30\textwidth]{img/ros_master/ros_master2.png}
\includegraphics[width=0.30\textwidth]{img/ros_master/ros_master3.png}
\caption{The process by which a camera node informs the ROS master that it advertises information on the images topic, and another image viewer node subscribes to the same topic, which leads to a peer-to-peer connection over the images topic.}\label{fig:ros_master}
\end{figure}
Advertisements and subscriptions are managed by the ROS master, which holds the information needed to establish the peer-to-peer connection. The first step in establishing a successful connection is notifying the master about the new topic. At that moment no data is sent, because the topic has no subscribers. To use the information, another node needs to subscribe to the topic by informing the master about the subscription. The master then provides the information needed to establish a peer-to-peer connection between the nodes, and the data flows directly from one node to the other. Figure~\ref{fig:ros_master} visualizes the process with an example of how a camera node and an image viewer node establish their connection~\cite{rosMaster}. The exchanged information is structured in messages with predefined structures. ROS provides messages for the most common use cases, but they can also be custom made. Each topic is strongly tied to a message type, even though the type is not checked by the ROS master.
\begin{wrapfigure}[10]{O}{0.3\textwidth}
\centering
\includegraphics[width=0.3\textwidth]{img/ros_master/service.png}
\caption{A service invocation is not related to a topic and is usually not a constant flow of messages.}
\label{fig:service_invocation}
\end{wrapfigure}
The publish/subscribe model supports a very flexible way to communicate and easily scales to large many-to-many configurations, but it is not appropriate for request/reply situations. In these situations, services can be used. They are defined by a pair of messages, one for the request and one for the reply. Calling a service is similar to connecting to a topic, except that the connection only lasts from the request to the reply. Nodes can also establish a persistent connection to a service, which reduces robustness but increases performance.
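To make the publish\slash subscribe mechanism described above more concrete, the following minimal sketch shows a publisher and a subscriber written with the Python client library \texttt{rospy}. The node and topic names (\texttt{camera}, \texttt{image\_viewer}, \texttt{images}) and the use of \texttt{std\_msgs/String} as a stand-in for a real image message are illustrative choices and do not correspond to the nodes used in this work.
\begin{verbatim}
# publisher.py -- minimal illustrative sketch, not a node used in this work
import rospy
from std_msgs.msg import String   # stand-in for a real image message type

rospy.init_node('camera')
pub = rospy.Publisher('images', String, queue_size=10)  # advertise the topic
rate = rospy.Rate(10)                                   # publish at 10 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='image data'))  # sent peer-to-peer to subscribers
    rate.sleep()

# subscriber.py -- receives the messages over the peer-to-peer connection
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo('received: %s', msg.data)

rospy.init_node('image_viewer')
rospy.Subscriber('images', String, callback)  # the master brokers the connection
rospy.spin()                                  # keep the node alive, run callbacks
\end{verbatim}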
\subsection{Visualization}\label{ssec:visualization}
In ROS, the visualization of data is realized in the tool named \texttt{RVIZ}. To the user, \texttt{RVIZ} acts as a host application for different plugins, where each message type has its own plugin. To ROS, \texttt{RVIZ} acts as a normal node, so the individual plugins need to be configured with the topics to subscribe to and with how the visualization should look. \texttt{RVIZ} provides a number of plugins for the standard messages, but one can also develop custom plugins to visualize custom messages. The visualization always happens with respect to one coordinate frame of the data. To calculate the position of one measurement in the frame of another measurement, \texttt{RVIZ} performs a lookup in the \texttt{/tf} (transformation) topic, which is dedicated to transformations between different frames.
\subsection{Data Recording}\label{ssec:dataRecording}
The ability to record and play back data is a crucial part of ROS. Since all available data in ROS is exchanged via topics, one just needs to subscribe to the topics which hold the data that should be recorded. The typical file format to save the recordings is a bag. A bag is created with a tool like \texttt{rosbag}, which subscribes to the desired topics and writes the messages into the bag file as they arrive. After the recording is finished, a bag file can be played back with the same tool. To the rest of the ROS network it appears as if the messages were published by the original nodes, which makes handling very easy. To make writing to and reading from bag files efficient, the messages are not saved in their deserialized form but in the same representation as in the network transport layer. To be able to inspect \ac{ROS} bags without installing \ac{ROS}, a programmatic API is provided to iterate over the messages in a \ac{ROS} bag~\cite{rosBag}; a short usage sketch is given at the end of this chapter.
\section{MATLAB}\label{sec:matlab}
MATLAB stands for Matrix Laboratory and is a computing environment with its own programming language. It was originally designed by Cleve Moler to give his students easy access to LINPACK and EISPACK, which are Fortran packages for solving eigensystem and linear equation problems. Later it was rewritten in C and extended by Jack Little and Steve Bangert; the main extensions were functions, toolboxes, and graphics. Today it is developed and distributed by MathWorks, which was founded by Moler, Little, and Bangert. Over time, the number of toolboxes, tools, and features increased, as did the number of users at universities. Together with Simulink, another product from MathWorks, MATLAB is used at over 5000 universities and can be termed the engineer's language~\cite{introductionMatlab}.
\subsection{ROS Toolbox}\label{ssec:rosToolbox}
The \ac{ROS} toolbox is the interface to \ac{ROS} for MATLAB and Simulink; it provides the ability to create nodes and to process messages from a \ac{ROS} network. It also provides functions to read from \ac{ROS} bags and to handle standard message types.
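As an illustration of the programmatic bag API mentioned in Section~\ref{ssec:dataRecording}, the following minimal Python sketch iterates over the messages stored in a bag file. The file name and topic are placeholders and do not refer to the recordings used in this work.
\begin{verbatim}
# read_bag.py -- minimal illustrative sketch of the rosbag API;
# the bag file name and topic are placeholders
import rosbag

bag = rosbag.Bag('recording.bag')
for topic, msg, t in bag.read_messages(topics=['/imu/data']):
    # topic: string, msg: deserialized message, t: time stamp of recording
    print(t.to_sec(), topic, msg)
bag.close()
\end{verbatim}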
\chapter{Conclusion}\label{chpt:conclusion}
In summary, the main contribution of the work underlying this thesis is \gls{PyExaFMM}: a three-dimensional \gls{KIFMM} simulation library with some parallel features. The software has been designed to be testable and extensible; however, it currently lags some way behind state-of-the-art implementations in terms of speed \cite{Malhotra:2015:CCP, exafmm}. Furthermore, from Table~\ref{table:3_1_jit}, we see that some current optimisations, namely \gls{JIT} compilation for some Numpy-based functions, have been naively applied.

To bring \gls{PyExaFMM} in line with state-of-the-art \gls{KIFMM} software, extensions which fully take advantage of modern computing hardware will have to be implemented. Modern heterogeneous computers have access to multiple multi-core \gls{CPU} and \gls{GPU} units, with multiple levels of memory cache, and vectorisation available at the processor level \cite{Malhotra:2015:CCP}. The current optimisations within \gls{PyExaFMM} do not take advantage of shared or distributed memory parallelism. As mentioned in Chapter \ref{chpt:1_introduction}, Section \ref{sec:1_1_fmm_overview}, the near-field \gls{P2P} calculations can be transferred to \gls{GPU}s for acceleration \cite{Hwu:2011:MKP}; alternatively, as discussed above, fast Newton iterations and \gls{AVX} vectorisation at the \gls{CPU} level are used to accelerate the \gls{P2P} calculations in both major \gls{KIFMM} implementations \cite{Malhotra:2015:CCP, exafmm}. These optimisations depend on the available hardware; however, they share a common approach of distributing a single \gls{P2P} instruction across multiple particle interactions, following the \gls{SIMD} paradigm.

The calculation of the far-field \gls{M2L} operator matrices can also be accelerated by the above techniques. Furthermore, randomised \gls{SVD} compression \cite{Erichson:2019:JOSS, Halko:2011:SIAM} can be implemented to also take advantage of shared memory parallelism, as described in Chapter \ref{chpt:2_strategy_for_practical_implementation}, Section \ref{sec:2_4_svd_compression}, further reducing the cost of computing low-rank approximations of the \gls{M2L} operator matrices (a brief sketch of the randomised \gls{SVD} is given at the end of this chapter).

The mixed performance of \gls{JIT} compilation for the construction of trees leaves a lot of room for further optimisation in \gls{PyExaFMM}. Specifically, state-of-the-art implementations construct adaptive trees in parallel, taking advantage of the distributed memory programming paradigm with \gls{MPI} \cite{Malhotra:2015:CCP}. PVFMM, for example, chunks particle data across processors, constructing subtrees in parallel, and uses \gls{MPI} to pass multipole and local expansion coefficients to other processes as required during the main \gls{FMM} loop.

In addition to the above optimisations, the current \gls{PyExaFMM} codebase can be further sanitised. Specifically, integration tests that exercise the way in which modules interact should be added on top of the current unit test suite. Additionally, it should be a priority to perform more detailed code profiling to clearly identify the most significant memory and \gls{CPU} bottlenecks.

Despite its limitations, \gls{PyExaFMM} achieves the complexity bound of the \gls{FMM} algorithm. Furthermore, it represents a significant first step towards the goal of an open-source Python \gls{KIFMM} implementation which sacrifices as little computational performance as possible in comparison to major compiled-language implementations.
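As an illustration of the randomised \gls{SVD} compression mentioned above, the following sketch implements the basic randomised range-finder algorithm of \cite{Halko:2011:SIAM} with NumPy. It is a sketch of the technique only, not the \gls{PyExaFMM} implementation, and the example matrix, rank, and parameter choices are arbitrary.
\begin{verbatim}
# Illustrative sketch of a randomised SVD (Halko et al.); not PyExaFMM code.
import numpy as np

def randomized_svd(A, rank, n_oversamples=10, n_iter=2, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversamples, min(m, n))
    # Random projection captures the dominant column space of A
    Y = A @ rng.standard_normal((n, k))
    for _ in range(n_iter):          # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)           # approximate orthonormal basis for range(A)
    # Deterministic SVD of the much smaller projected matrix
    U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_hat)[:, :rank], s[:rank], Vt[:rank, :]

# Example: a smooth kernel-like matrix has rapidly decaying singular values
x = np.linspace(0.0, 1.0, 500)
A = 1.0 / (1.0 + (x[:, None] - x[None, :] + 2.0) ** 2)
U, s, Vt = randomized_svd(A, rank=10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small relative error
\end{verbatim}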
\documentclass{article} \usepackage[letterpaper]{geometry} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{csquotes} \usepackage{latexsym, amsmath,amssymb} \usepackage{graphicx} \usepackage{subcaption} \usepackage{booktabs,multicol} \usepackage[table]{xcolor} \usepackage{comment} \usepackage[normalem]{ulem} % easy author affiliations \usepackage{authblk} \renewcommand\Affilfont{\small} \usepackage[hyphens]{url} \usepackage[breaklinks=true,linkcolor=blue, citecolor=blue, urlcolor=blue, colorlinks=true]{hyperref} \usepackage[ backend=biber, style=numeric, citestyle=numeric-comp, sorting=none, bibencoding=UTF-8, giveninits=true, maxbibnames=1000, ]{biblatex} \addbibresource{references.bib} \hyphenation{NumFOCUS} \usepackage[textsize=footnotesize,textwidth=2cm]{todonotes} \usepackage[xcolor]{changebar} % *** Be very precise and careful about including whitespace and punctuation in your edits *** \newcommand{\add}[1]{{\sloppy\cbcolor{teal}\textcolor{teal}{\cbstart {#1}\cbend}}} % add \newcommand{\delete}[1]{\sloppy\cbcolor{red}\textcolor{red}{\cbdelete \sout{#1}}} \newcommand{\addnoul}[1]{{\textcolor{teal}{#1}}} % add with no underline \newcommand{\deletenoso}[1]{{\textcolor{red}{#1}}} % delete with no strikeout %% For the final version, use these four commands instead % \renewcommand{\add}[1]{#1} % \renewcommand{\addnoul}[1]{#1} % \renewcommand{\delete}[1]{} % \renewcommand{\deletenoso}[1]{} % remove for final %\usepackage{lineno} \newcommand\joss{\textit{JOSS}} \title{Journal of Open Source Software (JOSS): design and first-year review} \author[1]{Arfon M.~Smith\thanks{Corresponding author, \href{mailto:[email protected]}{[email protected]}}} \author[2]{Kyle E.~Niemeyer} \author[3]{Daniel S.~Katz} \author[4]{Lorena A.~Barba} \author[5]{George~Githinji} \author[6]{Melissa Gymrek} \author[7]{Kathryn D.~Huff} \author[8]{Christopher R.~Madan} \author[9]{Abigail Cabunoc Mayes} \author[10]{Kevin M.~Moerman} \author[11]{Pjotr Prins} \author[12]{Karthik Ram} \author[13]{Ariel Rokem} \author[14]{Tracy K.~Teal} \author[15]{Roman Valls Guimera} \author[13]{Jacob~T.~Vanderplas} \date{June 2017} \affil[1]{Data Science Mission Office, Space Telescope Science Institute, Baltimore, MD, USA} \affil[2]{School of Mechanical, Industrial, and Manufacturing Engineering, Oregon State University, Corvallis, OR, USA} \affil[3]{National Center for Supercomputing Applications \& Department of Computer Science \& Department of Electrical and Computer Engineering \& School of Information Sciences, University of Illinois at Urbana--Champaign, Urbana, IL, USA} \affil[4]{Department of Mechanical and Aerospace Engineering, George Washington University, Washington, DC, USA} \affil[5]{KEMRI--Wellcome Trust Research Programme, Kilifi, Kenya} \affil[6]{Departments of Medicine \& Computer Science and Engineering, University of California, San Diego, CA, USA} \affil[7]{Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana--Champaign, Urbana, IL, USA} \affil[8]{School of Psychology, University of Nottingham, Nottingham, United Kingdom} \affil[9]{Mozilla Foundation, Toronto, Ontario, Canada} \affil[10]{Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA \& The University of Dublin, Trinity College, Dublin, Ireland} \affil[11]{University of Tennessee Health Science Center, Memphis, TN, USA \& University Medical Centre Utrecht, Utrecht, The Netherlands} \affil[12]{Berkeley Institute for Data Science, University of California, 
Berkeley, Berkeley, CA, USA} \affil[13]{eScience Institute, University of Washington, Seattle, WA, USA} \affil[14]{Data Carpentry, Davis, CA, USA \& Michigan State University, East Lansing, MI, USA} \affil[15]{University of Melbourne Centre for Cancer Research, Melbourne, Australia} \begin{document} \maketitle % remove for final %\linenumbers \begin{abstract} This article describes the motivation, design, and progress of the Journal of Open Source Software (\joss{}). \joss{} is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, \joss{} addresses the dearth of rewards for key contributions to science made in the form of software. \joss{} publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A \joss{} article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a \joss{} submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, \joss{} does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, \joss{} gives it a digital object identifier (DOI), deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their \joss{} article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, \joss{} published 111 articles, with more than 40 additional articles currently under review. \joss{} is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative (OSI). \end{abstract} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} Modern scientific research produces many outputs beyond traditional articles and books. Among these, research software is critically important for a broad spectrum of fields. Current practices for publishing and citation do not, however, acknowledge software as a first-class research output. This deficiency means that researchers who develop software face critical career barriers. The \textit{Journal of Open Source Software} (\joss{}) was founded in May 2016 to offer a solution within the existing publishing mechanisms of science. It is a developer-friendly, free and open-access, peer-reviewed journal for research software packages. \joss{} recently passed its first anniversary, having published more than a hundred articles. This article discusses the motivation for creating a new software journal, delineates the editorial and review process, and summarizes the journal's first year of operation via submission statistics. The sixteen authors of this article are the members of the \joss{} Editorial Board at the end of its first year (May 2017). 
Arfon Smith is the founding editor-in-chief, and the founding editors are Lorena A.~Barba, Kathryn Huff, Daniel Katz, Christopher Madan, Abigail Cabunoc Mayes, Kevin Moerman, Kyle Niemeyer, Karthik Ram, Tracy Teal, and Jake Vanderplas. Five new editors joined in the first year to handle areas not well covered by the original editors, and to help manage the large and growing number of submissions. They are George Githinji, Melissa Gymrek, Pjotr Prins, Ariel Rokem, and Roman Valls Guimera. The \joss{} editors are firm supporters of open-source software for research, with extensive knowledge of the practices and ethics of open source. This knowledge is reflected in the \joss{} submission system, peer-review process, and infrastructure. The journal offers a familiar environment for developers and authors to interact with reviewers and editors, leading to a citable published work: a software article. With a Crossref digital object identifier (DOI), the article is able to collect citations, empowering the developers/authors to gain career credit for their work. \joss{} thus fills a pressing need for computational researchers to advance professionally, while promoting higher quality software for science. \section{Background and motivation}\label{background} %\subsection{The importance of software for science} A 2014 study of UK Russell Group Universities~\cite{Hettrick} reports that $\sim$90\% of academics surveyed said they use software in their research, while more than 70\% said their research would be impractical without it. About half of these UK academics said they develop their own software while in the course of doing research. Similarly, a 2017 survey of members of the US National Postdoctoral Association found that 95\% used research software, and 63\% said their research would be impractical without it~\cite{US-PDA-survey}. Despite being a critical part of modern research, software lacks support across the scholarly ecosystem for its publication, acknowledgement, and citation~\cite{Niemeyer:2016sc}. Academic publishing has not changed substantially since its inception. Science, engineering, and many other academic fields still view research articles as the key indicator of research productivity, with research grants being another important indicator. Yet, the research article is inadequate to fully describe modern, data-intensive, computational research. \joss{} focuses on research software and its place in the scholarly publishing ecosystem. \subsection{Why publish software?} Most academic fields still rely on a one-dimensional credit model where academic articles and their associated citations are the dominant factor in the success of a researcher's career. Software creators, in order to increase the likelihood of receiving career credit for their work, often choose to publish ``software articles'' that act as placeholder publications pointing to their software. At the same time, recent years have seen a push for sharing open research software~\cite{Barnes:2010ut,Vandewalle:2012cl,Morin:2012hz,Ince:2012iy,NatureMethodsEditorialBoard:2014gu,Prins:natbio}. Beyond career-credit arguments for software creators, publishing research software enriches the scholarly record. Buckheit and Donoho paraphrased Jon Claerbout, a pioneer of reproducible research, as saying: ``An article about a computational result is advertising, not scholarship. The actual scholarship is the full software environment, code and data, that produced the result.''~\cite{Buckheit1995}. 
The argument that articles about computational science are not satisfactory descriptions of the work, needing to be supplemented by code and data, is more than twenty years old! Yet, despite the significance of software in modern research, documenting its use and including it in the scholarly ecosystem presents numerous challenges. \subsection{Challenges of publishing software} The conventional publishing mechanism of science is the research article, and a researcher's career progression hinges on collecting citations for published works. Unfortunately, software citation~\cite{Smith2016} is in its infancy (as is data citation~\cite{data-citation,10.7717/peerj-cs.1}). Publishing the software itself and receiving citation credit for it may be a better long-term solution, but this is still impractical. Even when software (and data) are published so that they can be cited, we do not have a standard culture of peer review for them. This leads many developers today to publish software articles. The developer's next dilemma is where to publish, given the research content, novelty, length and other features of a software article. % Since 2012, Neil Chue Hong has maintained a growing list of journals that accept software articles~\cite{software-papers-list}. He includes both generalist journals, accepting software articles from a variety of fields, and domain-specific journals, accepting both research and software articles in a given field. % For many journals, particularly the domain-specific ones, a software article must include novel results to justify publication. From the developer's point of view, writing a software article can involve a great deal of extra work. Good software includes documentation for both users and developers that is sufficient to make it understandable. A software article may contain much of the same content, merely in a different format, and developers may not find value in rewriting their documentation in a manner \delete{that is perhaps} less useful to their users and collaborators. These issues may lead developers to shun the idea of software articles and prefer to publish the software itself. Yet, software citation is not common and the mostly one-dimensional credit model of academia (based on article citations) means that publishing software often does not ``count'' for career progression~\cite{Smith2016,Niemeyer:2016sc}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The Journal of Open Source Software} To tackle the challenges mentioned above, the \textit{Journal of Open Source Software} (\joss{}) launched in May 2016~\cite{arfondotorg} with the goal of drastically reducing the overhead of publishing software articles. \joss{} offers developers a venue to publish their research software\delete{, dressed up as software articles} \add{with brief high-level articles}, thus enabling citation credit for their work. In this section we describe the goals and principles, infrastructure, and business model of \joss{}, and compare it with other software journals. \subsection{Goals and principles} \joss{} articles are deliberately short and only include an abstract describing the high-level functionality of the software, a list of the authors of the software (with their affiliations), a list of key references, and a link to the software archive and software repository. 
Articles are not allowed to include other content often found in software articles, such as descriptions of the API (application programming interface) and novel research results obtained using the software. The software API should already be described in the software documentation, and domain research results do not belong in \joss{}---these should be published in a domain journal. The \joss{} design and %review process implementation are based on the following principles: \begin{itemize} \item Other than their short length, \joss{} articles are conventional articles in every other sense: the journal has an ISSN, articles receive Crossref DOIs with high-quality submission metadata, and articles are appropriately archived. \item Because software articles are ``advertising'' and simply pointers to the \textit{actual} scholarship (the software), short abstract-length submissions are sufficient for these ``advertisements.'' \item Software is a core product of research and therefore the software itself should be archived appropriately when submitted to and reviewed in \joss{}. \item Code review, documentation, and contributing guidelines are important for open-source software and should be part of any review. In \joss{}, they are the focus of peer review. (While a range of other journals publish software, with various peer-review processes, the focus of the review is usually the submitted article and reviewers might not even look at the code.) The \joss{} review process itself, described in \S\ref{thereview}, was based on the on-boarding checklist for projects joining the rOpenSci collaboration~\cite{ropensci}. %\todo{not sure the next part is a design principle - it seems to fit better in the list below } In addition, to promote the reuse of software, all \joss{} submissions must have an official open source license (one recognized by the Open Source Initiative~\cite{OSI}). \end{itemize} Acceptable \joss{} submissions also need to meet the following criteria: \begin{itemize} \item The software must be open source by the Open Source Initiative (OSI) definition (\href{https://opensource.org}{opensource.org}). \item The software must have a research application. \item The submitter should be a major contributor to the software they are submitting. \item The software should be a significant new contribution to the available open-source software that either enables some new research challenge(s) to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler.) \item The software should be feature-complete, i.e., it cannot be a partial solution. \end{itemize} \subsection{How \joss{} works}\label{howitworks} \joss{} is designed as a small collection of open-source tools that leverage existing infrastructure such as GitHub, Zenodo, and Figshare. A goal when building the journal was to minimize the development of new tools where possible. \subsubsection*{The \joss{} web application and submission tool} The \joss{} web application and submission tool is hosted at \href{http://joss.theoj.org}{http://joss.theoj.org}. It is a simple Ruby on Rails web application~\cite{joss-site} that lists accepted articles, provides the article submission form (see Figure~\ref{fig:submission}), and hosts journal documentation such as author submission guidelines. This application also automatically creates the review issue on GitHub once a submission has been pre-reviewed by an editor and accepted to start peer review in \joss{}. 
\begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{submission.png} \caption{The \joss{} submission page. A minimal amount of information is required for new submissions. \label{fig:submission}} \end{figure} \subsubsection*{Open peer review on GitHub} \joss{} conducts reviews on the \texttt{joss-reviews} GitHub repository~\cite{joss-reviews}. Review of a submission begins by the opening of a new GitHub issue, where the editor-in-chief assigns an editor, the editor assigns a reviewer, and interactions between authors, reviewer(s), and editor proceed in the open. Figure~\ref{fig:review} shows an example of a recent review for the (accepted) \texttt{hdbscan} package~\cite{McInnes2017}. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{review.png} \caption{The \texttt{hdbscan} GitHub review issue. \label{fig:review}} \end{figure} \subsubsection*{Whedon and the Whedon-API} Many of the tasks associated with \joss{} reviews and editorial management are automated. A core RubyGem library named \texttt{Whedon}~\cite{whedon-gem} handles common tasks associated with managing the submitted manuscript, such as compiling the article (from its Markdown source) and creating Crossref metadata. An automated bot, \texttt{Whedon-API}~\cite{whedon-api}, handles other parts of the review process (such as assigning editors and reviewers based on editor input) and leverages the \texttt{Whedon} RubyGem library. For example, to assign the editor for a submission, one may type the following command in a comment box within the GitHub issue: \texttt{@whedon assign @danielskatz as editor}. Similarly, to assign a reviewer, one enters: \texttt{@whedon assign @zhaozhang as reviewer} (where the reviewer and editor GitHub handles identify them). The next section describes the review process in more detail. \subsection{Business model and content licensing} \joss{} is designed to run at minimal cost with volunteer labor from editors and reviewers. The following fixed costs are currently incurred: \begin{itemize} \item{Crossref membership: \$275. This is a yearly fixed cost for the \joss{} parent entity---\textit{Open Journals}---so that article DOIs can be registered with Crossref.} \item{Crossref article DOIs: \$1. This is a fixed cost per article.} \item{\joss{} web application hosting (currently with Heroku): \$19 per month} \end{itemize} Assuming a publication rate of 100 articles per year results in a core operating cost of $\sim$\$6 per article. With 200 articles per year---which seems possible for the second year---the cost drops to $\sim$\$3.50 per article: \begin{align}\label{costs} (\$275 + (\$1 \times 100) + (\$19 \times 12)) / 100 &= \$6.03 \\ (\$275 + (\$1 \times 200) + (\$19 \times 12)) / 200 &= \$3.51 \;. \end{align} Submitting authors retain copyright of \joss{} articles and accepted articles are published under a Creative Commons Attribution 4.0 International License~\cite{cc}. Any code snippets included in \joss{} articles are subject to the MIT license~\cite{mit} regardless of the license of the submitted software package under review, which itself must be licensed under an OSI-approved license (see \href{https://opensource.org/licenses/alphabetical}{opensource.org/licenses/alphabetical} for a complete list). \subsection{Comparison with other software journals} \label{comparison} A good number of journals now accept, review, and publish software articles~\cite{software-papers-list}, \add{which we group into two categories. 
The first category of journals include those similar to \joss{}, which do not focus on a specific domain and only consider submissions of software\slash software articles:} the \textit{Journal of Open Research Software} (\textit{JORS}, \href{http://openresearchsoftware.metajnl.com}{openresearchsoftware.metajnl.com}), \textit{SoftwareX} (\href{https://www.journals.elsevier.com/softwarex/}{journals.elsevier.com/softwarex/}), and now \joss{}. Both \textit{JORS}~\cite{jorsreview} and \textit{SoftwareX}~\cite{els-software} now review both the article text and the software. In \joss{}, the review process focuses mainly on the software and associated material (e.g., documentation) and less on the article text, which is intended to be a brief description of the software. The role and form of peer review also varies across journals. In \textit{SoftwareX} and \textit{JORS}, the goal of the review is both to decide if the article is acceptable for publication and to improve it iteratively through a non-public, editor-mediated interaction between the authors and the anonymous reviewers. In contrast, \joss{} has the goal of accepting most articles after improving them as needed, with the reviewers and authors communicating directly and publicly through GitHub issues. \add{ The second category includes domain-specific journals that either accept software articles as a special submission type or exclusively consider software articles targeted at the domain. For example, \textit{Collected Algorithms} (CALGO) is a long-running venue for reviewing and sharing mathematical algorithms associated with articles published in \textit{Transactions on Mathematical Software} and other ACM journals. However, CALGO authors must transfer copyright to ACM and software is not available under an open-source license---this contrasts with \joss{}, where authors retain copyright and software must be shared under an open-source license. \textit{Computer Physics Communications} and \textit{Geoscientific Model Development} publish full-length articles describing application software in computational physics and geoscience, respectively, where review primarily focuses on the article. Chue Hong maintains a list of journals in both categories~\cite{software-papers-list}. } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Peer review in \joss{}} \label{thereview} In this section, we illustrate the \joss{} submission and review process using a representative example, document the review criteria provided to authors and reviewers, and explain a fast-track option for already-reviewed rOpenSci contributions. \subsection{The \joss{} process} Figure~\ref{fig:submission-flow} shows a typical \joss{} submission and review process, described here in more detail using the \texttt{hdbscan} package~\cite{McInnes2017} as an example: \begin{enumerate} \item Leland McInnes submitted the \texttt{hdbscan} software and article to \joss{} on 26 February 2017 using the web application and submission tool. The article is a Markdown file named \texttt{paper.md}, visibly located in the software repository (here, and in many cases, placed together with auxiliary files in a \texttt{paper} directory). \item Following a routine check by a \joss{} administrator, a ``pre-review'' issue was created in the \texttt{joss-reviews} GitHub repository~\cite{hdbscan-joss-pre-review}. In this pre-review issue, an editor (Daniel S.~Katz) was assigned, who then identified and assigned a suitable reviewer (Zhao Zhang). 
\add{Editors generally identify one or more reviewers from a pool of volunteers based on provided programming language and\slash or domain expertise.}\footnote{Potential reviewers can volunteer via \url{http://joss.theoj.org/reviewer-signup.html}} The editor then asked the automated bot \texttt{Whedon} to create the main submission review issue via the command \texttt{@whedon start review magic-word=bananas}. (``\texttt{magic-word=bananas}'' is a safeguard against accidentally creating a review issue prematurely.) \item The reviewer then conducted the submission review~\cite{hdbscan-joss-review} (see Figure~\ref{fig:review}) by working through a checklist of review items, as described in \S\ref{review-details}. The author, reviewer, and editor discussed any questions that arose during the review, and once the reviewer completed their checks, they notified the submitting author and editor. Compared with traditional journals, \joss{} offers the unique feature of holding a discussion---in the open within a GitHub issue---between the reviewer(s), author(s), and editor. Like a true conversation, discussion can go back and forth in minutes or seconds, with all parties contributing at will. This contrasts traditional journal reviews, where the process is merely an exchange between the reviewer(s) and author(s), via the editor, which can take months for each communication, and in practice is limited to one or two, perhaps three in some cases, exchanges due to that delay~\cite{tennant-peerreview}. Note that \joss{} reviews are subject to a code of conduct~\cite{code-of-conduct}, adopted from the Contributor Covenant Code of Conduct~\cite{contributor-covenant-coc}. Both authors and reviewers must confirm that they have read and will adhere to this Code of Conduct, during submission and with their review, respectively. \item After the review was complete, the editor asked the submitting author to make a permanent archive of the software (including any changes made during review) with a service such as Zenodo or Figshare, and to post a link to the archive in the review thread. This link, in the form of a DOI, was associated with the submission via the command \texttt{@whedon set 10.5281/zenodo.401403 as archive}. \item The editor-in-chief used the \texttt{Whedon} RubyGem library on his local machine to produce the compiled PDF, update the \joss{} website, deposit Crossref metadata, and issue a DOI for the submission (\href{https://doi.org/10.21105/joss.00205}{10.21105/joss.00205}). \item Finally, the editor-in-chief updated the review issue with the \joss{} article DOI and closed the review. The submission was then accepted into the journal. \end{enumerate} \begin{figure}[htp] \centering \includegraphics[width=0.75\textwidth]{JOSS-flowchart.pdf} \caption{The \joss{} submission and review flow including the various status badges that can be embedded on third-party settings such as GitHub README documentation~\cite{JOSS-publication-workflow}. \label{fig:submission-flow}} \end{figure} \subsection{\joss{} review criteria}\label{review-details} As previously mentioned, the \joss{} review is primarily concerned with the material in the software repository, focusing on the software and documentation. The specific items in the reviewer checklist are: \begin{itemize} \item Conflict of interest \begin{itemize} \item As the reviewer I confirm that there are no conflicts of interest for me to review this work (such as being a major contributor to the software). 
\end{itemize} \item Code of Conduct \begin{itemize} \item I confirm that I read and will adhere to the \href{http://joss.theoj.org/about#code_of_conduct}{\joss{} code of conduct}. \end{itemize} \item General checks \begin{itemize} \item \textbf{Repository}: Is the source code for this software available at the repository url? \item \textbf{License}: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license? \item \textbf{Version}: Does the release version given match the GitHub release? \item \textbf{Authorship}: Has the submitting author made major contributions to the software? \end{itemize} \item Functionality \begin{itemize} \item \textbf{Installation}: Does installation proceed as outlined in the documentation? \item \textbf{Functionality}: Have the functional claims of the software been confirmed? \item \textbf{Performance}: Have any performance claims of the software been confirmed? \end{itemize} \item Documentation \begin{itemize} \item \textbf{A statement of need}: Do the authors clearly state what problems the software is designed to solve and who the target audience is? \item \textbf{Installation instructions}: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution. \item \textbf{Example usage}: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)? \item \textbf{Functionality documentation}: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)? \item \textbf{Automated tests}: Are there automated tests or manual steps described so that the function of the software can be verified? \item \textbf{Community guidelines}: Are there clear guidelines for third parties wishing to 1) contribute to the software 2) report issues or problems with the software, and 3) seek support? \end{itemize} \item Software paper \begin{itemize} \item \textbf{Authors}: Does the \texttt{paper.md} file include a list of authors with their affiliations? \item \textbf{A statement of need}: Do the authors clearly state what problems the software is designed to solve and who the target audience is? \item \textbf{References}: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)? \end{itemize} \end{itemize} \subsection{Fast track for reviewed rOpenSci contributions} For submissions of software that has already been reviewed under rOpenSci's rigorous onboarding guidelines~\cite{Ram:2016ws,Ram2017}, \joss{} does not perform further review. The editor-in-chief is alerted with a note ``This submission has been accepted to rOpenSci. The review thread can be found at \texttt{[LINK TO ONBOARDING ISSUE]},'' allowing such submissions to be fast-tracked to acceptance. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{A review of the first year}\label{firstyear} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% By the end of May 2017, \joss{} published 111 articles since its inception in May 2016, and had an additional 41 articles under consideration. Figure~\ref{fig:article_stats} shows the monthly and cumulative publication rates; on average, we published 8.5 articles per month, with some (nonstatistical) growth over time. 
%\todo[inline]{an interesting (anecdotal) observation from extracting software programming languages: more than anyone else, R package developers seem most likely to not mention the language they are using, as if they assume R is the only thing...}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.7\textwidth}
\includegraphics[width=\textwidth]{JOSS-published-articles.pdf}
\caption{Numbers of articles published per month.}
\end{subfigure}
\\
\begin{subfigure}[b]{0.7\textwidth}
\includegraphics[width=\textwidth]{JOSS-cumsum-published-articles.pdf}
\caption{Cumulative sum of numbers of articles published per month.}
\end{subfigure}
\caption{Statistics of articles published in \joss{} from its inception in May 2016 through May 2017.
Data, plotting script, and figure files are available~\cite{JOSS-data-figs}.}
\label{fig:article_stats}
\end{figure}
Figure~\ref{fig:article_review} shows the numbers of days taken for processing and review of the 111 published articles (i.e., time between submission and publication), including finding a topic editor and reviewer(s).
Since the journal's inception in May 2016, articles spent on average 45.5 days between submission and publication (median 32 days, interquartile range 52.3 days).
The shortest review took a single day, for \texttt{Application Skeleton}~\cite{Zhang2016:joss}, while the longest review took 190 days, for \texttt{walkr}~\cite{YuZhuYao2017:joss}.
In the former case, the rapid turnaround can be attributed to the relatively minor revisions needed (in addition to quick editor, reviewer, and author actions and responses).
In contrast, the latter case took much longer due to delays in selecting an editor and finding an appropriate reviewer, and a multimonth delay between selecting a reviewer and receiving reviews.
In other cases with long review periods, some delays in responding to requests for updates may be attributed to reviewers (or editors) missing GitHub \add{notifications} from the review issue comments.
\add{We have already taken steps to improve the ability of authors, reviewers, and editors to keep track of their submissions, including a prompt to new reviewers to unsubscribe from the main \texttt{joss-reviews} repository~\cite{joss-reviews} (to reduce unnecessary notifications) and a weekly digest email for \joss{} editors to keep track of their submissions.
In the future we may collect the email addresses of reviewers so we can extend this functionality to them. }
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{JOSS-article-review-times.pdf}
\caption{Days between submission and publication dates of the 111 articles \joss{} published between May 2016 and May 2017.
Data, plotting script, and figure files are available~\cite{JOSS-data-figs}.}
\label{fig:article_review}
\end{figure}
Figure~\ref{fig:programming_languages} shows the frequency of programming languages appearing in \joss{} articles.
Python appears most often, in nearly half of the published software articles (54), while R is used in about a quarter of the articles (29).
\add{We believe the popularity of Python and R in \joss{} submissions is the result of (1) the adoption of these languages (and open-source practices) in scientific computing communities and (2) our relationship with the rOpenSci project. }
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{JOSS-software-languages.pdf}
\caption{Frequency of programming languages from the software packages described by the 111 articles \joss{} published in its first year.
The total is greater than 111 because some packages are multi-language. Data, plotting script, and figure file are available~\cite{JOSS-data-figs}.
}
\label{fig:programming_languages}
\end{figure}

% reviewer and editor stats
Each article considered by \joss{} undergoes review by one or more reviewers. The 111 published articles have been reviewed by 93 unique reviewers. The majority of articles were reviewed by a single reviewer (an average of $1.11\pm 0.34$ reviewers per article), with a maximum of three reviewers. Based on available data in the review issues, on average, editors reached out to 1.85$\pm$1.40 potential reviewers (at most 8 in one case) via mentions in the GitHub review issue. This does not include external communication, e.g., via email or Twitter. Overall, \joss{} editors contacted 1.65 potential reviewers for each actual review (based on means). Interestingly, the current reviewer list contains only 52 entries as of this writing~\cite{JOSS-reviewers}. Considering the unique reviewer count of 93, we have clearly reached beyond those who volunteered to review a priori. Benefits of using GitHub's issue infrastructure and our open reviews include: 1) the ability to tag multiple people, via their GitHub handles, to invite them as potential reviewers; 2) the discoverability of the work, so that people may volunteer to review without being formally contacted; 3) the ability to get additional, unprompted feedback and comments; and 4) the ability to find reviewers by openly advertising, e.g., on social media. Furthermore, GitHub is a well-known, commonly used platform where many (if not most) potential authors and reviewers already have accounts.

Figure~\ref{fig:editors} shows the numbers of articles managed by each of the \joss{} editors. Editor-in-chief Arfon Smith stewarded the majority of articles published in the first year. This was somewhat unavoidable in the first three months after launch, as Smith served as the de facto sole editor for all submissions, with other members of the editorial board assisting. This strategy was not sustainable and, over time, we adopted the pre-review\slash review procedure to hand off articles to editors. Also, authors can now select the appropriate editor, based on article topic, during submission.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.7\textwidth]{JOSS-editor-counts.pdf}
    \caption{Numbers of articles handled by each of the \joss{} editors. Data, plotting script, and figure file are available~\cite{JOSS-data-figs}.}
    \label{fig:editors}
\end{figure}

In its first year, \joss{} also developed formal relationships with two US-based nonprofit organizations. In March 2017, \joss{} became a community affiliate of the Open Source Initiative (\href{https://opensource.org}{opensource.org}), the steward of the open-source definition, which promotes open-source software and educates about appropriate software licenses. In April 2017, \joss{} became a fiscally sponsored project of NumFOCUS (\href{https://www.numfocus.org}{numfocus.org}), a 501(c)(3) charity that supports and promotes ``world-class, innovative, open source scientific computing.'' Being associated with these two prominent community organizations increases the community's trust in our efforts. Furthermore, as a NumFOCUS project, \joss{} will be able to raise funding to sustain its activities and grow.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The next year for \joss{}}

Our focus for the next year will be on continuing to provide a high-quality experience for submitting authors and reviewers, and on making the best use of the editorial board. In our first year, we progressed from a model where the editor-in-chief handled most central functions to one with more distributed roles for the editors, particularly that of ensuring that reviews are useful and timely. Editors can now select and self-assign the submissions they want to manage, while the editor-in-chief assigns only the remaining submissions. As \joss{} grows, the process of distributing functions across the editorial board will continue to evolve---and more editors may be needed.

In the next year, we plan to complete a number of high-priority improvements to the \joss{} toolchain. Specifically, we plan on automating the final steps for accepting an article. For example, generating Crossref metadata and compiling the article are both currently handled by the editor-in-chief on his local machine using the \texttt{Whedon} RubyGem library. In the future, we would like authors and reviewers to be able to ask the \texttt{Whedon-API} bot to compile the paper for them, and other editors should be able to ask the bot to complete the submission of Crossref metadata on their behalf. Other improvements are constantly under discussion on the \joss{} GitHub repository (\href{https://github.com/openjournals/joss/issues}{github.com/openjournals/joss/issues}). In fact, anyone is able to report bugs and suggest enhancements. And, since the \joss{} tools are open source, we welcome contributions in the form of bug fixes or enhancements via the usual pull-request protocols.

Beyond roles and responsibilities for the editors, and improvements to the \joss{} tools and infrastructure, we will take on the trickier questions about publishing software. One of these is how to handle new software versions. Unlike traditional research articles, which once published are static, software needs to change over time, at least for maintenance and to avoid software rot\slash collapse (where software stops working because of changes in the environment, such as dependencies on libraries or the operating system). Because software needs to be under continuous development for maintenance, and because the potential uses of the software are seldom known at the start of a project, the need or opportunity arises to add features, improve performance, improve accuracy, etc. Once one or more changes have been made, software developers frequently release the software under a new version number. Following semantic versioning practices (where software versions are of the form \texttt{MAJOR.MINOR.PATCH}, see \href{http://semver.org}{semver.org}), a small set of changes leads to an incremented version number: the patch element is incremented, or possibly the minor element if enough changes have been made. Once the developers determine that the set of changes has grown large enough, or that the API has changed in a way that makes downstream software unable to use it any longer (a backward-incompatible change), the major version number of the software is incremented. Each change may be made by a different developer, who may be making their first contribution to the software. This implies that a new version might correspond to a new set of authors if the software is published.
Exactly how this process translates to \joss{} is not yet clear. The editorial board is supportive of a model where a new \joss{} article is published with each major version, but the details of how this would work, and whether it would be accepted by both developers and users (corresponding to \joss{} authors and readers, respectively), are unknown.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions} \label{conclusions}

Software today encapsulates---and generates---important research knowledge, yet it has not entered the science publication ecosystem in a practical way. This situation is costly for science, notably through the lack of career progression for valuable personnel: research software developers. We founded \joss{} in response to the acute need for an answer to this predicament. \joss{} is a venue for authors who wish to receive constructive peer feedback, publish, and collect citations for their research software. The number of submissions confirms the keen demand for this publishing mechanism: more than 100 accepted articles in the first year and more than 40 others under review. Community members have also responded positively when asked to review submissions in an open and non-traditional format, contributing useful reviews of the submitted software.

However, we are still overcoming initial hurdles to achieve our goals. \joss{} is currently not properly indexed by Google Scholar, despite the fact that \joss{} articles include adequate metadata and that we made an explicit request for inclusion in March 2017 (see GitHub \href{https://github.com/openjournals/joss/issues/130}{issue \#130}). Also, we may need to invest more effort into raising awareness of good practices for citing \joss{} articles.

The journal cemented its position in the first year of operation, building trust within the community of open-source research-software developers and growing in name recognition. It also earned weighty affiliations with OSI and NumFOCUS, the latter bringing the opportunity to raise funding for sustained operations. Although publishing costs are low at \$3--6 per article, \joss{} does need funding, with the editor-in-chief having borne the expenses personally to pull off the journal launch. Incorporating a small article charge (waived upon request) may be a route to allow authors to contribute to \joss{} in the future, but we have not yet decided on this change. Under the NumFOCUS nonprofit umbrella, \joss{} is now eligible to seek grants for sustaining its future, engaging in new efforts like outreach, and improving its infrastructure and tooling.

Outreach to other communities still unaware of \joss{} is certainly part of our growth strategy. Awareness of the journal so far has mostly spread through word-of-mouth and social networking, plus a couple of news articles~\cite{Nature:joss,SDtimes:joss}. We plan to present \joss{} at relevant domain conferences, as we did at the 2017 SIAM Conference on Computational Science \& Engineering~\cite{JOSS-CSE-poster} and the 16th Annual Scientific Computing with Python Conference (SciPy 2017). We are also interested in partnering with other domain journals that focus on (traditional) research articles. In such partnerships, traditional peer review of the research would be paired with peer review of the software, with \joss{} taking responsibility for the latter.
Finally, the infrastructure and tooling of \joss{} have unexpected added value: while developed to support and streamline the \joss{} publication process, these open-source tools generalize to a lightweight journal-management system. The \joss{} web application and submission tool, the \texttt{Whedon} RubyGem library, and the \texttt{Whedon-API} bot could be easily forked to create overlay journals for other content types (data sets, posters, figures, etc.). The original artifacts could be archived on other services such as figshare, Zenodo, Dryad, arXiv, or engrXiv\slash AgriXiv\slash LawArXiv\slash PsyArXiv\slash SocArXiv\slash bioRxiv. This presents manifold opportunities to expand the ways we assign career credit to the digital artifacts of research. \joss{} was born to answer the need of research software developers to thrive within the current merit traditions of science, but we may have come upon a generalizable formula for digital science.

\section*{Acknowledgements}

Work by K.~E.~Niemeyer was supported in part by the National Science Foundation (No.\ ACI-1535065). Work by P.~Prins was supported by the National Institutes of Health (R01 GM123489, 2017--2022). Work by K.~Ram was supported in part by The Leona M.\ and Harry B.~Helmsley Charitable Trust (No.\ 2016PG-BRI004). Work by A.~Rokem was supported by the Gordon \& Betty Moore Foundation and the Alfred P.~Sloan Foundation, and by grants from the Bill \& Melinda Gates Foundation, the National Science Foundation (No.\ 1550224), and the National Institute of Mental Health (No.\ 1R25MH112480).

\printbibliography

\end{document}
%!TEX root=../mythesis.tex
% Chapter Template

\chapter{Shared Encoders} % Main chapter title
\chaptermark{Shared Encoders} % replace the chapter name with its abbreviated form

\label{ch:shared_encoders}

\section{Method}\label{sec:shared_encoders_methods}

\begin{figure}[!htbp]
    \centering
    \includegraphics[width=0.9\linewidth]{shared_encoders/two_tower_shared.pdf}
    \caption[Two-tower architecture of DPR retriever with parameter sharing.]{
    %
    The two-tower architecture of the DPR retriever with parameter sharing.
    %
    The same encoder encodes input texts into their embedding vectors, with the special tokens \texttt{[QST]} and \texttt{[CLS]} distinguishing the input types.
    }
    \label{fig:two_tower_shared}
\end{figure}
%
In this chapter, we present our first contribution to the DPR architecture~\cite{karpukhin2020dense}: \emph{shared encoders}.
%
Recall that the two-tower DPR retriever consists of a question encoder $E_Q$ and a passage encoder $E_P$ with the same architecture (i.e., BERT~\cite{devlin2019bert}) but different weights\footnote{In the context of deep learning, \emph{weights} refers to the values of the parameters of a model.}.
%
In this setup, the task of both encoders is to map textual data to the same embedding space, one handling question texts and the other passage texts.
%
Therefore, it is natural to expect that sharing the parameters of these two models could be beneficial.
%
More specifically, we use the same set of parameters for the two encoders while assigning the special token \texttt{[CLS]} to the passage encoder and \texttt{[QST]} to the question encoder.
%
The similarity score between a question $q$ and a passage $p$ defined in~\eqref{eq:sim_score} then becomes:
%
\begin{equation}
    \text{sim}(q, p) = \mathbf{v}^\intercal_{\texttt{[QST]}} \mathbf{v}_{\texttt{[CLS]}} \in \mathbb{R}
\end{equation}
%
\fref{fig:two_tower_shared} illustrates the architectural design of this approach.
%
Under this architecture, we allow the two towers to share general world knowledge and natural language understanding capabilities while still distinguishing the input types.
%
Furthermore, this approach can be seen as a multi-task training algorithm, in which the general encoder $E$ is trained to map both questions and passages to the same feature space.

\section{Experimental Results}\label{sec:shared_encoders_results}

\begin{table*}[t!]
    \setlength\tabcolsep{5pt}
    \centering
    \small
    \begin{tabular}{ll|cccc}
    \toprule
    \textbf{Negative type} & \textbf{Retriever} & Top-1 & Top-5 & Top-20 & Top-100 \\
    \midrule
    \multirow{2}{*}{BM25} & DPR & 42.01 & 64.54 & 76.48 & 84.29 \\
    & DPR (shared encoders) & \textbf{45.01} & \textbf{66.70} & \textbf{78.25} & \textbf{85.62} \\
    \midrule
    \multirow{2}{*}{DPR hard negatives} & DPR & 49.36 & 67.34 & 78.09 & 85.40 \\
    & DPR (shared encoders) & \textbf{53.02} & \textbf{71.30} & \textbf{80.89} & \textbf{86.93} \\
    \bottomrule
    \end{tabular}
    \caption[Top-$\{1, 5, 20, 100\}$ retrieval accuracy on the Natural Questions test set of the DPR retriever with and without parameter sharing.]{
    %
    Top-$\{1, 5, 20, 100\}$ retrieval accuracy on the Natural Questions test set, calculated as the percentage of questions for which at least one of the top-$k$ retrieved passages contains the answer.
    %
    We present the results of training with two different negative types, BM25 negatives or DPR hard negatives.
    %
    The proposed shared encoders approach consistently and substantially outperforms the baseline DPR model in various settings with no additional cost.
}
\label{tab:shared_encoders_results}
\end{table*}

We provide the retrieval results on NQ in~\tref{tab:shared_encoders_results}, where we train the DPR model with the BM25 hard negative passages described in~\sref{sec:dpr_training} and with DPR hard negative passages, respectively.
%
In the latter case, negative passages are obtained by performing retrieval with a DPR checkpoint and then, for each question, taking the highest-scoring passage that does not contain the answer.
%
We note that our results on the original DPR architecture do not match those reported in the original paper~\cite{karpukhin2020dense}, as we trained all these models with a batch size of 24 instead of 128 given our computation budget.
%
Nevertheless, we observe a consistent and considerable improvement from the shared encoders across different training settings and different top-$k$ evaluations.
%
This supports our earlier hypothesis that this approach allows knowledge sharing and multi-task training that are beneficial to model performance.
%
Intriguingly, we observe that the improvement of the shared encoders over the DPR baseline is consistently higher with DPR hard negatives than with BM25 hard negatives.
%
For example, for top-5 retrieval accuracy, the performance gain of DPR with shared encoders trained on DPR hard negatives is 3.96 points, almost double that obtained with BM25 hard negatives (2.16 points).
%
This runs counter to the general intuition that it becomes increasingly difficult to improve a model as its performance increases.
%
We attribute this to the knowledge-sharing capability of the shared encoders, which can capitalize more on informative negatives such as DPR hard negatives.
%
Additionally, we note that by sharing the parameters of the two encoders, we effectively reduce the memory footprint by half.
%
This is especially critical in retrieval training, where in-batch negatives are used and hence gradient accumulation cannot compensate for a smaller batch size.
%
We expect the shared encoders to outperform the baseline DPR model even further when trained with a larger batch size, an advantage brought about by the memory efficiency of the architectural design.
%
We leave it to future work to empirically verify this hypothesis.
%
Finally, given its efficiency and effectiveness, we treat the DPR retriever with shared encoders as the baseline DPR model for all subsequent experiments, unless otherwise noted.
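
%
For concreteness, the sketch below illustrates how a single shared encoder can score question--passage pairs.
%
It is an illustrative simplification rather than our actual training code: \texttt{SharedEncoder} is a stand-in for the BERT encoder (mean-pooled token embeddings instead of a Transformer), the token ids chosen for \texttt{[QST]} and \texttt{[CLS]} are hypothetical, and tokenization is omitted.
\begin{verbatim}
# Illustrative sketch only (not the actual training code): one encoder with
# shared weights scores question-passage pairs; the input type is signalled
# by a special token prepended to the sequence ([QST] for questions, [CLS]
# for passages), and the score is the dot product of the two embeddings.
import torch
import torch.nn as nn

VOCAB_SIZE = 30524             # assumed vocabulary size
QST_ID, CLS_ID = 30522, 30523  # hypothetical ids of the special tokens

class SharedEncoder(nn.Module):
    """Stand-in for the BERT encoder: mean-pooled token embeddings."""
    def __init__(self, dim=768):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, dim, mode="mean")

    def forward(self, token_ids):      # (batch, seq_len) -> (batch, dim)
        return self.embed(token_ids)

def prepend(token_ids, special_id):
    special = torch.full((token_ids.size(0), 1), special_id, dtype=torch.long)
    return torch.cat([special, token_ids], dim=1)

def similarity(encoder, question_ids, passage_ids):
    # The *same* weights encode both inputs; only the prepended token differs.
    q = encoder(prepend(question_ids, QST_ID))
    p = encoder(prepend(passage_ids, CLS_ID))
    return (q * p).sum(dim=-1)         # one similarity score per pair

encoder = SharedEncoder()
scores = similarity(encoder,
                    torch.randint(0, 30522, (2, 16)),   # toy question ids
                    torch.randint(0, 30522, (2, 128)))  # toy passage ids
print(scores.shape)                    # torch.Size([2])
\end{verbatim}
%
Because the two towers are literally the same module, only one set of parameters, gradients, and optimizer states needs to be kept, which is where the halved memory footprint discussed above comes from.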
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{indentfirst}
\usepackage[bitstream-charter]{mathdesign}
\usepackage[T1]{fontenc}
\usepackage[letter, margin=0.5in]{geometry}
\usepackage{amsfonts}    % blackboard math symbols
\usepackage{amsmath, amssymb}
\usepackage{nicefrac}    % compact symbols for 1/2, etc.
\usepackage{microtype}   % microtypography
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{multirow}
\usepackage{tabularx}
\usepackage{authblk}
\usepackage[labelformat=empty]{caption}
\usepackage[
    citestyle=numeric,
    backend=biber,
    style=numeric,
]{biblatex}
\addbibresource{main.bib}

\title{\textsc{Autonomous Multi-Objective Therapeutics Design by Reinforcement Learning}}
\author[1]{Yuanqing Wang}
\author[1, 2]{Manh Nguyen}
\author[1]{Josh Fass}
\author[3]{Theofanis Karaletsos}
\author[1]{John D. Chodera}
\affil[1]{Memorial Sloan Kettering Cancer Center, New York, N.Y. 10065}
\affil[2]{University of Pennsylvania, Philadelphia, Penn. 19104}
\affil[3]{Uber AI Labs, San Francisco, Calif. 94103}
\date{}

\begin{document}
\maketitle

\section{Specific Aims}
Molecular machine learning (ML)---statistical models that predict (the distributions of) properties of small molecules and proteins---has shown promise in accelerating the design of novel therapeutics~\cite{wu2018moleculenet}. Nonetheless, to the best of our knowledge, the utilization of molecular ML in drug discovery campaigns is primarily limited to prioritizing synthesis and assaying, leaving decision-making processes to human experts. This stands out at a time when reinforcement learning (RL) algorithms are able to autonomously navigate some highly sophisticated spaces, such as busy city streets~\cite{doi:10.1080/15472450.2017.1291351} or battlegrounds in video games~\cite{DBLP:journals/corr/abs-1710-03748}. The challenges in exploring chemical space using RL, we believe, can be summarized in two aspects: first, chemical space is discrete and combinatorial, which prevents us from directly applying well-established continuous optimization methods; second, accurate assessment of the reward and cost functions, i.e., potency, physical properties, and synthesis complexity, is prohibitively expensive. This poses difficulties when training model-free RL models, which usually depend on rapid querying of the oracle function. To circumvent these obstacles, we propose to partition the goal into two subaims, the first being to come up with ways to quantitatively characterize the uncertainty associated with predictions made by graph nets~\cite{battaglia2018relational}, the modern workhorse of molecular ML. Upon completion of this task, we will then incorporate such uncertainty estimates to direct RL searches of chemical space.\\

\noindent\textbf{Aim 1. Quantifying the Uncertainty Associated with Graph Net Predictions}

\noindent\textbf{Aim 2. Model-Based Reinforcement Learning on Combinatorial Space}

\section{Research Strategies---Significance}
\noindent\textbf{Collecting data in a drug discovery campaign is money- and time-consuming.}
Drug discovery, from a statistician's point of view, can be regarded as optimizing certain properties (potency, selectivity, and physical properties such as solubility) while constraining others (toxicity, side effects, and so forth) over \textit{chemical space}---the space spanned by the astronomically large number of synthetically accessible molecules.
One complete round of such optimization typically takes more than \$1 billion and 10 years~\cite{Paul2010}. The difficulty of this process can be attributed to multiple factors: the vastness of the chemical universe, the large number of optimization steps needed (up to 10,000 molecules per project), the potentially suboptimal choices made by human experts, and the high cost associated with each step of evaluation---the purchase, synthesis, and characterization of compounds. Although these costs are sensitive to conditions like the degree of parallelism, the project stage, and the location and organizational structure of the institution, generally speaking, the cost of characterization increases as greater precision is required. Alchemical free energy calculations, with uncertainties within a few kcal/mol~\cite{pmid28430432}, cost approximately \$5--10 per compound, whereas physical binding assays, namely isothermal titration calorimetry (ITC) and NMR, which bring the uncertainty down to around 0.1~kcal/mol (within a very narrow dynamic range), cost around \$50--100 per compound, even if we neglect the cost of synthesizing or purchasing the compound, which usually surpasses that of the characterization. \textit{In silico} drug discovery aims to reduce these high costs by providing quantitative insights into the relationship between structure and activity. We will dedicate the rest of this section to discussing the challenges in ligand- and structure-based drug discovery we aim to address in this project.\\\\
\begin{minipage}[tb]{\linewidth}
\small
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c c c c c}
\hline
Type & Assay & Uncertainty (kcal/mol) & Cost per compound (\$) & Time per compound\\
\hline
 & Machine learning & & 0 & 0\\
\textit{In silico} & Docking & & 0 & 0\\
 & Alchemical calculations & $<3$ & 5--10 & 24 hr \\[4pt]
Chemical and physical & ITC & 0.1 & 20--40 & 1--1.5 hr\\
 & NMR & & 50--100 & 2--3 hr\\
\hline
\end{tabular}}
\textbf{Table 1. Summary of the uncertainty, cost, and time required for common assays in drug discovery.}
\end{minipage}\\\\
\noindent\textbf{Traditional machine learning methods are data-hungry, limiting their use in drug design, where data is limited.}
The popular machine learning algorithms that power applications that shape our daily lives were trained on millions of pictures of cats and dogs or large corpora of multilingual news text (ImageNet~\cite{imagenet_cvpr09}: 1.3M images; WMT~\cite{wmt19}: 36M English--French sentence pairs). For molecular machine learning, on the other hand, because of the aforementioned high costs, datasets anywhere near that size would be a true luxury. (One of the most popular datasets of QM-calculated properties, QM9~\cite{ramakrishnan2014quantum}, totals 133,885 compounds.) Medicinal chemistry teams often produce no more than a few thousand molecules in a high-throughput screening project. The cost would easily exceed \$1 billion if one wanted to construct a dataset of a size comparable to ImageNet composed of \textit{experimental} data. The scarcity of data in drug discovery poses several challenges. First, with less data, learning invariances directly from data is more difficult.
This is particularly true for string-based methods~\cite{DBLP:journals/corr/Altae-TranRPP16}, which often use a recurrent neural network~\cite{DBLP:journals/corr/ChungGCB14, Hochreiter:1997:LSM:1246443.1246450} to learn information along the string representation of molecules, and which do not guarantee that the same result will be produced for a molecule that has several distinct valid string representations. Second, non-Bayesian approaches are sensitive to outliers, especially when data are scarce, whereas outliers are almost inevitable for data from experiments of high complexity~\cite{pmid26201396}.\\

\section{Research Strategies---Approach}
Here, we briefly review the formulation of \textit{graph nets} in the context of molecular ML. Molecules are modelled as undirected graphs of atoms, each carrying attributes that reflect its chemical nature; such a graph is a tuple of three sets:
\begin{equation}
\mathcal{G} = \{ \mathcal{V, E, U}\},
\end{equation}
where $\mathcal{V}$ is the set of vertices (nodes, i.e., atoms), $\mathcal{E}$ the set of (hyper)edges (bonds, angles, and dihedral angles), and $\mathcal{U} = \{ \mathbf{u}\}$ the universal (global) attribute. The notation and formulation we adopt are those proposed by Battaglia et al.~\cite{DBLP:journals/corr/abs-1806-01261}. For more details regarding the representation of molecules as graphs and strategies to enhance the learning and inference efficiency for small molecule topologies, see our previous publication~\cite{2019arXiv190907903W}.

Generally speaking, a set of learnable functions governs the three stages of a graph net in both the training and inference processes: initialization, propagation, and readout. In the \textit{propagation} stage, for each round of message passing, the attributes of nodes, edges, and the graph as a whole, $\mathbf{v}$, $\mathbf{e}$, and $\mathbf{u}$, are updated by trainable functions in the following order:
\begin{align}
\mathbf{e}_k^{(t+1)} &= \phi^e(\mathbf{e}_k^{(t)}, \sum_{i \in \mathcal{N}^e_k}\mathbf{v}_i, \mathbf{u}^{(t)}), \\
\bar{\mathbf{e}}_i^{(t+1)} &= \rho^{e\rightarrow v}(E_i^{(t+1)}), \\
\mathbf{v}_i^{(t+1)} &= \phi^v(\bar{\mathbf{e}}_i^{(t+1)}, \mathbf{v}_i^{(t)}, \mathbf{u}^{(t)}), \\
\bar{\mathbf{e}}^{(t+1)} &= \rho^{e \rightarrow u}(E^{(t+1)}), \\
\bar{\mathbf{v}}^{(t+1)} &= \rho^{v \rightarrow u}(V^{(t+1)}), \\
\mathbf{u}^{(t+1)} &= \phi^u(\bar{\mathbf{e}}^{(t+1)}, \bar{\mathbf{v}}^{(t+1)}, \mathbf{u}^{(t)}),
\end{align}
where $E_i=\{ \mathbf{e}_k, k\in \mathcal{N}_i^v\}$ is the set of attributes of the edges connected to a specific node, $E = \{ \mathbf{e}_k, k \in 1, 2, \ldots, N^e\}$ is the set of attributes of all edges, $V$ is the set of attributes of all nodes, and $\mathcal{N}^v$ and $\mathcal{N}^e$ denote the sets of indices of entities connected to a certain node or a certain edge, respectively. $\phi^e$, $\phi^v$, and $\phi^u$ are update functions that take the \textit{environment} of an entity as input and update the attribute of that entity; they can be stateful [as in recurrent neural networks (RNNs)] or not. $\rho^{e \rightarrow v}$, $\rho^{e \rightarrow u}$, and $\rho^{v \rightarrow u}$ are aggregate functions that aggregate the attributes of multiple entities into an \textit{aggregated} attribute with the same dimension as each individual attribute.
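
To make the update order above concrete, the following sketch implements a single propagation round under assumed choices for the update and aggregate functions (linear layers for $\phi^e$, $\phi^v$, and $\phi^u$, and sums for the $\rho$'s), with ordinary two-node edges rather than the hyperedges mentioned above; it is an illustration of the propagation equations, not the architecture we will benchmark.
\begin{verbatim}
# Minimal sketch of one graph-net propagation round.  The dimension D, the
# linear update functions, and the sum aggregators are illustrative
# assumptions, not the choices we will benchmark.
import torch
import torch.nn as nn

D = 8                                     # attribute dimension (assumed)
n_nodes, n_edges = 5, 4
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # endpoint indices of each edge

v = torch.randn(n_nodes, D)               # node attributes
e = torch.randn(n_edges, D)               # edge attributes
u = torch.randn(D)                        # global attribute

phi_e = nn.Linear(3 * D, D)  # phi^e(e_k, sum of incident nodes, u)
phi_v = nn.Linear(3 * D, D)  # phi^v(aggregated edges, v_i, u)
phi_u = nn.Linear(3 * D, D)  # phi^u(aggregated edges, aggregated nodes, u)

# Edge update: each edge sees its own attribute, the sum of its endpoint
# node attributes, and the global attribute.
node_sum = torch.stack([v[i] + v[j] for i, j in edges])
e_new = phi_e(torch.cat([e, node_sum, u.expand(n_edges, D)], dim=-1))

# Edge-to-node aggregation (rho^{e->v}): sum the updated edges incident to
# each node, then update each node with phi^v.
e_to_v = torch.zeros(n_nodes, D)
for k, (i, j) in enumerate(edges):
    e_to_v[i] = e_to_v[i] + e_new[k]
    e_to_v[j] = e_to_v[j] + e_new[k]
v_new = phi_v(torch.cat([e_to_v, v, u.expand(n_nodes, D)], dim=-1))

# Global update: aggregate all updated edges and nodes, then apply phi^u.
u_new = phi_u(torch.cat([e_new.sum(dim=0), v_new.sum(dim=0), u], dim=-1))
print(e_new.shape, v_new.shape, u_new.shape)
\end{verbatim}
In the full model, the $\phi$'s will be neural networks (possibly stateful, as noted above), the $\rho$'s may be any permutation-invariant aggregators, and several such rounds are applied before the readout stage.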
\subsection*{\textbf{Aim 1.} Assess strategies for quantifying uncertainty in predictions made with graph models.}
In this \textit{Aim}, we will study the formulations, sampling strategies, and performance of Bayesian graph nets, to establish an understanding of how uncertainty estimates can be used to improve molecular property estimation and to enable efficient molecular optimization strategies.\\

\noindent\textbf{Rationale}
In a neural network, simply by replacing the constant weights with distributions, one obtains a \textit{Bayesian} neural network~\cite{blundell2015weight, neal2012bayesian}. In the inference phase, we abstract the information from the data $\mathcal{D} = \{(x_i, y_i)\}$ into the posterior distribution of the weights $\mathbf{w}_\text{NN}$, and the predictive distribution for a new data point can thus be expressed as:
\begin{equation}
P(y^{(n+1)}|x^{(n+1)}, \mathcal{D}) = \int P(y^{(n+1)}|x^{(n+1)}, \mathbf{w}_\text{NN}) \, P(\mathbf{w}_\text{NN} | \{(x_i, y_i)\}) \, \operatorname{d}\mathbf{w}_\text{NN}.
\label{int}
\end{equation}
Note here that fitting a vanilla neural network is equivalent to finding the maximum likelihood estimate (MLE), or, in cases with regularization, the maximum a posteriori (MAP) estimate of $\mathbf{w}_\text{NN}$, via backprop. The advantages of Bayesian neural networks can be summarized as follows:
\begin{enumerate}
\item \textbf{For single-point predictions, they are less prone to overfitting.} The stochasticity introduced into the inference process is itself a means of regularization. To put it another way, since the uncertainty in the training data is assessed, overly confident decisions based on outliers are less likely to appear.
\item \textbf{The representation is richer through cheap averaging.} The uncertainties can be used in simple reinforcement learning settings, namely contextual bandits~\cite{slivkins2014contextual}.
\end{enumerate}
When it comes to \textit{in silico} drug discovery, these are significant advantages: \textbf{1.}\ allows low-data learning with high tolerance for outliers, which are common in drug discovery projects; \textbf{2.}\ can potentially accelerate Bayesian active search for small molecules with optimal efficacy~\cite{garnett2012bayesian}. Traditionally, uncertainty estimation has been achieved via either ensemble models~\cite{dietterich2000ensemble} or dropout variational inference~\cite{gal2016dropout}, both of which serve as surrogates for fully Bayesian inference and can be applied to essentially all types of supervised learning scenarios. To the best of our knowledge, no one has studied the effect of Bayesian probabilistic models on graph learning, let alone in the molecular machine learning field. We hypothesize that Bayesian graph nets provide more accurate point estimates of molecular properties than their fixed-weight counterparts, especially when training data is limited. We furthermore hypothesize that the uncertainty estimates given by such formulations can accelerate molecular optimization.\\

\noindent\textbf{What formulations and sampling methods of Bayesian graph nets lead to more efficient data utilization and more generalizable models?}
Despite their theoretical advantages, Bayesian models can, in practice, be difficult to construct and train. Therefore, we are interested in comparing the complexity--performance tradeoffs of various formulations, sampling methods, and approximation techniques in Bayesian modelling. We start by reviewing two Bayesian formulations (one approximate, one fully Bayesian).
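
Regardless of the formulation, once (approximate) posterior samples of the weights are available, the integral in~\ref{int} is approximated in practice by a Monte Carlo average; we state this standard estimator here because both methods below ultimately feed into it:
\begin{equation}
P(y^{(n+1)}|x^{(n+1)}, \mathcal{D}) \approx \frac{1}{N} \sum\limits_{i=1}^{N} P(y^{(n+1)}|x^{(n+1)}, \mathbf{w}^{(i)}), \qquad \mathbf{w}^{(i)} \sim P(\mathbf{w}_\text{NN} | \mathcal{D}),
\end{equation}
whose mean provides the point estimate and whose spread provides the uncertainty estimate used throughout this proposal; the two formulations differ only in how the samples $\mathbf{w}^{(i)}$ (or the distribution they are drawn from) are obtained.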
\textbf{Bayes-by-backprop (BBB)}~\cite{blundell2015weight} variationally optimizes the parameters $\theta$ of a Bayesian neural network by backpropagation. That is to say, it assumes that the weights of the neural network, $\mathbf{w}$, follow a parameterized distribution $\mathbf{w} \sim q(\mathbf{w} | \theta)$. The parameters are then found by minimizing the Kullback--Leibler (KL) divergence from the true Bayesian posterior over the weights:
\begin{equation}
\theta^* = \underset{\theta}{\operatorname{argmin}}\,\mathcal{D}_\mathtt{KL}[q(\mathbf{w} | \theta) \, || \, P(\mathbf{w}|\mathcal{D})],
\end{equation}
where the divergence can be approximated (up to an additive constant that does not affect the minimizer) using Monte Carlo samples $\mathbf{w}^{(i)} \sim q(\mathbf{w} | \theta)$,
\begin{equation}
\mathcal{D}_\mathtt{KL}[q(\mathbf{w} | \theta) \, || \, P(\mathbf{w}|\mathcal{D})] \approx \sum\limits_{i=1}^n \log q(\mathbf{w}^{(i)}|\theta) - \log P(\mathbf{w}^{(i)}) - \log P(\mathcal{D} | \mathbf{w}^{(i)}).
\end{equation}

\textbf{Langevin dynamics}~\cite{leimkuhler2019partitioned} is a sampling method that originated in molecular simulation, in which the parameter space is sampled by time-discretized integration of the Langevin equation. In this setting, rather than minimizing a loss function, we aim to sample the \textit{interesting} regions of the posterior distribution of the parameters in~\ref{int}, $P(\mathbf{w}_\text{NN} | \{(x_i, y_i)\})$, where its value is not trivially small. As with sampling the low-energy regions of a molecular system, the space of $\mathbf{w}_\text{NN}$ can be sampled effectively with high-quality Langevin integrators. One example is BAOAB, which splits the integration of the stochastic differential equations of Langevin dynamics into a linear ``drift'' ($A$), a linear ``kick'' ($B$), and an Ornstein--Uhlenbeck process ($O$)~\cite{schobel1999stochastic}.

We will compare the performance of the models obtained with these sampling methods, and with vanilla graph nets (with dropout and in ensembles), in terms of point-estimate accuracy, computational complexity, and the reliability of the uncertainty estimates. We will split the dataset, namely QM9~\cite{ramakrishnan2014quantum}, into training, validation, and test sets (80:10:10), independently train on 10\%, 20\%, 30\%, \ldots\ of the training data, and evaluate on the test set to study how performance changes with the amount of training data. At the same time, we will record the trajectory of the loss function to compare convergence speed. The leave-one-out (LOO) uncertainty estimates for samples in the training set can be approximated using the method of~\cite{Vehtari_2016}. We will test whether the uncertainty estimates cover the ground-truth value most of the time. Finally, computational efficiency can be evaluated by the number of parameters, by complexity analysis, and by timing training and inference on various hardware.\\

\noindent\textbf{What does uncertainty estimation really mean? Can it be used to drive an RL agent?}
The functional uncertainty given by a Bayesian model reflects the reliability of the prediction, given the weights of the model. We are interested in studying whether such uncertainty can be integrated into an RL system and used by the agent to make informative moves when exploring chemical space. To be more specific, we will define a function of the molecular topology whose ground-truth values are known within a dataset. To mimic real applications in drug discovery projects, this could be a combination of solubility, binding energy to a specific protein, and toxicity.
The agent is allowed to access a certain number of ground-truth values at each step; based on these, the regression models are trained and decisions are made, namely via Thompson sampling~\cite{slivkins2014contextual}, about which batch of molecules to query next. We will compare the number of steps needed for each agent to reach the region where the target function takes high values. We believe this experiment will cast light on the role and significance of uncertainty estimation in reinforcement-learning-aided drug discovery.\\

\end{document}