## Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](#toc)\]
$$\label{taustildesourceterms}$$

Recall from above that

\begin{align}
\partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} = s \\
\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.
\end{align}

Here we will define all terms that go inside the $\partial_j$'s on the left-hand sides of the above equations (i.e., the underbraced expressions):
```python
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU, T4UU, rho_star):
    global tau_tilde_fluxU
    tau_tilde_fluxU = ixp.zerorank1(DIM=3)
    for j in range(3):
        tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]

# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
    global S_tilde_fluxUD
    S_tilde_fluxUD = ixp.zerorank2(DIM=3)
    for j in range(3):
        for i in range(3):
            S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
```
## Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](#toc)\]
$$\label{grhdsourceterms}$$

### Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](#toc)\]
$$\label{ssourceterm}$$

Recall again from above that the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via

$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
```python
def compute_s_source_term(KDD, betaU, alpha, sqrtgammaDET, alpha_dD, T4UU):
    global s_source_term
    s_source_term = sp.sympify(0)
    # Term 1:
    for i in range(3):
        for j in range(3):
            s_source_term += (T4UU[0][0]*betaU[i]*betaU[j]
                              + 2*T4UU[0][i+1]*betaU[j]
                              + T4UU[i+1][j+1])*KDD[i][j]
    # Term 2:
    for i in range(3):
        s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
    # Term 3:
    s_source_term *= alpha*sqrtgammaDET
```
### Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](#toc)\]
$$\label{stildeisourceterm}$$

Recall from above

$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$

Our goal here will be to compute

$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$

#### Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](#toc)\]
$$\label{fourmetricderivs}$$

To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables. We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via

$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\ \beta_j & \gamma_{ij} \end{pmatrix}.$$

Thus

$$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,k} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\ \beta_{j,k} & \gamma_{ij,k} \end{pmatrix},$$

where $\beta_i = \gamma_{ij} \beta^j$, so

$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}.$$
```python
def compute_g4DD_zerotimederiv_dD(gammaDD, betaU, alpha, gammaDD_dD, betaU_dD, alpha_dD):
    global g4DD_zerotimederiv_dD

    # Eq. 2.121 in B&S: beta_i = gamma_{ij} beta^j
    betaD = ixp.zerorank1(DIM=3)
    for i in range(3):
        for j in range(3):
            betaD[i] += gammaDD[i][j]*betaU[j]

    betaDdD = ixp.zerorank2(DIM=3)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                # Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
                betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]

    # Eq. 2.122 in B&S
    g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
    for k in range(3):
        # Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
        g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
        for j in range(3):
            g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]

    for i in range(3):
        for k in range(3):
            # Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
            g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]

    for i in range(3):
        for j in range(3):
            for k in range(3):
                # Recall that g4DD[i][j] = gammaDD[i][j]
                g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
```
#### Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](#toc)\]
$$\label{stildeisource}$$

Now that we've computed `g4DD_zerotimederiv_dD` $= g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
```python
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET, g4DD_zerotimederiv_dD, T4UU):
    global S_tilde_source_termD
    S_tilde_source_termD = ixp.zerorank1(DIM=3)
    for i in range(3):
        for mu in range(4):
            for nu in range(4):
                S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET * \
                                           T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
```
## Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](#toc)\]
$$\label{convertvtou}$$

According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via

\begin{align}
\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\
\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)
\end{align}

Defining $v^i = \frac{u^i}{u^0}$, we get

$$v^i = \alpha v^i_{(n)} - \beta^i,$$

and in terms of this variable we get

\begin{align}
g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\
\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\
&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\
&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\
&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\
&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}
\end{align}

Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared, $\Gamma^2=\left(\alpha u^0\right)^2$:

\begin{align}
u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\
\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\
\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\
&= 1 - \frac{1}{\Gamma^2}
\end{align}

For the bottom expression to hold, the left-hand side must lie between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.

Then our algorithm for computing $u^0$ is as follows. If

$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$

then adjust the 3-velocity $v^i$ as follows:

$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$

After this rescaling, we are guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.

Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via

$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$

and the remaining components $u^i$ via

$$u^i = u^0 v^i.$$

In summary, our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:

1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$ = `GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.
2. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$.
3. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.
4. Otherwise, if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$, adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.
5. Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.

While the above algorithm is quite robust, the `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as

$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$

If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get

$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}.$$

If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get

$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R.$$

In other words, $R^* = \min(R, R_{\rm max})$, computed without branching. Then we can rescale *all* $v^i_{(n)}$ via

$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$

though we must be very careful to handle the case $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:

$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$

Finally, $u^0$ can be immediately and safely computed via

$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$

and $u^i$ via

$$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
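Before coding it up, a quick numerical sanity check of the branch-free $\min$ trick (plain Python; the sample values of $R$ are made up for illustration):

```python
# Sanity check: Rstar = (Rmax + R - |Rmax - R|)/2 is a branch-free min(R, Rmax)
Rmax = 1 - 1/2000.0**2   # ceiling for, e.g., GiRaFFE's default Gamma_max = 2000

for R in (0.0, 0.5, Rmax, 0.9999999, 2.5):  # values below, at, and above the ceiling
    Rstar = 0.5*(Rmax + R - abs(Rmax - R))
    assert abs(Rstar - min(R, Rmax)) < 1e-15
    print("R =", R, "-> Rstar =", Rstar)
```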
```python
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter.
#           Speed-limited ValenciavU is output to the rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU):
    # Inputs:  lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
    # Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU

    # R = gamma_{ij} v^i v^j
    R = sp.sympify(0)
    for i in range(3):
        for j in range(3):
            R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]

    thismodule = "GRHD"
    # The default value isn't terribly important here, since we can overwrite it in the main C code.
    GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0)  # Default value based on
    #                                                                                  # IllinoisGRMHD.
    #                                                                                  # GiRaFFE default = 2000.0
    Rmax = 1 - 1/(GAMMA_SPEED_LIMIT*GAMMA_SPEED_LIMIT)

    # Now, we set Rstar = min(Rmax, R):
    # If R <  Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
    # If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
    Rstar = sp.Rational(1, 2)*(Rmax + R - nrpyAbs(Rmax - R))

    # We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
    # ValenciavU == 0 for all Valencia 3-velocity components.
    # "Those tiny *doubles* make me warm all over
    #  with a feeling that I'm gonna love you till the end of time."
    #    - Adapted from Connie Francis' "Tiny Bubbles"
    TINYDOUBLE = par.Cparameters("#define", thismodule, "TINYDOUBLE", 1e-100)

    # The rescaled (speed-limited) Valencia 3-velocity
    # is given by v_{(n)}^i = sqrt(Rstar/R) v^i
    global rescaledValenciavU
    rescaledValenciavU = ixp.zerorank1(DIM=3)
    for i in range(3):
        # If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) ) = sqrt(0/1e-100) = 0.
        # If your velocities are of order 1e-100 and this is physically
        # meaningful, there must be something wrong with your unit conversion.
        rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))

    # Finally compute u^mu in terms of Valenciav^i:
    # u^0 = 1/(alpha*sqrt(1 - Rstar))
    global u4U_ito_ValenciavU
    u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
    u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1 - Rstar))
    # u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
    for i in range(3):
        u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0]*(alpha*rescaledValenciavU[i] - betaU[i])

# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
#           Speed-limited vU is output to the rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha, betaU, gammaDD, vU):
    ValenciavU = ixp.zerorank1(DIM=3)
    for i in range(3):
        ValenciavU[i] = (vU[i] + betaU[i])/alpha
    u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU)
    # Since ValenciavU is written in terms of vU,
    # u4U_ito_ValenciavU is actually u4U_ito_vU.
    global u4U_ito_vU
    u4U_ito_vU = ixp.zerorank1(DIM=4)
    for mu in range(4):
        u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
    # Finally compute the rescaled (speed-limited) vU
    global rescaledvU
    rescaledvU = ixp.zerorank1(DIM=3)
    for i in range(3):
        rescaledvU[i] = alpha*rescaledValenciavU[i] - betaU[i]
```
## Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](#toc)\]
$$\label{declarevarsconstructgrhdeqs}$$
```python
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b, P, epsilon = sp.symbols('rho_b P epsilon', real=True)

# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD", "sym01", DIM=3)
KDD     = ixp.declarerank2("KDD",     "sym01", DIM=3)
betaU   = ixp.declarerank1("betaU", DIM=3)
alpha   = sp.symbols('alpha', real=True)

# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD, betaU, alpha, rho_b, P, epsilon, u4U)
compute_T4UD(gammaDD, betaU, alpha, T4UU)

# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)

# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b, u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU, rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)

# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)

# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU, T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)

# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD", "sym01", DIM=3)
betaU_dD   = ixp.declarerank2("betaU_dD",   "nosym", DIM=3)
alpha_dD   = ixp.declarerank1("alpha_dD", DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD, betaU, alpha, gammaDD_dD, betaU_dD, alpha_dD)

# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD, betaU, alpha, sqrtgammaDET, alpha_dD, T4UU)
compute_S_tilde_source_termD(alpha, sqrtgammaDET, g4DD_zerotimederiv_dD, T4UU)

# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU", DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, testValenciavU)

# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU", DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha, betaU, gammaDD, testvU)
```
## Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$

As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in

1. this tutorial versus
2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
```python
import GRHD.equations as Ge

# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD, betaU, alpha, rho_b, P, epsilon, u4U)
Ge.compute_T4UD(gammaDD, betaU, alpha, Ge.T4UU)

# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)

# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b, u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU, Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)

# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)

# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU, Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)

# Then compute g4DD_zerotimederiv_dD
# (gammaDD_dD, betaU_dD, and alpha_dD were already declared above)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD, betaU, alpha, gammaDD_dD, betaU_dD, alpha_dD)

# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD, betaU, alpha, Ge.sqrtgammaDET, alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD(alpha, Ge.sqrtgammaDET, Ge.g4DD_zerotimederiv_dD, Ge.T4UU)

GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)

GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestvU)

all_passed = True

def comp_func(expr1, expr2, basename, prefixname2="Ge."):
    global all_passed  # without this, a failed check could not flip the module-level flag
    if str(expr1 - expr2) != "0":
        print(basename + " - " + prefixname2 + basename + " = " + str(expr1 - expr2))
        all_passed = False

def gfnm(basename, idx1, idx2=None, idx3=None):
    if idx2 is None:
        return basename + "[" + str(idx1) + "]"
    if idx3 is None:
        return basename + "[" + str(idx1) + "][" + str(idx2) + "]"
    return basename + "[" + str(idx1) + "][" + str(idx2) + "][" + str(idx3) + "]"

expr_list = []
exprcheck_list = []
namecheck_list = []

namecheck_list.extend(["sqrtgammaDET", "rho_star", "tau_tilde", "s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET, Ge.rho_star, Ge.tau_tilde, Ge.s_source_term])
expr_list.extend([sqrtgammaDET, rho_star, tau_tilde, s_source_term])

for mu in range(4):
    namecheck_list.extend([gfnm("u4U_ito_ValenciavU", mu), gfnm("u4U_ito_vU", mu)])
    exprcheck_list.extend([Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
    expr_list.extend([u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
    for nu in range(4):
        namecheck_list.extend([gfnm("T4UU", mu, nu), gfnm("T4UD", mu, nu)])
        exprcheck_list.extend([Ge.T4UU[mu][nu], Ge.T4UD[mu][nu]])
        expr_list.extend([T4UU[mu][nu], T4UD[mu][nu]])
        for delta in range(4):
            namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD", mu, nu, delta)])
            exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
            expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])

for i in range(3):
    namecheck_list.extend([gfnm("S_tildeD", i), gfnm("vU", i), gfnm("rho_star_fluxU", i),
                           gfnm("tau_tilde_fluxU", i), gfnm("S_tilde_source_termD", i),
                           gfnm("rescaledValenciavU", i), gfnm("rescaledvU", i)])
    exprcheck_list.extend([Ge.S_tildeD[i], Ge.vU[i], Ge.rho_star_fluxU[i],
                           Ge.tau_tilde_fluxU[i], Ge.S_tilde_source_termD[i],
                           Ge.rescaledValenciavU[i], Ge.rescaledvU[i]])
    expr_list.extend([S_tildeD[i], vU[i], rho_star_fluxU[i],
                      tau_tilde_fluxU[i], S_tilde_source_termD[i],
                      rescaledValenciavU[i], rescaledvU[i]])
    for j in range(3):
        namecheck_list.extend([gfnm("S_tilde_fluxUD", i, j)])
        exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
        expr_list.extend([S_tilde_fluxUD[i][j]])

for i in range(len(expr_list)):
    comp_func(expr_list[i], exprcheck_list[i], namecheck_list[i])

import sys
if all_passed:
    print("ALL TESTS PASSED!")
else:
    print("ERROR: AT LEAST ONE TEST DID NOT PASS")
    sys.exit(1)
```
ALL TESTS PASSED!
## Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$

The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf). (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd  # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")
```
Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to PDF file Tutorial-GRHD_Equations-Cartesian.pdf
## K-Means
```python
import warnings
import numpy as np
import pandas as pd
from sklearn.metrics import pairwise_distances


class Kmeans:
    """K-Means clustering algorithm."""

    def __init__(self, k, max_iter=1000):
        """Initialize parameters."""
        self.max_iter = max_iter
        self.k = k
        self.centers = np.empty(1)
        self.cost = []
        self.iter = 1
        self.labels = np.empty(1)

    def calc_distances(self, data, centers, weights):
        """Weighted squared distances to the nearest center."""
        distance = pairwise_distances(data, centers)**2
        min_distance = np.min(distance, axis=1)
        D = min_distance*weights
        return D

    def fit(self, data):
        """Clustering process."""
        # Initial centers: k points sampled uniformly without replacement
        if type(data) == pd.DataFrame:
            data = data.values
        nrow = data.shape[0]
        index = np.random.choice(range(nrow), self.k, False)
        self.centers = data[index]
        while self.iter <= self.max_iter:
            distance = pairwise_distances(data, self.centers)**2
            self.cost.append(sum(np.min(distance, axis=1)))
            self.labels = np.argmin(distance, axis=1)
            centers_new = np.array([np.mean(data[self.labels == i], axis=0)
                                    for i in np.unique(self.labels)])
            # Sanity check: stop once the centers no longer move
            if np.all(self.centers == centers_new):
                break
            self.centers = centers_new
            self.iter += 1
        # Convergence check
        if sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]:
            warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter))
        return self
```
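A minimal usage sketch for the class above (the synthetic two-blob data and the seed are made up for illustration):

```python
np.random.seed(0)
# Two well-separated Gaussian blobs
data = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])

km = Kmeans(k=2).fit(data)
print("iterations:", km.iter)
print("final cost:", km.cost[-1])
print("centers:\n", km.centers)
```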
## K-Means++
```python
import random


class Kmeanspp:
    """K-Means++ clustering algorithm."""

    def __init__(self, k, max_iter=1000):
        """Initialize parameters."""
        self.max_iter = max_iter
        self.k = k
        self.centers = np.empty(1)
        self.cost = []
        self.iter = 1
        self.labels = np.empty(1)

    def calc_distances(self, data, centers, weights):
        """Weighted squared distances to the nearest center."""
        distance = pairwise_distances(data, centers)**2
        min_distance = np.min(distance, axis=1)
        D = min_distance*weights
        return D

    def initial_centers_Kmeanspp(self, data, k, weights):
        """Initialize centers for K-Means++ via D^2-weighted sampling."""
        centers = []
        centers.append(random.choice(data))
        while len(centers) < k:
            distances = self.calc_distances(data, centers, weights)
            prob = distances/sum(distances)
            c = np.random.choice(range(data.shape[0]), 1, p=prob)
            centers.append(data[c[0]])
        return centers

    def fit(self, data, weights=None):
        """Clustering process."""
        if weights is None:
            weights = np.ones(len(data))
        if type(data) == pd.DataFrame:
            data = data.values
        self.centers = self.initial_centers_Kmeanspp(data, self.k, weights)
        while self.iter <= self.max_iter:
            distance = pairwise_distances(data, self.centers)**2
            self.cost.append(sum(np.min(distance, axis=1)))
            self.labels = np.argmin(distance, axis=1)
            centers_new = np.array([np.mean(data[self.labels == i], axis=0)
                                    for i in np.unique(self.labels)])
            # Sanity check: stop once the centers no longer move
            if np.all(self.centers == centers_new):
                break
            self.centers = centers_new
            self.iter += 1
        # Convergence check
        if sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]:
            warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter))
        return self
```
## K-Means||
```python
class Kmeansll:
    """K-Means|| (scalable K-Means++) clustering algorithm."""

    def __init__(self, k, omega, max_iter=1000):
        """Initialize parameters (omega scales the oversampling factor)."""
        self.max_iter = max_iter
        self.k = k
        self.omega = omega
        self.centers = np.empty(1)
        self.cost = []
        self.iter = 1
        self.labels = np.empty(1)

    def calc_weight(self, data, centers):
        """Fraction of points assigned to each center."""
        l = len(centers)
        distance = pairwise_distances(data, centers)
        labels = np.argmin(distance, axis=1)
        weights = [sum(labels == i) for i in range(l)]
        return weights/sum(weights)

    def calc_distances(self, data, centers, weights):
        """Weighted squared distances to the nearest center."""
        distance = pairwise_distances(data, centers)**2
        min_distance = np.min(distance, axis=1)
        D = min_distance*weights
        return D

    def initial_centers_Kmeansll(self, data, k, omega, weights):
        """Initialize centers for K-Means||: oversample, then recluster with K-Means++."""
        centers = []
        centers.append(random.choice(data))
        phi = int(np.round(np.log(sum(self.calc_distances(data, centers, weights)))))
        l = k*omega  # oversampling factor
        for _ in range(phi):
            dist = self.calc_distances(data, centers, weights)
            prob = l*dist/sum(dist)
            for i in range(len(prob)):
                if prob[i] > np.random.uniform():
                    centers.append(data[i])
        centers = np.array(centers)
        recluster_weight = self.calc_weight(data, centers)
        # Recluster the oversampled centers down to k using K-Means++
        reclusters = kmeanspp.Kmeanspp(k).fit(centers, recluster_weight).labels
        initial_centers = []
        for i in np.unique(reclusters):
            initial_centers.append(np.mean(centers[reclusters == i], axis=0))
        return initial_centers

    def fit(self, data, weights=None):
        """Clustering process."""
        if weights is None:
            weights = np.ones(len(data))
        if type(data) == pd.DataFrame:
            data = data.values
        self.centers = self.initial_centers_Kmeansll(data, self.k, self.omega, weights)
        while self.iter <= self.max_iter:
            distance = pairwise_distances(data, self.centers)**2
            self.cost.append(sum(np.min(distance, axis=1)))
            self.labels = np.argmin(distance, axis=1)
            centers_new = np.array([np.mean(data[self.labels == i], axis=0)
                                    for i in np.unique(self.labels)])
            # Sanity check: stop once the centers no longer move
            if np.all(self.centers == centers_new):
                break
            self.centers = centers_new
            self.iter += 1
        # Convergence check
        if sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]:
            warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter))
        return self
```
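A hedged sketch comparing the three implementations on the same synthetic blobs. Note that `initial_centers_Kmeansll` references `kmeanspp.Kmeanspp` (in the source repo the K-Means++ class lives in a module by that name), so we provide a tiny shim here; the data and seed are made up for illustration:

```python
import types
import numpy as np

# Shim: Kmeansll expects the K-Means++ class to live in a module named kmeanspp
kmeanspp = types.SimpleNamespace(Kmeanspp=Kmeanspp)

np.random.seed(1)
data = np.vstack([np.random.randn(200, 2) + c for c in ((0, 0), (8, 0), (0, 8))])

for model in (Kmeans(k=3), Kmeanspp(k=3), Kmeansll(k=3, omega=2)):
    fitted = model.fit(data)
    print(type(model).__name__, "cost:", round(fitted.cost[-1], 2), "iterations:", fitted.iter)
```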
## 1. a)
```python
import numpy as np


def simetrica(A):
    "Check whether the matrix A is symmetric."
    return np.all(A == A.T)


def pozitiv_definita(A):
    "Check whether the matrix A is positive definite (leading principal minors)."
    for i in range(1, len(A) + 1):
        d_minor = np.linalg.det(A[:i, :i])
        if d_minor < 0:
            return False
    return True


def fact_ll(A):
    # Step 1
    if not simetrica(A):
        raise Exception("Not symmetric")
    if not pozitiv_definita(A):
        raise Exception("Not positive definite")

    N = A.shape[0]

    # Step 2
    S = A.copy()
    L = np.zeros((N, N))

    # Step 3
    for i in range(N):
        # Update column i of the matrix L
        L[:, i] = S[:, i] / np.sqrt(S[i, i])

        # Compute the new Schur complement
        S_21 = S[i + 1:, i]

        S_nou = np.eye(N)
        S_nou[i + 1:, i + 1:] = S[i + 1:, i + 1:] - np.outer(S_21, S_21.T) / S[i, i]

        S = S_nou

    # Return the computed matrix
    return L


A = np.array([
    [25, 15, -5],
    [15, 18, 0],
    [-5, 0, 11]
], dtype=np.float64)

L = fact_ll(A)

print("L is:")
print(L)

print("Check:")
print(L @ L.T)
```
```
L is:
[[ 5.  0.  0.]
 [ 3.  3.  0.]
 [-1.  1.  3.]]
Check:
[[25. 15. -5.]
 [15. 18.  0.]
 [-5.  0. 11.]]
```
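As an independent check (a one-line sketch), NumPy's built-in Cholesky factorization should reproduce the same lower-triangular factor:

```python
print(np.allclose(L, np.linalg.cholesky(A)))  # True: fact_ll matches NumPy's factor
```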
## b)
```python
b = np.array([1, 2, 3], dtype=np.float64)

y = np.zeros(3)
x = np.zeros(3)

# Forward substitution: solve L y = b
for i in range(0, 3):
    coefs = L[i, :i + 1]
    values = y[:i + 1]
    y[i] = (b[i] - coefs @ values) / L[i, i]

L_t = L.T

# Backward substitution: solve L^T x = y
for i in range(2, -1, -1):
    coefs = L_t[i, i + 1:]
    values = x[i + 1:]
    x[i] = (y[i] - coefs @ values) / L_t[i, i]

print("x =", x)
print()
print("Check: A @ x =", A @ x)
```
```
x = [0.06814815 0.05432099 0.3037037 ]

Check: A @ x = [1. 2. 3.]
```
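The substitution result can also be cross-checked against a direct solve (a quick sketch):

```python
print(np.allclose(x, np.linalg.solve(A, b)))  # True: substitutions match the direct solve
```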
## 2.
```python
def step(x, f, df):
    "Compute one step of the Newton-Raphson method."
    return x - f(x) / df(x)


def newton_rhapson(f, df, x0, eps):
    "Find a solution of f(x) = 0 starting from x_0."
    # The first point is the one received as a parameter
    prev_x = x0

    # Perform one iteration
    x = step(x0, f, df)
    N = 1

    while True:
        # Check the stopping condition
        if abs(x - prev_x) / abs(prev_x) < eps:
            break

        # Perform one more step
        prev_x = x
        x = step(x, f, df)

        # Count the number of iterations
        N += 1

    return x, N
```
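A quick usage example on a function with a known root (a sketch; $f(x) = x^2 - 2$ and the starting point are made up for illustration):

```python
f_demo = lambda x: x**2 - 2   # root at sqrt(2)
df_demo = lambda x: 2*x

root, iterations = newton_rhapson(f_demo, df_demo, 1.0, 1e-10)
print(root, iterations)  # ~1.41421356..., after a handful of iterations
```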
The given function is

$$f(x) = x^3 + 3 x^2 - 18 x - 40$$

and its derivatives are

$$f'(x) = 3x^2 + 6 x - 18$$

$$f''(x) = 6x + 6$$
```python
import matplotlib.pyplot as plt

f = lambda x: (x ** 3) + 3 * (x ** 2) - 18 * x - 40
df = lambda x: 3 * (x ** 2) + 6 * x - 18
ddf = lambda x: 6 * x + 6

left = -8
right = +8

x_grafic = np.linspace(left, right, 500)

def set_spines(ax):
    # Move the coordinate axes so they pass through the origin
    ax.spines['bottom'].set_position('zero')
    ax.spines['top'].set_color('none')
    ax.spines['left'].set_position('zero')
    ax.spines['right'].set_color('none')

fig, ax = plt.subplots(dpi=120)
set_spines(ax)

plt.plot(x_grafic, f(x_grafic), label='$f$')
plt.plot(x_grafic, df(x_grafic), label="$f'$")
plt.plot(x_grafic, ddf(x_grafic), label="$f''$")
plt.legend()
plt.show()
```
We choose subintervals such that $f(a) f(b) < 0$:

- $[-8, -4]$
- $[-4, 0]$
- $[2, 6]$

For each of these, we look for a point $x_0$ such that $f(x_0) f''(x_0) > 0$:

- $-6$
- $-1$
- $5$
```python
eps = 1e-3

x1, _ = newton_rhapson(f, df, -6, eps)
x2, _ = newton_rhapson(f, df, -1, eps)
x3, _ = newton_rhapson(f, df, 5, eps)

fig, ax = plt.subplots(dpi=120)
plt.suptitle('Solutions of $f(x) = 0$')
set_spines(ax)

plt.plot(x_grafic, f(x_grafic))
plt.scatter(x1, 0)
plt.scatter(x2, 0)
plt.scatter(x3, 0)
plt.show()
```
# Input data representation as 2D array of 3D blocks

> An easy way to represent input data to neural networks or any other machine learning algorithm in the form of a 2D array of 3D blocks

- toc: false
- branch: master
- badges: true
- comments: true
- categories: [machine learning, jupyter, graphviz]
- image: images/array_visualiser/thumbnail.png
- search_exclude: false

---

Often while working with machine learning algorithms the developer has a good picture of how the input data looks, apart from knowing what the input data is. Also, most of the time the input data is described with array terminology. Hence, this post is one attempt to create simple 2D representations of 3D blocks symbolising the arrays used for input.

[Graphviz](https://graphviz.readthedocs.io/en/stable/), a highly versatile graphing library that creates graphs based on the DOT language, is used to create the 2D array representation of 3D blocks with annotation and color uniformity, producing quick and concise graphs/pictures for good explanations of input data used in various machine learning/deep learning algorithms.

What follows is a script to create the 2D array representation of 3D blocks, mainly intended for time-series data. The script supports a few features, including:

* Starting at time instant 0 or -1
* Counting backwards, i.e. t-4 -> t-3 -> t-2 -> t-1 -> t-0, or counting forwards, t-0 -> t-1 -> t-2 -> t-3 -> t-4 -> t-5

## Imports and global constants
```python
import graphviz as G  # to create the required graphs
import random         # to generate random hex codes for colors

FORWARDS = True    # to visualise the array from left to right
BACKWARDS = False  # to visualise the array from right to left
```
## Properties of the 2D representation of 3D array blocks

The main features/properties of the array visualisation are defined here before actually creating the graph/picture:

1) Number of rows: similar to rows in a matrix, where each row corresponds to one particular data type, with data across different time instants arranged in columns

2) Blocks: the number of time instants in each row (jagged arrays can also be graphed)

3) Prefix: the annotation used to annotate each 3D block in the 2D array representation
```python
ROW_NUMS = [1, 2]  # Layer numbers corresponding to the number of rows of array data (must be contiguous)
BLOCKS = [3, 3]    # number of data fields in each row, i.e., columns in each row

diff = [x - ROW_NUMS[i] for i, x in enumerate(ROW_NUMS[1:])]
assert diff == [1]*(len(ROW_NUMS) - 1), '"layer_num" should contain contiguous numbers only'
assert len(ROW_NUMS) == len(BLOCKS), "'cells' list and 'layer_num' list should contain same number of entries"

direction = BACKWARDS  # control the direction of countdown of timesteps
INCLUDE_ZERO = True    # for time series based data
START_AT = 0 if INCLUDE_ZERO else 1

# names = [['Softmax\nprobabilities', 'p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10'],
#          ['', ' +', ' +', ' +', ' +', ' +', ' +'],
#          ['GMM\nprobabilities', 'p1', 'p2', 'p3', 'p4', 'p5', 'p6']]
# The trick to adding symbols like "partial (dou)", i.e. '∂', is to write these symbols in a markdown
# cell using $\partial$ (via the MathJax support), then copy the rendered content and paste it into
# the code as a string wherever needed.
prefix = ['∂(i)-', '∂(v)-']
r = lambda: random.randint(0, 255)  # to generate random colors for each row

# instantiate a directed graph with initial properties
dot = G.Digraph(comment='Matrix',
                graph_attr={'nodesep': '0.02', 'ranksep': '0.02', 'bgcolor': 'transparent'},
                node_attr={'shape': 'box3d', 'fixedsize': 'true', 'width': '1.1'})

for row_no in ROW_NUMS:
    if row_no != 1:
        # invisible edges to constrain layout
        dot.edge(str(row_no-1)+str(START_AT), str(row_no)+str(START_AT), style='invis')
    with dot.subgraph() as sg:
        sg.attr(rank='same')
        color = '#{:02x}{:02x}{:02x}'.format(r(), r(), r())
        for block_no in range(START_AT, BLOCKS[row_no-1]+START_AT):
            if direction:
                sg.node(str(row_no)+str(block_no), 't-'+str(block_no),
                        style='filled', fillcolor=color)
            else:
                sg.node(str(row_no)+str(block_no), prefix[row_no-1]+str(BLOCKS[row_no-1]-block_no-1),
                        style='filled', fillcolor=color)
```
## Render
```python
dot
```
## Save/Export
```python
# dot.format = 'jpeg'  # or PDF, SVG, JPEG, PNG, etc.

# to save the file; PDF is the default
dot.render('./lstm_input')
```
## Additional script to show the breakdown of train-test data of the dataset being used
```python
import random

r = lambda: random.randint(0, 255)  # to generate random colors for each row

folders = G.Digraph(node_attr={'style': 'filled'},
                    graph_attr={'style': 'invis', 'rankdir': 'LR'},
                    edge_attr={'color': 'black', 'arrowsize': '.2'})

color = '#{:02x}{:02x}{:02x}'.format(r(), r(), r())
with folders.subgraph(name='cluster0') as f:
    f.node('root', 'Dataset \n x2000', shape='folder', fillcolor=color)

color = '#{:02x}{:02x}{:02x}'.format(r(), r(), r())
with folders.subgraph(name='cluster1') as f:
    f.node('train', 'Train \n 1800', shape='note', fillcolor=color)
    f.node('test', 'Test \n x200', shape='note', fillcolor=color)

folders.edge('root', 'train')
folders.edge('root', 'test')

folders
folders.render('./dataset')
```
# Jupman Tests

Tests and corner cases. The page title has one sharp; the sections always have two sharps.

## Section 1

bla bla

## Section 2

Subsections always have three sharps.

### Subsection 1

bla bla

### Subsection 2

bla bla

## Quotes

> I'm quoted with **greater than** symbol
> on multiple lines
> Am I readable?

    I'm quoted with **spaces**
    on multiple lines
    Am I readable?

## Download links

Files manually put in `_static`:

* Download [trial.odt](_static/trial.odt)
* Download [trial.pdf](_static/trial.pdf)

Files in arbitrary folder position:

* Download [requirements.txt](requirements.txt)

NOTE: download links are messy, [see issue 8](https://github.com/DavidLeoni/jupman/issues/8)

## Info/Warning Boxes

Until there is an info/warning extension for Markdown/CommonMark (see this issue), such boxes can be created by using HTML elements like this:

<div class="alert alert-info">

**Note:** This is an info!

</div>

<div class="alert alert-warning">

**Note:** This is a warn!

</div>

For this to work reliably, you should obey the following guidelines:

* The class attribute has to be either "alert alert-info" or "alert alert-warning"; other values will not be converted correctly.
* No further attributes are allowed.
* For compatibility with CommonMark, you should add an empty line between the start tag and the beginning of the content.

## Math

For math stuff, [see nbsphinx docs](https://nbsphinx.readthedocs.io/en/0.2.14/markdown-cells.html#Equations).

Here we put just some equation to show it behaves fine in Jupman.

This is infinity: $\infty$

## Unicode

Unicode characters should display in HTML, but with LaTeX you might have problems and need to manually map characters in conf.py.

You should see a star in a black circle: ✪

You should see a check: ✓

Table characters: │ ├ └ ─

## Image

### SVG Images

SVG images work in the notebook, but here the example is commented out since it breaks LaTeX, [see issue](https://github.com/DavidLeoni/jupman/issues/1)

```
![An image](img/cc-by.svg)
```

This one also doesn't work (and shows ugly code in the notebook anyway):

```
from IPython.display import SVG
SVG(filename='img/cc-by.svg')
```

### PNG Images

![A PNG image](_static/img/notebook_icon.png)

### Inline images - pure markdown

Bla ![A PNG image](_static/img/notebook_icon.png) bli blo

Bla ![A PNG image](_static/img/notebook_icon.png) bli blo

### Inline images - markdown and img

bla bli blo

bla bli blo

### Img class

If we pass a class, it will be present in the website: This should be inline

## Expressions list

Highlighting **does** work both in Jupyter and Sphinx.

Three quotes, multiple lines - careful: put **exactly 4 spaces** of indentation.

1. ```python
   [2,3,1] != "[2,3,1]"
   ```
1. ```python
   [4,8,12] == [2*2,"4*2",6*2]
   ```
1. ```python
   [][:] == []
   ```

Three quotes, multiple lines, more compact - works in Jupyter, **doesn't** in Sphinx.

Highlighting **doesn't** work in Jupyter nor in Sphinx:

Three quotes, single line

1. ```python [2,3,1] != ["2",3,1]```
1. ```python [4,8,12] == [2*2,"4*2",6*2]```
1. ```python [][:] == "[]"```

Single quote, single line

1. `python [2,3,1] != ["2",3,1]`
1. `python [4,8,12] == [2*2,"4*2",6*2]`
1. `python [][:] == "[]"`

## Togglable cells

There are various ways to have togglable cells.

### Show/hide exercises (PREFERRED)

If you need clickable show/hide buttons for exercise solutions, see here: [Usage - Exercise types](https://jupman.softpython.org/en/latest/usage.html#Type-of-exercises). It manages comprehensively the use cases for display in the website, student zips, exams, etc. If you have other needs, we report here some tests we made, but keep in mind this sort of hack tends to change behaviour with different versions of Jupyter.

### Toggling with Javascript

* Works in Markdown
* Works while in Jupyter
* Works in HTML
* Does not show in LaTeX (which might be a good point, if you intend to put solutions at the end of the document)
* NOTE: after creating the text, to see the results you have to run the initial cell with jupman.init (as for the toc)
* NOTE: you can't use a Markdown block code, since as of Sept 2017 it doesn't show well in HTML output

        SOME CODE
        color = raw_input("What's your eyes' color?")
        if color == "":
            sys.exit()

<div class="jupman-togglable" data-jupman-show="Customized show msg" data-jupman-hide="Customized hide msg">

        SOME OTHER CODE
        how_old = raw_input("How old are you?")
        x = random.randint(1,8)
        if question == "":
            sys.exit()

### HTML details in Markdown, code tag

* Works while in Jupyter
* Doesn't work in HTML output
* As of Sept/Oct 2017, not yet supported in Microsoft browsers

Click here to see the code

    question = raw_input("What?")
    answers = random.randint(1,8)
    if question == "":
        sys.exit()

### HTML details in Markdown, Markdown mixed code

* Works while in Jupyter
* Doesn't work in HTML output
* As of Sept/Oct 2017, not yet supported in Microsoft browsers

Click here to see the code

```python
question = raw_input("What?")
answers = random.randint(1,8)
if question == "":
    sys.exit()
```

### HTML details in HTML, raw NBConvert Format

* Doesn't work in Jupyter
* Works in HTML output
* NOTE: as of Sept/Oct 2017, not yet supported in Microsoft browsers
* Doesn't show at all in PDF output
<details> <summary>Click here to see the code</summary> <code> <pre> question = raw_input("What?") answers = random.randint(1,8) if question == "": sys.exit() </pre> </code> </details>
Some other Markdown cell afterwards...

## Files in templates

Since Dec 2019 they are not accessible [see issue 10](https://github.com/DavidLeoni/jupman/issues/10), but it is not a great problem: you can always put a link to GitHub, see for example [exam-yyyy-mm-dd.ipynb](https://github.com/DavidLeoni/jupman/tree/master/_templates/exam/exam-yyyy-mm-dd.ipynb)

## Python tutor

There are various ways to embed Python Tutor; first we put the recommended one.

### jupman.pytut

**RECOMMENDED**: You can put a call to `jupman.pytut()` at the end of a cell, and the cell code will magically appear in Python Tutor in the output (except the call to `pytut()`, of course). Does not need an internet connection.
```python
x = [5,8,4,10,30,20,40,50,60,70,20,30]
y = {3:9}
z = [x]
jupman.pytut()
```
**jupman.pytut scope**: BEWARE of variables which were initialized in previous cells, they WILL NOT be available in Python Tutor:
```python
w = 8
x = w + 5
jupman.pytut()
```
```
Traceback (most recent call last):
  File "/home/da/Da/prj/jupman/prj/jupman.py", line 2305, in _runscript
    self.run(script_str, user_globals, user_globals)
  File "/usr/lib/python3.5/bdb.py", line 431, in run
    exec(cmd, globals, locals)
  File "<string>", line 2, in <module>
NameError: name 'w' is not defined
```
**jupman.pytut window overflow**: When too much right space is taken, it might be difficult to scroll:
```python
x = [3,2,5,2,42,34,2,4,34,2,3,4,23,4,23,4,2,34,23,4,23,4,23,4,234,34,23,4,23,4,23,4,2]
jupman.pytut()
```
**jupman.pytut execution:** Some cells might execute in Jupyter but not so well in Python Tutor, due to [its inherent limitations](https://github.com/pgbovine/OnlinePythonTutor/blob/master/unsupported-features.md):
```python
x = 0
for i in range(10000):
    x += 1
print(x)
jupman.pytut()
```
10000
**jupman.pytut infinite loops**: Since execution occurs first in Jupyter and then in Python Tutor, if you have an infinite loop no Python Tutor instance will be spawned:

```python
while True:
    pass
jupman.pytut()
```

**jupman.pytut() resizability:** long vertical and horizontal expansion should work:
```python
x = {0:'a'}
for i in range(1,30):
    x[i] = x[i-1]+str(i*10000)
jupman.pytut()
```
**jupman.pytut cross arrows**: With multiple visualizations, arrows shouldn't cross from one to the other even if underlying script is loaded multiple times (relates to visualizerIdOverride)
```python
x = [1,2,3]
jupman.pytut()
```
**jupman.pytut print output**: With only one line of print, Print output panel shouldn't be too short:
print("hello") jupman.pytut() y = [1,2,3,4] jupman.pytut()
## HTML magics

Another option is to directly paste the Python Tutor iframe in the cells, and use the Jupyter `%%HTML` magic command. HTML should be available both in the notebook and on the website - of course, it requires an internet connection.

Beware: you need the HTTP**S**!
%%HTML <iframe width="800" height="300" frameborder="0" src="https://pythontutor.com/iframe-embed.html#code=x+%3D+5%0Ay+%3D+10%0Az+%3D+x+%2B+y&cumulative=false&py=2&curInstr=3"> </iframe>
## NBTutor

To show Python Tutor in notebooks, there is already a Jupyter extension called [NBTutor](https://github.com/lgpage/nbtutor); afterwards you can use the magic `%%nbtutor` to show the interpreter.

Unfortunately, it doesn't show in the generated HTML :-/
```python
%reload_ext nbtutor

%%nbtutor
for x in range(1,4):
    print("ciao")
x = 5
y = 7
x + y
```
```
ciao
ciao
ciao
```
## Stripping answers

For stripping answers examples, see [jupyter-example/jupyter-example-sol](jupyter-example/jupyter-example-sol.ipynb). For explanation, see [usage](usage.ipynb#Tags-to-strip).

## Metadata to HTML classes

## Formatting problems

### Characters per line

The Python standard for code limits lines to 79 characters; many styles use 80 (see [Wikipedia](https://en.wikipedia.org/wiki/Characters_per_line)).

We can keep 80:

```
--------------------------------------------------------------------------------
```

```python
--------------------------------------------------------------------------------
```

Errors hold 75 dashes.

Plain:

```
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
 in ()
----> 1 1/0

ZeroDivisionError: division by zero
```

As Python markup:

```python
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
 in ()
----> 1 1/0

ZeroDivisionError: division by zero
```
```python
len('---------------------------------------------------------------------------')
```
On website this **may** display a scroll bar, because it will actually print `'` apexes plus the dashes
```python
'-'*80
```
This should **not** display a scrollbar:
```python
'-'*78
```
This should **not** display a scrollbar:
```python
print('-'*80)
```
--------------------------------------------------------------------------------
### Very large input

In Jupyter: default behaviour, show scrollbar.

On the website: should expand horizontally as much as it wants; the rationale is that for input code, since it may be printed to PDF, you should always manually put line breaks.
# line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment # line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an 
out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment
**Very long HTML** (and long code line)

Should expand vertically as much as it wants.
%%HTML <iframe width="100%" height="1300px" frameBorder="0" src="https://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055?scaleControl=false&miniMap=false&scrollWheelZoom=false&zoomControl=true&allowEdit=false&moreControl=true&searchControl=null&tilelayersControl=null&embedControl=null&datalayersControl=true&onLoadPanel=undefined&captionBar=false#11/46.0966/11.4024"></iframe><p><a href="http://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055">See full screen</a></p>
### Very long output

In Jupyter: by clicking, you can collapse.

On the website: a scrollbar should appear.
```python
for x in range(150):
    print('long output ...', x)
```
long output ... 0 long output ... 1 long output ... 2 long output ... 3 long output ... 4 long output ... 5 long output ... 6 long output ... 7 long output ... 8 long output ... 9 long output ... 10 long output ... 11 long output ... 12 long output ... 13 long output ... 14 long output ... 15 long output ... 16 long output ... 17 long output ... 18 long output ... 19 long output ... 20 long output ... 21 long output ... 22 long output ... 23 long output ... 24 long output ... 25 long output ... 26 long output ... 27 long output ... 28 long output ... 29 long output ... 30 long output ... 31 long output ... 32 long output ... 33 long output ... 34 long output ... 35 long output ... 36 long output ... 37 long output ... 38 long output ... 39 long output ... 40 long output ... 41 long output ... 42 long output ... 43 long output ... 44 long output ... 45 long output ... 46 long output ... 47 long output ... 48 long output ... 49 long output ... 50 long output ... 51 long output ... 52 long output ... 53 long output ... 54 long output ... 55 long output ... 56 long output ... 57 long output ... 58 long output ... 59 long output ... 60 long output ... 61 long output ... 62 long output ... 63 long output ... 64 long output ... 65 long output ... 66 long output ... 67 long output ... 68 long output ... 69 long output ... 70 long output ... 71 long output ... 72 long output ... 73 long output ... 74 long output ... 75 long output ... 76 long output ... 77 long output ... 78 long output ... 79 long output ... 80 long output ... 81 long output ... 82 long output ... 83 long output ... 84 long output ... 85 long output ... 86 long output ... 87 long output ... 88 long output ... 89 long output ... 90 long output ... 91 long output ... 92 long output ... 93 long output ... 94 long output ... 95 long output ... 96 long output ... 97 long output ... 98 long output ... 99 long output ... 100 long output ... 101 long output ... 102 long output ... 103 long output ... 104 long output ... 105 long output ... 106 long output ... 107 long output ... 108 long output ... 109 long output ... 110 long output ... 111 long output ... 112 long output ... 113 long output ... 114 long output ... 115 long output ... 116 long output ... 117 long output ... 118 long output ... 119 long output ... 120 long output ... 121 long output ... 122 long output ... 123 long output ... 124 long output ... 125 long output ... 126 long output ... 127 long output ... 128 long output ... 129 long output ... 130 long output ... 131 long output ... 132 long output ... 133 long output ... 134 long output ... 135 long output ... 136 long output ... 137 long output ... 138 long output ... 139 long output ... 140 long output ... 141 long output ... 142 long output ... 143 long output ... 144 long output ... 145 long output ... 146 long output ... 147 long output ... 148 long output ... 149
## Load Dataset
```python
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

# Draw plots inside the notebook
%matplotlib inline

# Work around broken minus-sign rendering in plots
mpl.rcParams['axes.unicode_minus'] = False

import warnings
warnings.filterwarnings('ignore')

train = pd.read_csv("data/train.csv", parse_dates=["datetime"])
train.shape

test = pd.read_csv("data/test.csv", parse_dates=["datetime"])
test.shape
```
## Feature Engineering
train["year"] = train["datetime"].dt.year train["month"] = train["datetime"].dt.month train["day"] = train["datetime"].dt.day train["hour"] = train["datetime"].dt.hour train["minute"] = train["datetime"].dt.minute train["second"] = train["datetime"].dt.second train["dayofweek"] = train["datetime"].dt.dayofweek train.shape test["year"] = test["datetime"].dt.year test["month"] = test["datetime"].dt.month test["day"] = test["datetime"].dt.day test["hour"] = test["datetime"].dt.hour test["minute"] = test["datetime"].dt.minute test["second"] = test["datetime"].dt.second test["dayofweek"] = test["datetime"].dt.dayofweek test.shape # widspeed 풍속에 0 값이 가장 많다. => 잘못 기록된 데이터를 고쳐 줄 필요가 있음 fig, axes = plt.subplots(nrows=2) fig.set_size_inches(18,10) plt.sca(axes[0]) plt.xticks(rotation=30, ha='right') axes[0].set(ylabel='Count',title="train windspeed") sns.countplot(data=train, x="windspeed", ax=axes[0]) plt.sca(axes[1]) plt.xticks(rotation=30, ha='right') axes[1].set(ylabel='Count',title="test windspeed") sns.countplot(data=test, x="windspeed", ax=axes[1]) # 풍속의 0값에 특정 값을 넣어준다. # 평균을 구해 일괄적으로 넣어줄 수도 있지만, 예측의 정확도를 높이는 데 도움이 될것 같진 않다. # train.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean() # test.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean() # 풍속이 0인것과 아닌 것의 세트를 나누어 준다. trainWind0 = train.loc[train['windspeed'] == 0] trainWindNot0 = train.loc[train['windspeed'] != 0] print(trainWind0.shape) print(trainWindNot0.shape) # 그래서 머신러닝으로 예측을 해서 풍속을 넣어주도록 한다. from sklearn.ensemble import RandomForestClassifier def predict_windspeed(data): # 풍속이 0인것과 아닌 것을 나누어 준다. dataWind0 = data.loc[data['windspeed'] == 0] dataWindNot0 = data.loc[data['windspeed'] != 0] # 풍속을 예측할 피처를 선택한다. wCol = ["season", "weather", "humidity", "month", "temp", "year", "atemp"] # 풍속이 0이 아닌 데이터들의 타입을 스트링으로 바꿔준다. dataWindNot0["windspeed"] = dataWindNot0["windspeed"].astype("str") # 랜덤포레스트 분류기를 사용한다. rfModel_wind = RandomForestClassifier() # wCol에 있는 피처의 값을 바탕으로 풍속을 학습시킨다. rfModel_wind.fit(dataWindNot0[wCol], dataWindNot0["windspeed"]) # 학습한 값을 바탕으로 풍속이 0으로 기록 된 데이터의 풍속을 예측한다. wind0Values = rfModel_wind.predict(X = dataWind0[wCol]) # 값을 다 예측 후 비교해 보기 위해 # 예측한 값을 넣어 줄 데이터 프레임을 새로 만든다. predictWind0 = dataWind0 predictWindNot0 = dataWindNot0 # 값이 0으로 기록 된 풍속에 대해 예측한 값을 넣어준다. predictWind0["windspeed"] = wind0Values # dataWindNot0 0이 아닌 풍속이 있는 데이터프레임에 예측한 값이 있는 데이터프레임을 합쳐준다. data = predictWindNot0.append(predictWind0) # 풍속의 데이터타입을 float으로 지정해 준다. data["windspeed"] = data["windspeed"].astype("float") data.reset_index(inplace=True) data.drop('index', inplace=True, axis=1) return data # 0값을 조정한다. train = predict_windspeed(train) # test = predict_windspeed(test) # widspeed 의 0값을 조정한 데이터를 시각화 fig, ax1 = plt.subplots() fig.set_size_inches(18,6) plt.sca(ax1) plt.xticks(rotation=30, ha='right') ax1.set(ylabel='Count',title="train windspeed") sns.countplot(data=train, x="windspeed", ax=ax1)
_____no_output_____
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Feature Selection

* You have to separate signal from noise.
* More features do not automatically mean better performance.
* Add and change features one at a time, and drop the ones that do not improve performance (a sketch of automating this follows the cross-validation run below).
# continuous and categorical features
# continuous: feature = ["temp","humidity","windspeed","atemp"]
# cast the categorical features to the "category" dtype
categorical_feature_names = ["season","holiday","workingday","weather",
                             "dayofweek","month","year","hour"]

for var in categorical_feature_names:
    train[var] = train[var].astype("category")
    test[var] = test[var].astype("category")

feature_names = ["season", "weather", "temp", "atemp", "humidity", "windspeed",
                 "year", "hour", "dayofweek", "holiday", "workingday"]

feature_names

X_train = train[feature_names]

print(X_train.shape)
X_train.head()

X_test = test[feature_names]

print(X_test.shape)
X_test.head()

label_name = "count"

y_train = train[label_name]

print(y_train.shape)
y_train.head()
(10886,)
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Score

RMSLE

Penalizes underestimated values more heavily than overestimated values.

The square root (Root) of the mean (Mean) of the squared (Square) errors (Error): the smaller the value, the higher the precision. A value close to 0 means high precision.

Submissions are evaluated on the Root Mean Squared Logarithmic Error (RMSLE)

$$ \sqrt{\frac{1}{n} \sum_{i=1}^n (\log(p_i + 1) - \log(a_i+1))^2 } $$

* \\({n}\\) is the number of hours in the test set
* \\(p_i\\) is your predicted count
* \\(a_i\\) is the actual count
* \\(\log(x)\\) is the natural logarithm
* For a more detailed explanation, see: [RMSLE cost function](https://www.slideshare.net/KhorSoonHin/rmsle-cost-function)
* It takes the log of the residuals before averaging => so that underestimated items are penalized more than overestimated ones.
* It expresses the error against the ground truth as a single number: the larger the value, the larger the error.
* The smaller the value, the smaller the error.

![image.png](https://upload.wikimedia.org/wikipedia/commons/thumb/7/73/Logarithms.svg/456px-Logarithms.svg.png)

Image source: Wikipedia https://ko.wikipedia.org/wiki/로그
from sklearn.metrics import make_scorer

def rmsle(predicted_values, actual_values):
    # convert to numpy arrays
    predicted_values = np.array(predicted_values)
    actual_values = np.array(actual_values)

    # add 1 to the predicted and actual values, then take the log
    log_predict = np.log(predicted_values + 1)
    log_actual = np.log(actual_values + 1)

    # subtract the actual from the predicted values and square the result
    difference = log_predict - log_actual
    # difference = (log_predict - log_actual) ** 2
    difference = np.square(difference)

    # take the mean
    mean_difference = difference.mean()

    # take the square root again
    score = np.sqrt(mean_difference)

    return score

rmsle_scorer = make_scorer(rmsle)
rmsle_scorer
_____no_output_____
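A quick numerical check of the asymmetry described above, using the `rmsle` function just defined: missing a true count of 100 by 50 on the low side costs noticeably more than missing it by 50 on the high side.

```python
print(rmsle([50], [100]))    # ~0.683: under-prediction
print(rmsle([150], [100]))   # ~0.402: over-prediction by the same absolute amount
```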
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Cross Validation

* To measure generalization performance, the data is split repeatedly and several models are trained.

![image.png](https://www.researchgate.net/profile/Halil_Bisgin/publication/228403467/figure/fig2/AS:302039595798534@1449023259454/Figure-4-k-fold-cross-validation-scheme-example.png)

Image source: https://www.researchgate.net/figure/228403467_fig2_Figure-4-k-fold-cross-validation-scheme-example

* KFold cross-validation
  * Split the data into similarly sized subsets called folds (n_splits) and measure the accuracy of each fold.
  * Use the first fold as the test set and train on the remaining folds.
  * Evaluate the accuracy of the model built on the remaining folds against the first fold.
  * Next, the second fold becomes the test set, and the model trained on the remaining folds is scored on it.
  * Repeat this process up to the last fold.
  * Accuracy is measured for each of the N train/test splits, and the average of those is the final accuracy.
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
_____no_output_____
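To see what the splitter actually produces, a small sketch on toy data (10 samples, 5 folds, no shuffling so the pattern is obvious; every sample lands in the test set exactly once):

```python
toy = np.arange(10)
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=5).split(toy)):
    print("fold {}: train={} test={}".format(fold, train_idx, test_idx))
```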
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
RandomForest
from sklearn.ensemble import RandomForestRegressor

max_depth_list = []

model = RandomForestRegressor(n_estimators=100,  # more is better, but slower
                              n_jobs=-1,
                              random_state=0)
model

%time score = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer)
score = score.mean()
# the closer to 0, the better
print("Score= {0:.5f}".format(score))
Wall time: 19.9 s
Score= 0.33110
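The Feature Selection advice above (add or drop features one at a time) can be automated with the same cross-validation loop; a sketch, reusing the variables already defined in this notebook (note that it refits the model once per feature, so it is slow):

```python
base = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer).mean()
for col in feature_names:
    reduced = X_train.drop(columns=[col])
    s = cross_val_score(model, reduced, y_train, cv=k_fold, scoring=rmsle_scorer).mean()
    print("without {}: {:.5f} (baseline {:.5f})".format(col, s, base))
```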
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Train
# fit (think of fitting clothes): give the model features and labels and it learns by itself
model.fit(X_train, y_train)

# predict
predictions = model.predict(X_test)

print(predictions.shape)
predictions[0:10]

# visualize the predictions
fig,(ax1,ax2)= plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.distplot(y_train,ax=ax1,bins=50)
ax1.set(title="train")
sns.distplot(predictions,ax=ax2,bins=50)
ax2.set(title="test")
_____no_output_____
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Submit
submission = pd.read_csv("data/sampleSubmission.csv")
submission

submission["count"] = predictions

print(submission.shape)
submission.head()

submission.to_csv("data/Score_{0:.5f}_submission.csv".format(score), index=False)
_____no_output_____
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
After separating the tweets into male and female tweets, try to find topics
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF, TruncatedSVD

with open('her_list.txt', 'r') as filename:
    her_list=json.load(filename)

with open('his_list.txt','r') as filename:
    his_list=json.load(filename)

cv_tfidf = TfidfVectorizer(stop_words='english')
X_tfidf = cv_tfidf.fit_transform(her_list)

nmf_model = NMF(3)
topic_matrix = nmf_model.fit_transform(X_tfidf)

def display_topics(model, feature_names, no_top_words, topic_names=None):
    for ix, topic in enumerate(model.components_):
        if not topic_names or not topic_names[ix]:
            print("\nTopic ", ix)
        else:
            print("\nTopic: '",topic_names[ix],"'")
        print(", ".join([feature_names[i]
                         for i in topic.argsort()[:-no_top_words - 1:-1]]))

display_topics(nmf_model, cv_tfidf.get_feature_names(), 25)

lsa = TruncatedSVD(2)
doc_topic = lsa.fit_transform(X_tfidf)
lsa.explained_variance_ratio_

display_topics(lsa, cv_tfidf.get_feature_names(), 20)

for tweet in her_list:
    if ' men ' in tweet:
        print(tweet)

X_tfidf_2 = cv_tfidf.fit_transform(his_list)

nmf_model_2 = NMF(3)
topic_matrix_2 = nmf_model_2.fit_transform(X_tfidf_2)

display_topics(nmf_model_2, cv_tfidf.get_feature_names(), 25)

lsa_2 = TruncatedSVD(4)
doc_topic = lsa_2.fit_transform(X_tfidf_2)
lsa_2.explained_variance_ratio_   # explained variance of the four LSA components

display_topics(lsa_2, cv_tfidf.get_feature_names(), 10)
Topic  0
hes, man, trump, good, men, said, oh, great, got, work

Topic  1
man, oh, boy, thanks, old, life, thankyou, work, sorry, young

Topic  2
hes, man, andrew___baker, oh, talking, boy, gelbach, modeledbehavior, wearing, saying

Topic  3
men, women, white, work, good, economics, macro, great, read, labor
MIT
code/model_his_her_tfidf_nmf.ipynb
my321/project4_econtwitter
_____no_output_____
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
Range

Print the even numbers between 50 and 101. Use the range function technique, which jumps between the numbers.
for i in range(50,101,2):
    print(i)
50
52
54
56
58
60
62
64
66
68
70
72
74
76
78
80
82
84
86
88
90
92
94
96
98
100
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
Print from 3 to 6, remembering that the last number is exclusive.
for i in range(3,6):
    print(i)
3
4
5
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
Generate a negative list, going from 0 down to -20, jumping two numbers at a time. Remembering again that the maximum value is exclusive.
for i in range(0,-20,-2):
    print(i)
0
-2
-4
-6
-8
-10
-12
-14
-16
-18
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
Set the maximum value of the range according to the size of an object in a for loop.
lista = ['morango','abacaxi','banana','melão']

for i in range(0, len(lista)):
    print(lista[i])

# check the type of the range object
type(range(0,5))
_____no_output_____
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
From Variables to Classes

A short Introduction

Python - as any programming language - has many extensions and libraries at its disposal. Basically, there exist libraries for everything. But what are **libraries**? Basically, **libraries** are a collection of methods (_small pieces of code where you put sth in and get sth else out_) which you can use to analyse your data, visualise your data, run models ... do anything you like. As said, methods usually take _something_ as input. That _something_ is usually a **variable**. In the following, we will work our way from **variables** to **libraries**.

Variables

Variables are one of the simplest types of objects in a programming language. An [object](https://en.wikipedia.org/wiki/Object_(computer_science)) is a value stored in the memory of your computer, marked by a specific identifier. Variables can have different types, such as [strings, numbers, and booleans](https://www.learnpython.org/en/Variables_and_Types). Unlike other programming languages, you do not need to declare the type of a variable, as variables are handled as objects in Python.

```python
x = 4.2            # floating point number
y = 'Hello World!' # string
z = True           # boolean
```
x = 4.24725723
print(type(x))

y = 'Hello World! Hello universe'
print(y)

z = True
print(type(z))
<class 'float'> Hello World! Hello universe <class 'bool'>
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
We can use operations (normal arithmetic operations) on variables to get the results we want. With numbers, you can add, subtract, multiply, and divide - basically taking the values from the memory assigned to the variable name and performing calculations. Let's have a look at operations with numbers and strings. We leave booleans to the side for the moment. We will simply add the variables below.

```python
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'
```
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'

first_sum = n1 + n2
print(first_sum)

first_conc = s1 + s2
print(first_conc)
49
Looking good, you are.
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
Variables can be more than just a number. If you think of an Excel spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined in one variable (e.g. one column of an Excel table). So let's create a list - _a collection of variables_ - from `x`, `n1`, and `n2`. Lists in Python are created using [ ]. Now, if you want to calculate the sum of this list, it is really tedious to sum up every item of this list manually.

```python
first_list = [x, n1, n2]

# a sum of a list could look like
second_sum = some_list[0] + some_list[1] + ... + some_list[n]
# where n is the last item of the list, e.g. 2 for first_list.
```

Actually, writing the second sum like this is the same as before. It would be great if this step of calculating the sum could be reused many times without writing it out. And this is what functions are for. For example, there already exists a sum function:

```python
sum(first_list)
```
first_list = [x, n1, n2]
second_sum = first_list[0] + first_list[1] + first_list[2]
print('manual sum {}'.format(second_sum))

# This can also be done with a function
print('sum function {}'.format(sum(first_list)))
manual sum 53.2 sum function 53.2
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
Functions

The `sum()` method we used above is a **function**. Functions (later we will call them methods) are pieces of code which take an input, perform some kind of operation, and (_optionally_) return an output. In Python, functions are written like:

```python
def func(input):
    """
    Description of the function's content  (the function header)
    """
    # some kind of operation on input     (the function body)
    return output
```

As an example, we write a `sumup` function which sums up a list.
def sumup(inp):
    """
    input: inp - list/array with floating point or integer numbers
    return: sumd - scalar value of the summed up list
    """
    val = 0
    for i in inp:
        val = val + i
    return val

# let's compare the implemented standard sum function with the new sumup function
sum1 = sum(first_list)
sum2 = sumup(first_list)
print("The python sum function yields {}, \nand our sumup function yields {}.".format(*(sum1,sum2)))

# summing up the numbers from 1 to 100
import numpy as np
ar_2_sum = np.linspace(1,100,100, dtype='i')

print("the sum of the array is: {}".format(sumup(ar_2_sum)))
the sum of the array is: 5050
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
As we see above, functions are quite practical and save a lot of time. Further, they help structure your code. Some functions are directly available in Python without any libraries or other external software. In the example above however, you might have noticed that we `import`ed a library called `numpy`. In those libraries, functions are merged into one package, with the advantage that you don't need to import each single function at a time. Imagine you move and have to pack all your belongings. You can think of libraries as packing things with a similar purpose in the same box (= library).

Functions to Methods as part of classes

When we talk about functions in the environment of classes, we usually call them methods. But what are **classes**? [Classes](https://docs.python.org/3/tutorial/classes.html) are ways to bundle functionality together. Logically, functionality with a similar purpose (or a different kind of similarity). One example could be: think of **apples**. Apples are now a class. You can apply methods to this class, such as `eat()` or `cut()`. Or more sophisticated methods, including various recipes using apples compiled in a cookbook. The `eat()` method is straightforward. But the `cut()` method may be more interesting, since there are various ways to cut an apple. Let's assume there are two apples to be cut differently. In Python, once you have assigned a class to a variable, you have created an **instance** of that class. Then, methods are applied to that instance by using a . notation.

```python
Golden_Delicious = apple()
Yoya = apple()
Golden_Delicious.cut(4)
Yoya.cut(8)
```

The two apples Golden Delicious and Yoya are _instances_ of the class apple. Real _incarnations_ of the abstract concept _apple_. The Golden Delicious is cut into 4 pieces, while the Yoya is cut into 8 pieces. This is similar to more complex libraries, such as `scikit-learn`. In one exercise, you used the command:

```python
from sklearn.cluster import KMeans
```

which simply imports the **class** `KMeans` from the library part `sklearn.cluster`. `KMeans` comprises several methods for clustering, which you can use by calling them similarly to the apple example before. For this, you need to create an _instance_ of the `KMeans` class.

```python
...
kmeans_inst = KMeans(n_clusters=n_clusters)  # first we create the instance of the KMeans class called kmeans_inst
kmeans_inst.fit(data)                        # then we apply a method to the instance kmeans_inst
...
```

An example:
# here we just create the data for clustering
from sklearn.datasets import make_blobs  # formerly importable from sklearn.datasets.samples_generator
import matplotlib.pyplot as plt
%matplotlib inline

X, y = make_blobs(n_samples=100, centers=3,
                  cluster_std= 0.5, random_state=0)
plt.scatter(X[:,0], X[:,1], s=70)

# now we create an instance of the KMeans class
from sklearn.cluster import KMeans
nr_of_clusters = 3   # because we see 3 clusters in the plot above
kmeans_inst = KMeans(n_clusters= nr_of_clusters)   # create the instance kmeans_inst
kmeans_inst.fit(X)   # apply a method to the instance

y_predict = kmeans_inst.predict(X)   # apply another method to the instance and save it in another variable

# lets plot the predicted cluster centers colored in the cluster color
plt.scatter(X[:, 0], X[:, 1], c=y_predict, s=50, cmap='Accent')
centers = kmeans_inst.cluster_centers_   # apply the method to find the new centers of the determined clusters
plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.6)   # plot the cluster centers
_____no_output_____
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
Visualize Counts for the three classes

The number of volume-wise predictions for each of the three classes can be visualized in a 2D space (with two classes as the axes and the remainder, 100 - class1 - class2, as the value of the third class). Also, the percentage of volume-wise predictions can be shown in a modified pie chart, i.e. a doughnut plot.

import modules
import os
import pickle

import numpy as np
import pandas as pd

from sklearn import preprocessing
from sklearn import svm

import scipy.misc
from scipy import ndimage
from scipy.stats import beta

from PIL import Image

import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
sns.set_style('ticks')

# after conversion to .py, we can use __file__ to get the module folder
try:
    thisDir = os.path.realpath(__file__)
# in notebook form, we take the current working directory (we need to be in 'notebooks/' for this!)
except:
    thisDir = '.'
# convert relative path into absolute path, so this will work with notebooks and py modules
supDir = os.path.abspath(os.path.join(os.path.dirname(thisDir), '..'))

supDir
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
Outline the WTA prediction model

make all possible values
def make_all_dummy():
    my_max = 100

    d = {}

    count = 0
    for bi in np.arange(0,my_max+(10**-10),0.5):
        left_and_right = my_max - bi
        for left in np.arange(0,left_and_right+(10**-10),0.5):
            right = left_and_right-left
            d[count] = {'left':left,'bilateral':bi,'right':right}
            count+=1

    df = pd.DataFrame(d).T

    assert np.unique(df.sum(axis=1))[-1] == my_max

    df['pred'] = df.idxmax(axis=1)

    return df

dummy_df = make_all_dummy()

dummy_df.tail()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
transform labels into numbers
my_labeler = preprocessing.LabelEncoder()
my_labeler.fit(['left','bilateral','right','inconclusive'])

my_labeler.classes_
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
2d space where the highest number indicates class membership (WTA)
def make_dummy_space(dummy_df):
    space_df = dummy_df.copy()
    space_df['pred'] = my_labeler.transform(dummy_df['pred'])
    space_df.index = [space_df.left, space_df.right]
    space_df = space_df[['pred']]
    space_df = space_df.unstack(1)['pred']
    return space_df

dummy_space_df = make_dummy_space(dummy_df)

dummy_space_df.tail()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
define color map
colors_file = os.path.join(supDir,'models','colors.p')
with open(colors_file, 'rb') as f:
    color_dict = pickle.load(f)

my_cols = {}
for i, j in zip(['red','yellow','blue','trans'], ['left','bilateral','right','inconclusive']):
    my_cols[j] = color_dict[i]

my_col_order = np.array([my_cols[g] for g in my_labeler.classes_])

cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", my_col_order)
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
plot WTA predictions
plt.figure(figsize=(6,6))

plt.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8)
plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True)

plt.xlabel('right',fontsize=32)
plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
plt.ylabel('left',fontsize=32)

sns.despine()

plt.show()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
load data
groupdata_filename = '../data/processed/csv/withinconclusive_prediction_df.csv'
prediction_df = pd.read_csv(groupdata_filename,index_col=[0,1],header=0)
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
toolbox use
#groupdata_filename = os.path.join(supDir,'models','withinconclusive_prediction_df.csv')
#prediction_df = pd.read_csv(groupdata_filename,index_col=[0,1],header=0)

prediction_df.tail()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
show data and WTA space
plt.figure(figsize=(6,6))

plt.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8)
plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True)

for c in ['left','right','bilateral']:
    a_df = prediction_df.loc[c,['left','right']] * 100
    plt.scatter(a_df['right'],a_df['left'],c=[my_cols[c]],edgecolor='k',linewidth=2,s=200,alpha=0.6)

plt.xlabel('right',fontsize=32)
plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
plt.ylabel('left',fontsize=32)

sns.despine()

plt.savefig('../reports/figures/14-prediction-space.png',dpi=300,bbox_inches='tight')
plt.show()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
show one patient's data

doughnut plot
p_name = 'pat###'

p_count_df = pd.read_csv('../data/processed/csv/%s_counts_df.csv'%p_name,index_col=[0,1],header=0)

p_count_df

def make_donut(p_count_df, ax, my_cols=my_cols):
    """show proportion of the number of volumes correlating highest with one of the three groups"""

    percentages = p_count_df/p_count_df.sum(axis=1).values[-1] * 100

    ## donut plot visualization adapted from https://gist.github.com/krishnakummar/ad00d05311977732764f#file-donut-example-py
    ax.pie(
        percentages.values[-1],
        pctdistance=0.75,
        colors=[my_cols[x] for x in percentages.columns],
        autopct='%0.0f%%',
        shadow=False,
        textprops={'fontsize': 40})

    centre_circle = plt.Circle((0, 0), 0.55, fc='white')
    ax.add_artist(centre_circle)
    ax.set_aspect('equal')

    return ax

fig,ax = plt.subplots(1,1,figsize=(8,8))
ax = make_donut(p_count_df,ax)
plt.savefig('../examples/%s_donut.png'%p_name,dpi=300,bbox_inches='tight')
plt.show()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
prediction space
def make_pred_space(p_count_df, prediction_df, ax, dummy_space_df=dummy_space_df):

    ax.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8)
    ax.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True)

    for c in ['left','right','bilateral']:
        a_df = prediction_df.loc[c,['left','right']] * 100
        ax.scatter(a_df['right'],a_df['left'],c=[my_cols[c]],edgecolor='k',linewidth=2,s=200,alpha=0.6)

    percentages = p_count_df/p_count_df.sum(axis=1).values[-1] * 100
    y_pred = percentages.idxmax(axis=1).values[-1]
    ax.scatter(percentages['right'],percentages['left'],c=[my_cols[y_pred]],edgecolor='white',linewidth=4,s=1500,alpha=1)

    plt.xlabel('right',fontsize=32)
    plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
    plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28)
    plt.ylabel('left',fontsize=32)

    sns.despine()

    return ax

fig,ax = plt.subplots(1,1,figsize=(8,8))
ax = make_pred_space(p_count_df,prediction_df,ax)
plt.savefig('../examples/%s_predSpace.png'%p_name,dpi=300,bbox_inches='tight')
plt.show()
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
toolbox use
#def make_p(pFolder,pName,prediction_df=prediction_df):
#
#    count_filename = os.path.join(pFolder,''.join([pName,'_counts_df.csv']))
#    p_count_df = pd.read_csv(count_filename,index_col=[0,1],header=0)
#
#    fig = plt.figure(figsize=(8,8))
#    ax = plt.subplot(111)
#    ax = make_donut(p_count_df,ax)
#    out_name_donut = os.path.join(pFolder,''.join([pName,'_donut.png']))
#    plt.savefig(out_name_donut,dpi=300,bbox_inches='tight')
#    plt.close()
#
#    fig = plt.figure(figsize=(8,8))
#    with sns.axes_style("ticks"):
#        ax = plt.subplot(111)
#        ax = make_pred_space(p_count_df,prediction_df,ax)
#    out_name_space = os.path.join(pFolder,''.join([pName,'_predSpace.png']))
#    plt.savefig(out_name_space,dpi=300,bbox_inches='tight')
#    plt.close()
#
#    return out_name_donut, out_name_space
_____no_output_____
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
Torch Core

> Basic pytorch functions used in the fastai library

Arrays and show
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, figsize=None, imsize=3, add_vert=0, **kwargs):
    if figsize is None: figsize=(ncols*imsize, nrows*imsize+add_vert)
    fig,ax = plt.subplots(nrows, ncols, figsize=figsize, **kwargs)
    if nrows*ncols==1: ax = array([ax])
    return fig,ax

#hide
_,axs = subplots()
test_eq(axs.shape,[1])
plt.close()
_,axs = subplots(2,3)
test_eq(axs.shape,[2,3])
plt.close()

#export
def _fig_bounds(x):
    r = x//32
    return min(5, max(1,r))

#export
def show_image(im, ax=None, figsize=None, title=None, ctx=None, **kwargs):
    "Show a PIL or PyTorch image on `ax`."
    # Handle pytorch axis order
    if hasattrs(im, ('data','cpu','permute')):
        im = im.data.cpu()
        if im.shape[0]<5: im=im.permute(1,2,0)
    elif not isinstance(im,np.ndarray): im=array(im)
    # Handle 1-channel images
    if im.shape[-1]==1: im=im[...,0]
    ax = ifnone(ax,ctx)
    if figsize is None: figsize = (_fig_bounds(im.shape[0]), _fig_bounds(im.shape[1]))
    if ax is None: _,ax = plt.subplots(figsize=figsize)
    ax.imshow(im, **kwargs)
    if title is not None: ax.set_title(title)
    ax.axis('off')
    return ax
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`show_image` can show PIL images...
im = Image.open(TEST_IMAGE_BW)
ax = show_image(im, cmap="Greys")
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
...and color images with standard `CHW` dim order...
im2 = np.array(Image.open(TEST_IMAGE))
ax = show_image(im2, figsize=(2,2))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
...and color images with `HWC` dim order...
im3 = torch.as_tensor(im2).permute(2,0,1)
ax = show_image(im3, figsize=(2,2))

#export
def show_titled_image(o, **kwargs):
    "Call `show_image` destructuring `o` to `(img,title)`"
    show_image(o[0], title=str(o[1]), **kwargs)

show_titled_image((im3,'A puppy'), figsize=(2,2))

#export
@delegates(subplots)
def show_images(ims, nrows=1, ncols=None, titles=None, **kwargs):
    "Show all images `ims` as subplots with `rows` using `titles`"
    if ncols is None: ncols = int(math.ceil(len(ims)/nrows))
    if titles is None: titles = [None]*len(ims)
    axs = subplots(nrows, ncols, **kwargs)[1].flat
    for im,t,ax in zip(ims, titles, axs): show_image(im, ax=ax, title=t)

show_images((im,im3), titles=('number','puppy'), imsize=2)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`ArrayImage`, `ArrayImageBW` and `ArrayMask` are subclasses of `ndarray` that know how to show themselves.
#export
class ArrayBase(ndarray):
    @classmethod
    def _before_cast(cls, x): return x if isinstance(x,ndarray) else array(x)

#export
class ArrayImageBase(ArrayBase):
    _show_args = {'cmap':'viridis'}
    def show(self, ctx=None, **kwargs):
        return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})

#export
class ArrayImage(ArrayImageBase): pass

#export
class ArrayImageBW(ArrayImage): _show_args = {'cmap':'Greys'}

#export
class ArrayMask(ArrayImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20', 'interpolation':'nearest'}

im = Image.open(TEST_IMAGE)

im_t = cast(im, ArrayImage)
test_eq(type(im_t), ArrayImage)

ax = im_t.show(figsize=(2,2))

test_fig_exists(ax)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Basics
#export
@patch
def __array_eq__(self:Tensor,b):
    return torch.equal(self,b) if self.dim() else self==b

#export
def _array2tensor(x):
    if x.dtype==np.uint16: x = x.astype(np.float32)
    return torch.from_numpy(x)

#export
def tensor(x, *rest, **kwargs):
    "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
    if len(rest): x = (x,)+rest
    # There was a Pytorch bug in dataloader using num_workers>0. Haven't confirmed if fixed
    # if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
    res = (x if isinstance(x, Tensor)
           else torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
           else _array2tensor(x) if isinstance(x, ndarray)
           else as_tensor(x.values, **kwargs) if isinstance(x, (pd.Series, pd.DataFrame))
           else as_tensor(x, **kwargs) if hasattr(x, '__array__') or is_iter(x)
           else _array2tensor(array(x), **kwargs))
    if res.dtype is torch.float64: return res.float()
    return res

test_eq(tensor(torch.tensor([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(1,2,3), torch.tensor([1,2,3]))
test_eq_type(tensor(1.0), torch.tensor(1.0))

#export
def set_seed(s):
    "Set random seed for `random`, `torch`, and `numpy` (where available)"
    try: torch.manual_seed(s)
    except NameError: pass
    try: np.random.seed(s%(2**32-1))
    except NameError: pass
    random.seed(s)

set_seed(2*33)
a1 = np.random.random()
a2 = torch.rand(())
a3 = random.random()
set_seed(2*33)
b1 = np.random.random()
b2 = torch.rand(())
b3 = random.random()
test_eq(a1,b1)
test_eq(a2,b2)
test_eq(a3,b3)

#export
def unsqueeze(x, dim=-1, n=1):
    "Same as `torch.unsqueeze` but can add `n` dims"
    for _ in range(n): x = x.unsqueeze(dim)
    return x

t = tensor([1])
t2 = unsqueeze(t, n=2)
test_eq(t2,t[:,None,None])

#export
def unsqueeze_(x, dim=-1, n=1):
    "Same as `torch.unsqueeze_` but can add `n` dims"
    for _ in range(n): x.unsqueeze_(dim)
    return x

t = tensor([1])
unsqueeze_(t, n=2)
test_eq(t, tensor([1]).view(1,1,1))

#export
def _fa_rebuild_tensor (cls, *args, **kwargs): return cls(torch._utils._rebuild_tensor_v2(*args, **kwargs))
def _fa_rebuild_qtensor(cls, *args, **kwargs): return cls(torch._utils._rebuild_qtensor (*args, **kwargs))

#export
def apply(func, x, *args, **kwargs):
    "Apply `func` recursively to `x`, passing on args"
    if is_listy(x): return type(x)([apply(func, o, *args, **kwargs) for o in x])
    if isinstance(x,dict):  return {k: apply(func, v, *args, **kwargs) for k,v in x.items()}
    res = func(x, *args, **kwargs)
    return res if x is None else retain_type(res, x)

#export
def maybe_gather(x, axis=0):
    "Gather copies of `x` on `axis` (if training is distributed)"
    if num_distrib()<=1: return x
    ndim = x.ndim
    res = [x.new_zeros(*x.shape if ndim > 0 else (1,)) for _ in range(num_distrib())]
    torch.distributed.all_gather(res, x if ndim > 0 else x[None])
    return torch.cat(res, dim=axis) if ndim > 0 else torch.cat(res, dim=axis).mean()

#export
def to_detach(b, cpu=True, gather=True):
    "Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`."
    def _inner(x, cpu=True, gather=True):
        if not isinstance(x,Tensor): return x
        x = x.detach()
        if gather: x = maybe_gather(x)
        return x.cpu() if cpu else x
    return apply(_inner, b, cpu=cpu, gather=gather)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`gather` only applies during distributed training and the result tensor will be the one gathered across processes if `gather=True` (as a result, the batch size will be multiplied by the number of processes).
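`to_detach` itself is not exercised above, so here is a minimal sketch of how it recurses through nested containers (the toy batch `b` is hypothetical; in a non-distributed session the gather step is a no-op):

```python
b = {'x': torch.randn(2,3, requires_grad=True), 'y': [tensor(1.), tensor(2.)]}
db = to_detach(b)                        # detached, gathered (no-op here), moved to CPU
test_eq(db['x'].requires_grad, False)
test_eq(db['y'], [tensor(1.), tensor(2.)])
```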
#export
def to_half(b):
    "Recursively map lists of tensors in `b ` to FP16."
    return apply(lambda x: x.half() if torch.is_floating_point(x) else x, b)

#export
def to_float(b):
    "Recursively map lists of int tensors in `b ` to float."
    return apply(lambda x: x.float() if torch.is_floating_point(x) else x, b)

#export
# None: True if available; True: error if not available; False: use CPU
defaults.use_cuda = None

#export
def default_device(use_cuda=-1):
    "Return or set default device; `use_cuda`: None - CUDA if available; True - error if not available; False - CPU"
    if use_cuda != -1: defaults.use_cuda=use_cuda
    use = defaults.use_cuda or (torch.cuda.is_available() and defaults.use_cuda is None)
    assert torch.cuda.is_available() or not use
    return torch.device(torch.cuda.current_device()) if use else torch.device('cpu')

#cuda
_td = torch.device(torch.cuda.current_device())
test_eq(default_device(None), _td)
test_eq(default_device(True), _td)
test_eq(default_device(False), torch.device('cpu'))
default_device(None);

#export
def to_device(b, device=None):
    "Recursively put `b` on `device`."
    if defaults.use_cuda==False: device='cpu'
    elif device is None: device=default_device()
    def _inner(o): return o.to(device, non_blocking=True) if isinstance(o,Tensor) else o.to_device(device) if hasattr(o, "to_device") else o
    return apply(_inner, b)

t = to_device((3,(tensor(3),tensor(2))))
t1,(t2,t3) = t
test_eq_type(t,(3,(tensor(3).cuda(),tensor(2).cuda())))
test_eq(t2.type(), "torch.cuda.LongTensor")
test_eq(t3.type(), "torch.cuda.LongTensor")

#export
def to_cpu(b):
    "Recursively map lists of tensors in `b ` to the cpu."
    return to_device(b,'cpu')

t3 = to_cpu(t3)
test_eq(t3.type(), "torch.LongTensor")
test_eq(t3, 2)

#export
def to_np(x):
    "Convert a tensor to a numpy array."
    return apply(lambda o: o.data.cpu().numpy(), x)

t3 = to_np(t3)
test_eq(type(t3), np.ndarray)
test_eq(t3, 2)

#export
def to_concat(xs, dim=0):
    "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
    if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs[0].keys()}
    # We may receive xs that are not concatenatable (inputs of a text classifier for instance),
    # in this case we return a big list
    try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    except: return sum([L(retain_type(o_.index_select(dim, tensor(i)).squeeze(dim), xs[0])
                          for i in range_of(o_)) for o_ in xs], L())

test_eq(to_concat([tensor([1,2]), tensor([3,4])]), tensor([1,2,3,4]))
test_eq(to_concat([tensor([[1,2]]), tensor([[3,4]])], dim=1), tensor([[1,2,3,4]]))
test_eq_type(to_concat([(tensor([1,2]), tensor([3,4])), (tensor([3,4]), tensor([5,6]))]), (tensor([1,2,3,4]), tensor([3,4,5,6])))
test_eq_type(to_concat([[tensor([1,2]), tensor([3,4])], [tensor([3,4]), tensor([5,6])]]), [tensor([1,2,3,4]), tensor([3,4,5,6])])
test_eq_type(to_concat([(tensor([1,2]),), (tensor([3,4]),)]), (tensor([1,2,3,4]),))

test_eq(to_concat([tensor([[1,2]]), tensor([[3,4], [5,6]])], dim=1), [tensor([1]),tensor([3, 5]),tensor([4, 6])])
test_eq(type(to_concat([dict(foo=tensor([1,2]), bar=tensor(3,4))])), dict)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Tensor subtypes
#export
@patch
def set_meta(self:Tensor, x):
    "Set all metadata in `__dict__`"
    if hasattr(x,'__dict__'): self.__dict__ = x.__dict__

#export
@patch
def get_meta(self:Tensor, n, d=None):
    "Set `n` from `self._meta` if it exists and returns default `d` otherwise"
    return getattr(self, '_meta', {}).get(n, d)

#export
@patch
def as_subclass(self:Tensor, typ):
    "Cast to `typ` (should be in future PyTorch version, so remove this then)"
    res = torch.Tensor._make_subclass(typ, self)
    return retain_meta(self, res)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`Tensor.set_meta` and `Tensor.as_subclass` work together to maintain `_meta` after casting.
class _T(Tensor): pass

t = tensor(1)
t._meta = {'img_size': 1}
t2 = t.as_subclass(_T)
test_eq(t._meta, t2._meta)
test_eq(t2.get_meta('img_size'), 1)

#export
class TensorBase(Tensor):
    def __new__(cls, x, **kwargs):
        res = cast(tensor(x), cls)
        res._meta = kwargs
        return res

    @classmethod
    def _before_cast(cls, x): return x if isinstance(x,Tensor) else tensor(x)

    def __reduce_ex__(self,proto):
        torch.utils.hooks.warn_if_has_hooks(self)
        args = (type(self), self.storage(), self.storage_offset(), tuple(self.size()), self.stride())
        if self.is_quantized: args = args + (self.q_scale(), self.q_zero_point())
        f = _fa_rebuild_qtensor if self.is_quantized else _fa_rebuild_tensor
        return (f, args + (self.requires_grad, OrderedDict()))

    def gi(self, i):
        res = self[i]
        return res.as_subclass(type(self)) if isinstance(res,Tensor) else res

    def __repr__(self):
        return re.sub('tensor', self.__class__.__name__, super().__repr__())

#export
def _patch_tb():
    if getattr(TensorBase,'_patched',False): return
    TensorBase._patched = True

    def get_f(fn):
        def _f(self, *args, **kwargs):
            cls = self.__class__
            res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
            return retain_type(res, self)
        return _f

    t = tensor([1])
    skips = 'as_subclass __getitem__ __class__ __deepcopy__ __delattr__ __dir__ __doc__ __getattribute__ __hash__ __init__ __init_subclass__ __new__ __reduce__ __reduce_ex__ __repr__ __module__ __setstate__'.split()

    for fn in dir(t):
        if fn in skips: continue
        f = getattr(t, fn)
        if isinstance(f, (MethodWrapperType, BuiltinFunctionType, BuiltinMethodType, MethodType, FunctionType)):
            setattr(TensorBase, fn, get_f(fn))

_patch_tb()

#export
class TensorCategory(TensorBase): pass

#export
class TensorMultiCategory(TensorCategory): pass

class _T(TensorBase): pass

t = _T(range(5))
test_eq(t[0], 0)
test_eq_type(t.gi(0), _T(0))
test_eq_type(t.gi(slice(2)), _T([0,1]))
test_eq_type(t+1, _T(range(1,6)))
test_eq(repr(t), '_T([0, 1, 2, 3, 4])')

test_eq(type(pickle.loads(pickle.dumps(t))), _T)

t = tensor([1,2,3])
m = TensorBase([False,True,True])
test_eq(t[m], tensor([2,3]))
t = tensor([[1,2,3],[1,2,3]])
m = cast(tensor([[False,True,True],
                 [False,True,True]]), TensorBase)
test_eq(t[m], tensor([2,3,2,3]))

t = tensor([[1,2,3],[1,2,3]])
t._meta = {'img_size': 1}
t2 = cast(t, TensorBase)
test_eq(t2._meta, t._meta)
x = retain_type(tensor([4,5,6]), t2)
test_eq(x._meta, t._meta)
t3 = TensorBase([[1,2,3],[1,2,3]], img_size=1)
test_eq(t3._meta, t._meta)

#export
class TensorImageBase(TensorBase):
    _show_args = ArrayImageBase._show_args
    def show(self, ctx=None, **kwargs):
        return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})

#export
class TensorImage(TensorImageBase): pass

#export
class TensorImageBW(TensorImage): _show_args = ArrayImageBW._show_args

#export
class TensorMask(TensorImageBase):
    _show_args = ArrayMask._show_args

    def show(self, ctx=None, **kwargs):
        codes = self.get_meta('codes')
        if codes is not None: kwargs = merge({'vmin': 1, 'vmax': len(codes)}, kwargs)
        return super().show(ctx=ctx, **kwargs)

im = Image.open(TEST_IMAGE)
im_t = cast(array(im), TensorImage)
test_eq(type(im_t), TensorImage)

im_t2 = cast(tensor(1), TensorMask)
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))

ax = im_t.show(figsize=(2,2))

test_fig_exists(ax)

#hide (last test of to_concat)
test_eq_type(to_concat([TensorImage([1,2]), TensorImage([3,4])]), TensorImage([1,2,3,4]))

#export
class TitledTensorScalar(TensorBase):
    "A tensor containing a scalar that has a `show` method"
    def show(self, **kwargs): show_title(self.item(), **kwargs)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
L -
#export
@patch
def tensored(self:L):
    "`mapped(tensor)`"
    return self.map(tensor)
@patch
def stack(self:L, dim=0):
    "Same as `torch.stack`"
    return torch.stack(list(self.tensored()), dim=dim)
@patch
def cat (self:L, dim=0):
    "Same as `torch.cat`"
    return torch.cat (list(self.tensored()), dim=dim)

show_doc(L.tensored)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`.
t = L(([1,2],[3,4]))
test_eq(t.tensored(), [tensor(1,2),tensor(3,4)])

show_doc(L.stack)

test_eq(t.stack(), tensor([[1,2],[3,4]]))

show_doc(L.cat)

test_eq(t.cat(), tensor([1,2,3,4]))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Chunks
#export
def concat(*ls):
    "Concatenate tensors, arrays, lists, or tuples"
    if not len(ls): return []
    it = ls[0]
    if isinstance(it,torch.Tensor): res = torch.cat(ls)
    elif isinstance(it,ndarray): res = np.concatenate(ls)
    else:
        res = itertools.chain.from_iterable(map(L,ls))
        if isinstance(it,(tuple,list)): res = type(it)(res)
        else: res = L(res)
    return retain_type(res, it)

a,b,c = [1],[1,2],[1,1,2]
test_eq(concat(a,b), c)
test_eq_type(concat(tuple (a),tuple (b)), tuple (c))
test_eq_type(concat(array (a),array (b)), array (c))
test_eq_type(concat(tensor(a),tensor(b)), tensor(c))
test_eq_type(concat(TensorBase(a),TensorBase(b)), TensorBase(c))
test_eq_type(concat([1,1],1), [1,1,1])
test_eq_type(concat(1,1,1), L(1,1,1))
test_eq_type(concat(L(1,2),1), L(1,2,1))

#export
class Chunks:
    "Slice and int indexing into a list of lists"
    def __init__(self, chunks, lens=None):
        self.chunks = chunks
        self.lens = L(map(len,self.chunks) if lens is None else lens)
        self.cumlens = np.cumsum(0+self.lens)
        self.totlen = self.cumlens[-1]

    def __getitem__(self,i):
        if isinstance(i,slice): return retain_type(self.getslice(i), old=self.chunks[0])
        di,idx = self.doc_idx(i)
        return retain_type(self.chunks[di][idx], old=self.chunks[0])

    def getslice(self, i):
        st_d,st_i = self.doc_idx(ifnone(i.start,0))
        en_d,en_i = self.doc_idx(ifnone(i.stop,self.totlen+1))
        res = [self.chunks[st_d][st_i:(en_i if st_d==en_d else sys.maxsize)]]
        for b in range(st_d+1,en_d): res.append(self.chunks[b])
        if st_d!=en_d and en_d<len(self.chunks): res.append(self.chunks[en_d][:en_i])
        return concat(*res)

    def doc_idx(self, i):
        if i<0: i=self.totlen+i # count from end
        docidx = np.searchsorted(self.cumlens, i+1)-1
        cl = self.cumlens[docidx]
        return docidx,i-cl

docs = L(list(string.ascii_lowercase[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))

b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], ['a','b','c','d','e'])
test_eq([b[-o] for o in range(1,6)], ['z','y','x','w','v'])
test_eq(b[6:13], 'g,h,i,j,k,l,m'.split(','))
test_eq(b[20:77], 'u,v,w,x,y,z'.split(','))
test_eq(b[:5], 'a,b,c,d,e'.split(','))
test_eq(b[:2], 'a,b'.split(','))

t = torch.arange(26)
docs = L(t[a:b] for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], range(0,5))
test_eq([b[-o] for o in range(1,6)], [25,24,23,22,21])
test_eq(b[6:13], torch.arange(6,13))
test_eq(b[20:77], torch.arange(20,26))
test_eq(b[:5], torch.arange(5))
test_eq(b[:2], torch.arange(2))

docs = L(TensorBase(t[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq_type(b[:2], TensorBase(range(2)))
test_eq_type(b[:5], TensorBase(range(5)))
test_eq_type(b[9:13], TensorBase(range(9,13)))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Simple types
#export
def show_title(o, ax=None, ctx=None, label=None, color='black', **kwargs):
    "Set title of `ax` to `o`, or print `o` if `ax` is `None`"
    ax = ifnone(ax,ctx)
    if ax is None: print(o)
    elif hasattr(ax, 'set_title'):
        t = ax.title.get_text()
        if len(t) > 0: o = t+'\n'+str(o)
        ax.set_title(o, color=color)
    elif isinstance(ax, pd.Series):
        while label in ax: label += '_'
        ax = ax.append(pd.Series({label: o}))
    return ax

test_stdout(lambda: show_title("title"), "title")
# ensure that col names are unique when showing to a pandas series
assert show_title("title", ctx=pd.Series(dict(a=1)), label='a').equals(pd.Series(dict(a=1,a_='title')))

#export
class ShowTitle:
    "Base class that adds a simple `show`"
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))

class TitledInt(Int, ShowTitle):
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))

class TitledFloat(Float, ShowTitle):
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))

class TitledStr(Str, ShowTitle):
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))

class TitledTuple(Tuple, ShowTitle):
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))

add_docs(TitledInt, "An `int` with `show`");
add_docs(TitledStr, "An `str` with `show`");
add_docs(TitledFloat, "A `float` with `show`");
add_docs(TitledTuple, "A `Tuple` with `show`")

show_doc(TitledInt, title_level=3)

show_doc(TitledStr, title_level=3)

show_doc(TitledFloat, title_level=3)

test_stdout(lambda: TitledStr('s').show(), 's')
test_stdout(lambda: TitledInt(1).show(), '1')

show_doc(TitledTuple, title_level=3)

#hide
df = pd.DataFrame(index = range(1))
row = df.iloc[0]
x = TitledFloat(2.56)
row = x.show(ctx=row, label='lbl')
test_eq(float(row.lbl), 2.56)

#export
@patch
def truncate(self:TitledStr, n):
    "Truncate self to `n`"
    words = self.split(' ')[:n]
    return TitledStr(' '.join(words))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Other functions
#export
if not hasattr(pd.DataFrame,'_old_init'): pd.DataFrame._old_init = pd.DataFrame.__init__

#export
@patch
def __init__(self:pd.DataFrame, data=None, index=None, columns=None, dtype=None, copy=False):
    if data is not None and isinstance(data, Tensor): data = to_np(data)
    self._old_init(data, index=index, columns=columns, dtype=dtype, copy=copy)

#export
def get_empty_df(n):
    "Return `n` empty rows of a dataframe"
    df = pd.DataFrame(index = range(n))
    return [df.iloc[i] for i in range(n)]

#export
def display_df(df):
    "Display `df` in a notebook or defaults to print"
    try: from IPython.display import display, HTML
    except: return print(df)
    display(HTML(df.to_html()))

#export
def get_first(c):
    "Get the first element of c, even if c is a dataframe"
    return getattr(c, 'iloc', c)[0]

#export
def one_param(m):
    "First parameter in `m`"
    return first(m.parameters())

#export
def item_find(x, idx=0):
    "Recursively takes the `idx`-th element of `x`"
    if is_listy(x): return item_find(x[idx])
    if isinstance(x,dict):
        key = list(x.keys())[idx] if isinstance(idx, int) else idx
        return item_find(x[key])
    return x

#export
def find_device(b):
    "Recursively search the device of `b`."
    return item_find(b).device

t2 = to_device(tensor(0))
dev = default_device()
test_eq(find_device(t2), dev)
test_eq(find_device([t2,t2]), dev)
test_eq(find_device({'a':t2,'b':t2}), dev)
test_eq(find_device({'a':[[t2],[t2]],'b':t2}), dev)

#export
def find_bs(b):
    "Recursively search the batch size of `b`."
    return item_find(b).shape[0]

x = torch.randn(4,5)
test_eq(find_bs(x), 4)
test_eq(find_bs([x, x]), 4)
test_eq(find_bs({'a':x,'b':x}), 4)
test_eq(find_bs({'a':[[x],[x]],'b':x}), 4)

def np_func(f):
    "Convert a function taking and returning numpy arrays to one taking and returning tensors"
    def _inner(*args, **kwargs):
        nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args]
        return tensor(f(*nargs, **kwargs))
    functools.update_wrapper(_inner, f)
    return _inner
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
This decorator is particularly useful for using numpy functions as fastai metrics, for instance:
from sklearn.metrics import f1_score

@np_func
def f1(inp,targ): return f1_score(targ, inp)

a1,a2 = array([0,1,1]),array([1,0,1])
t = f1(tensor(a1),tensor(a2))
test_eq(f1_score(a1,a2), t)
assert isinstance(t,Tensor)

#export
class Module(nn.Module, metaclass=PrePostInitMeta):
    "Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
    def __pre_init__(self, *args, **kwargs): super().__init__()
    def __init__(self): pass

show_doc(Module, title_level=3)

class _T(Module):
    def __init__(self): self.f = nn.Linear(1,1)
    def forward(self,x): return self.f(x)

t = _T()
t(tensor([1.]))

# export
from torch.nn.parallel import DistributedDataParallel

def get_model(model):
    "Return the model maybe wrapped inside `model`."
    return model.module if isinstance(model, (DistributedDataParallel, nn.DataParallel)) else model

# export
def one_hot(x, c):
    "One-hot encode `x` with `c` classes."
    res = torch.zeros(c, dtype=torch.uint8)
    if isinstance(x, Tensor) and x.numel()>0: res[x] = 1.
    else: res[list(L(x, use_list=None))] = 1.
    return res

test_eq(one_hot([1,4], 5), tensor(0,1,0,0,1).byte())
test_eq(one_hot(torch.tensor([]), 5), tensor(0,0,0,0,0).byte())
test_eq(one_hot(2, 5), tensor(0,0,1,0,0).byte())

#export
def one_hot_decode(x, vocab=None):
    return L(vocab[i] if vocab else i for i,x_ in enumerate(x) if x_==1)

test_eq(one_hot_decode(tensor(0,1,0,0,1)), [1,4])
test_eq(one_hot_decode(tensor(0,0,0,0,0)), [ ])
test_eq(one_hot_decode(tensor(0,0,1,0,0)), [2 ])

#export
def params(m):
    "Return all parameters of `m`"
    return [p for p in m.parameters()]

#export
def trainable_params(m):
    "Return all trainable parameters of `m`"
    return [p for p in m.parameters() if p.requires_grad]

m = nn.Linear(4,5)
test_eq(trainable_params(m), [m.weight, m.bias])
m.weight.requires_grad_(False)
test_eq(trainable_params(m), [m.bias])

#export
norm_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.InstanceNorm1d, nn.InstanceNorm2d, nn.InstanceNorm3d)

#export
def bn_bias_params(m, with_bias=True): # TODO: Rename to `norm_bias_params`
    "Return all bias and BatchNorm parameters"
    if isinstance(m, norm_types): return L(m.parameters())
    res = L(m.children()).map(bn_bias_params, with_bias=with_bias).concat()
    if with_bias and getattr(m, 'bias', None) is not None: res.append(m.bias)
    return res

for norm_func in [nn.BatchNorm1d, partial(nn.InstanceNorm1d, affine=True)]:
    model = nn.Sequential(nn.Linear(10,20), norm_func(20), nn.Conv1d(3,4, 3))
    test_eq(bn_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias])
    model = nn.ModuleList([nn.Linear(10,20, bias=False), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))])
    test_eq(bn_bias_params(model), [model[1][0].weight, model[1][0].bias, model[1][1].bias])
    model = nn.ModuleList([nn.Linear(10,20), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))])
    test_eq(bn_bias_params(model, with_bias=False), [model[1][0].weight, model[1][0].bias])

#export
def batch_to_samples(b, max_n=10):
    "'Transposes' a batch to (at most `max_n`) samples"
    if isinstance(b, Tensor): return retain_types(list(b[:max_n]), [b])
    else:
        res = L(b).map(partial(batch_to_samples,max_n=max_n))
        return retain_types(res.zip(), [b])

t = tensor([1,2,3])
test_eq(batch_to_samples([t,t+1], max_n=2), ([1,2],[2,3]))
test_eq(batch_to_samples(tensor([1,2,3]), 10), [1, 2, 3])
test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 10), [(1, 4), (2, 5), (3, 6)])
test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 2), [(1, 4), (2, 5)])
test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 10),
        [(1, (4, 7)), (2, (5, 8)), (3, (6, 9))])
test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 2), [(1, (4, 7)), (2, (5, 8))])

t = Tuple(tensor([1,2,3]),TensorBase([2,3,4]))
test_eq_type(batch_to_samples(t)[0][1], TensorBase(2))
test_eq(batch_to_samples(t).map(type), [Tuple]*3)

#export
@patch
def interp_1d(x:Tensor, xp, fp):
    "Same as `np.interp`"
    slopes = (fp[1:]-fp[:-1])/(xp[1:]-xp[:-1])
    incx = fp[:-1] - (slopes*xp[:-1])
    locs = (x[:,None]>=xp[None,:]).long().sum(1)-1
    locs = locs.clamp(0,len(slopes)-1)
    return slopes[locs]*x + incx[locs]

brks = tensor(0,1,2,4,8,64).float()
ys = tensor(range_of(brks)).float()
ys /= ys[-1].item()
pts = tensor(0.2,0.5,0.8,3,5,63)

preds = pts.interp_1d(brks, ys)
test_close(preds.numpy(), np.interp(pts.numpy(), brks.numpy(), ys.numpy()))

plt.scatter(brks,ys)
plt.scatter(pts,preds)
plt.legend(['breaks','preds']);

#export
@patch
def pca(x:Tensor, k=2):
    "Compute PCA of `x` with `k` dimensions."
    x = x-torch.mean(x,0)
    U,S,V = torch.svd(x.t())
    return torch.mm(x,U[:,:k])

# export
def logit(x):
    "Logit of `x`, clamped to avoid inf."
    x = x.clamp(1e-7, 1-1e-7)
    return -(1/x-1).log()

#export
def num_distrib():
    "Return the number of processes in distributed training (if applicable)."
    return int(os.environ.get('WORLD_SIZE', 0))

#export
def rank_distrib():
    "Return the distributed rank of this process (if applicable)."
    return int(os.environ.get('RANK', 0))

#export
def distrib_barrier():
    "Place a synchronization barrier in distributed training so that ALL sub-processes in the pytorch process group must arrive here before proceeding."
    if num_distrib() > 1: torch.distributed.barrier()

#export
# Saving arrays requires pytables - optional dependency
try: import tables
except: pass

#export
def _comp_filter(lib='lz4',lvl=3): return tables.Filters(complib=f'blosc:{lib}', complevel=lvl)

#export
@patch
def save_array(p:Path, o, complib='lz4', lvl=3):
    "Save numpy array to a compressed `pytables` file, using compression level `lvl`"
    if isinstance(o,Tensor): o = to_np(o)
    with tables.open_file(p, mode='w', filters=_comp_filter(lib=complib,lvl=lvl)) as f: f.create_carray('/', 'data', obj=o)
_____no_output_____
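Neither `pca` nor `logit` above is exercised in this notebook, so here is a minimal sketch of both (random data, so only the shapes and the round trip are checked):

```python
# project 10 random 5-d points onto their top 2 principal components
x = torch.randn(10, 5)
test_eq(x.pca(k=2).shape, (10, 2))

# logit inverts sigmoid (with clamping to avoid inf at 0 and 1)
v = tensor([-2., 0., 2.])
test_close(logit(torch.sigmoid(v)), v)
```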
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Compression lib can be any of: blosclz, lz4, lz4hc, snappy, zlib or zstd.
#export
@patch
def load_array(p:Path):
    "Load a numpy array from a `pytables` file"
    with tables.open_file(p, 'r') as f: return f.root.data.read()

inspect.getdoc(load_array)

str(inspect.signature(load_array))

#export
def base_doc(elt):
    "Print a base documentation of `elt`"
    name = getattr(elt, '__qualname__', getattr(elt, '__name__', ''))
    print(f'{name}{inspect.signature(elt)}\n{inspect.getdoc(elt)}\n')
    print('To get a prettier result with hyperlinks to source code and documentation, install nbdev: pip install nbdev')

#export
def doc(elt):
    "Try to use doc from nbdev and fall back to `base_doc`"
    try:
        from nbdev.showdoc import doc
        doc(elt)
    except: base_doc(elt)

#export
def nested_reorder(t, idxs):
    "Reorder all tensors in `t` using `idxs`"
    if isinstance(t, (Tensor,L)): return t[idxs]
    elif is_listy(t): return type(t)(nested_reorder(t_, idxs) for t_ in t)
    if t is None: return t
    raise TypeError(f"Expected tensor, tuple, list or L but got {type(t)}")

x = tensor([0,1,2,3,4,5])
idxs = tensor([2,5,1,0,3,4])
test_eq_type(nested_reorder(([x], x), idxs), ([idxs], idxs))

y = L(0,1,2,3,4,5)
z = L(i.item() for i in idxs)
test_eq_type(nested_reorder((y, x), idxs), (z,idxs))
_____no_output_____
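A round-trip sketch for `save_array`/`load_array` (assuming `pytables` is installed and that its `open_file` accepts `Path` objects, as the patches above pass one directly; the file name is arbitrary):

```python
p = Path('tmp_data.h5')
t = torch.randn(3,4)
p.save_array(t, complib='zstd', lvl=5)   # any of the blosc codecs listed above works here
test_close(p.load_array(), to_np(t))
p.unlink()                               # clean up the temporary file
```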
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Image helpers
#export
def to_image(x):
    if isinstance(x,Image.Image): return x
    if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
    if x.dtype==np.float32: x = (x*255).astype(np.uint8)
    return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])

#export
def make_cross_image(bw=True):
    "Create a tensor containing a cross image, either `bw` (True) or color"
    if bw:
        im = torch.zeros(5,5)
        im[2,:] = 1.
        im[:,2] = 1.
    else:
        im = torch.zeros(3,5,5)
        im[0,2,:] = 1.
        im[1,:,2] = 1.
    return im

plt.imshow(make_cross_image(), cmap="Greys");

plt.imshow(make_cross_image(False).permute(1,2,0));

#export
def show_image_batch(b, show=show_titled_image, items=9, cols=3, figsize=None, **kwargs):
    "Display batch `b` in a grid of size `items` with `cols` width"
    if items<cols: cols=items
    rows = (items+cols-1) // cols
    if figsize is None: figsize = (cols*3, rows*3)
    fig,axs = plt.subplots(rows, cols, figsize=figsize)
    for *o,ax in zip(*to_cpu(b), axs.flatten()): show(o, ax=ax, **kwargs)

show_image_batch(([Image.open(TEST_IMAGE_BW),Image.open(TEST_IMAGE)],['bw','color']), items=2)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Model init
#export
def requires_grad(m):
    "Check if the first parameter of `m` requires grad or not"
    ps = list(m.parameters())
    return ps[0].requires_grad if len(ps)>0 else False

tst = nn.Linear(4,5)
assert requires_grad(tst)
for p in tst.parameters(): p.requires_grad_(False)
assert not requires_grad(tst)

#export
def init_default(m, func=nn.init.kaiming_normal_):
    "Initialize `m` weights with `func` and set `bias` to 0."
    if func:
        if hasattr(m, 'weight'): func(m.weight)
        if hasattr(m, 'bias') and hasattr(m.bias, 'data'): m.bias.data.fill_(0.)
    return m

tst = nn.Linear(4,5)
tst.weight.data.uniform_(-1,1)
tst.bias.data.uniform_(-1,1)
tst = init_default(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, torch.ones(5,4))
test_eq(tst.bias, torch.zeros(5))

#export
def cond_init(m, func):
    "Apply `init_default` to `m` unless it's a batchnorm module"
    if (not isinstance(m, norm_types)) and requires_grad(m): init_default(m, func)

tst = nn.Linear(4,5)
tst.weight.data.uniform_(-1,1)
tst.bias.data.uniform_(-1,1)
cond_init(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, torch.ones(5,4))
test_eq(tst.bias, torch.zeros(5))

tst = nn.BatchNorm2d(5)
init = [tst.weight.clone(), tst.bias.clone()]
cond_init(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, init[0])
test_eq(tst.bias, init[1])

#export
def apply_leaf(m, f):
    "Apply `f` to children of `m`."
    c = m.children()
    if isinstance(m, nn.Module): f(m)
    for l in c: apply_leaf(l,f)

tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.Linear(4,5)))
apply_leaf(tst, partial(init_default, func=lambda x: x.data.fill_(1.)))
for l in [tst[0], *tst[1]]: test_eq(l.weight, torch.ones(5,4))
for l in [tst[0], *tst[1]]: test_eq(l.bias, torch.zeros(5))

#export
def apply_init(m, func=nn.init.kaiming_normal_):
    "Initialize all non-batchnorm layers of `m` with `func`."
    apply_leaf(m, partial(cond_init, func=func))

tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(5)))
init = [tst[1][1].weight.clone(), tst[1][1].bias.clone()]
apply_init(tst, func=lambda x: x.data.fill_(1.))
for l in [tst[0], tst[1][0]]: test_eq(l.weight, torch.ones(5,4))
for l in [tst[0], tst[1][0]]: test_eq(l.bias, torch.zeros(5))
test_eq(tst[1][1].weight, init[0])
test_eq(tst[1][1].bias, init[1])
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Multiprocessing
#export
from multiprocessing import Process, Queue

#export
def set_num_threads(nt):
    "Get numpy (and others) to use `nt` threads"
    try: import mkl; mkl.set_num_threads(nt)
    except: pass
    torch.set_num_threads(1)
    os.environ['IPC_ENABLE']='1'
    for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']: os.environ[o] = str(nt)

#export
@delegates(concurrent.futures.ProcessPoolExecutor)
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
    def __init__(self, max_workers=None, on_exc=print, **kwargs):
        self.not_parallel = max_workers==0
        self.on_exc = on_exc
        if self.not_parallel: max_workers=1
        super().__init__(max_workers, **kwargs)

    def map(self, f, items, *args, **kwargs):
        g = partial(f, *args, **kwargs)
        if self.not_parallel: return map(g, items)
        try: return super().map(g, items)
        except Exception as e: self.on_exc(e)

#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=True, **kwargs):
    "Applies `func` in parallel to `items`, using `n_workers`"
    with ProcessPoolExecutor(n_workers) as ex:
        r = ex.map(f,items, *args, **kwargs)
        if progress:
            if total is None: total = len(items)
            r = progress_bar(r, total=total, leave=False)
        return L(r)

def add_one(x, a=1):
    time.sleep(random.random()/100)
    return x+a

inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2), exp)
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))

#export
def run_procs(f, f_done, args):
    "Call `f` for each item in `args` in parallel, yielding `f_done`"
    processes = L(args).map(Process, args=arg0, target=f)
    for o in processes: o.start()
    try: yield from f_done()
    except Exception as e: print(e)
    finally: processes.map(Self.join())

#export
def parallel_gen(cls, items, n_workers=defaults.cpus, as_gen=False, **kwargs):
    "Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
    batches = np.array_split(items, n_workers)
    idx = np.cumsum(0 + L(batches).map(len))
    queue = Queue()
    def f(batch, start_idx):
        for i,b in enumerate(cls(**kwargs)(batch)): queue.put((start_idx+i,b))
    def done(): return (queue.get() for _ in progress_bar(items, leave=False))
    yield from run_procs(f, done, L(batches,idx).zip())
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`cls` is any class with `__call__`. It will be passed `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split into `n_workers` batches and one is sent to each `cls`. The function yields tuples of item indices and results, so sorting on the index (as in the test below) restores the original order of `items`.
class SleepyBatchFunc:
    def __init__(self): self.a=1
    def __call__(self, batch):
        for k in batch:
            time.sleep(random.random()/4)
            yield k+self.a

x = np.linspace(0,0.99,20)
res = L(parallel_gen(SleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
autograd jit functions
#export
def script_use_ctx(f):
    "Decorator: create jit script and pass everything in `ctx.saved_variables` to `f`, after `*args`"
    sf = torch.jit.script(f)
    def _f(ctx, *args, **kwargs): return sf(*args, *ctx.saved_variables, **kwargs)
    return update_wrapper(_f,f)

#export
def script_save_ctx(static, *argidx):
    "Decorator: create jit script and save args with indices `argidx` using `ctx.save_for_backward`"
    def _dec(f):
        sf = torch.jit.script(f)
        def _f(ctx, *args, **kwargs):
            if argidx:
                save = [args[o] for o in argidx]
                ctx.save_for_backward(*save)
            if not argidx: args = [ctx]+list(args)  # nothing saved: pass the ctx itself through to the scripted function
            return sf(*args, **kwargs)
        if static: _f = staticmethod(_f)
        return update_wrapper(_f,f)
    return _dec

#export
def script_fwd(*argidx):
    "Decorator: create static jit script and save args with indices `argidx` using `ctx.save_for_backward`"
    return script_save_ctx(True, *argidx)

#export
def script_bwd(f):
    "Decorator: create static jit script and pass everything in `ctx.saved_variables` to `f`, after `*args`"
    return staticmethod(script_use_ctx(f))

#export
def grad_module(cls):
    "Decorator: convert `cls` into an autograd function"
    class _c(nn.Module):
        def forward(self, *args, **kwargs): return cls.apply(*args, **kwargs)
    return _c
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
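Taken together, these decorators streamline writing custom autograd functions. Below is a minimal sketch (not from the notebook; `_Square` and `Square` are hypothetical names, and `torch` plus the `grad_module` defined above are assumed in scope) using a plain `torch.autograd.Function` to show the calling pattern the decorators automate — tensors saved in the forward pass arrive after the gradients in the backward pass — with `grad_module` then turning the Function into a usable `nn.Module`:

from torch.autograd import Function

class _Square(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)   # what `script_fwd(0)` would do automatically
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors     # a `script_bwd` function receives saved vars after the grads
        return 2. * x * grad_out   # d(x^2)/dx = 2x

Square = grad_module(_Square)      # wrap the Function so it can be called like a module
m = Square()
y = m(torch.randn(3, requires_grad=True)).sum()
y.backward()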
Export -
#hide from nbdev.export import notebook2script notebook2script()
Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb.
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Soft Computing Exercise 1 - Digital image, computer vision, OpenCV

OpenCV: an open-source library for the field of computer vision. Documentation is available here.

matplotlib: a plotting library for the Python programming language and its numerical package NumPy. Documentation is available here.

Loading an image: the OpenCV method for loading an image from disk is imread(path_to_image), which takes the path to the image on disk as its parameter. The loaded image img is actually a NumPy matrix, whose dimensions depend on the nature of the image itself. If the image is in color, img is a three-dimensional matrix: the first two dimensions are the image height and width, and the third dimension has size 3, because it represents color (RGB, one channel for each primary color).
import numpy as np
import cv2                 # the OpenCV library
import matplotlib
import matplotlib.pyplot as plt

# draw images and plots inside the browser itself
%matplotlib inline

# display larger figures
matplotlib.rcParams['figure.figsize'] = 16,12

img = cv2.imread('images/girl.jpg')         # load the image from disk
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert from the BGR to the RGB color model (OpenCV loads images as BGR)
plt.imshow(img)                             # display the image
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Displaying the image dimensions
print(img.shape)  # shape is a NumPy array property that gives the dimensions
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Note that a color image has 3 components for every pixel in the image - R (red), G (green) and B (blue).![images/cat_rgb.png](images/cat_rgb.png)
img
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Note that each element of the matrix is **uint8** (unsigned 8-bit integer), i.e. an integer value in the interval [0, 255]. One consequence of this - arithmetic that wraps around - is illustrated below.
img.dtype
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
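Because the elements are uint8, arithmetic silently wraps around modulo 256. A small illustration (not part of the original notebook) of why adding a constant can produce surprising values, while the inversion 255 - img used further below always stays in range:

a = np.array([200, 16], dtype=np.uint8)
print(a + 100)   # [ 44 116] -- 200+100 = 300 wraps around to 44 (mod 256)
print(255 - a)   # [ 55 239] -- the result always stays inside [0, 255]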
Basic operations with NumPy

Representing an image as a NumPy array is very useful, because it allows simple manipulation and basic operations on the image.

Cropping (crop)
img_crop = img[100:200, 300:600]  # the first coordinate goes along the height (formally the row), the second along the width (formally the column)
plt.imshow(img_crop)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Flipping (flip)
img_flip_h = img[:, ::-1]   # the first coordinate stays the same, the columns are taken in reverse order
plt.imshow(img_flip_h)

img_flip_v = img[::-1, :]   # the second coordinate stays the same, the rows are taken in reverse order
plt.imshow(img_flip_v)

img_flip_c = img[:, :, ::-1]  # we can also reverse the channel order (RGB->BGR), though how much sense that makes is another question
plt.imshow(img_flip_c)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Inverting
img_inv = 255 - img  # fine while pixels are in the interval [0,255]; for pixels in [0.,1.] it would be 1. - img
plt.imshow(img_inv)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Converting from RGB to "grayscale"

Converting from the RGB model to shades of gray (grayscale) loses the color information of the pixels, but the image itself becomes much easier to process further. This can be done in several ways:

1. **Average value** of the RGB components - the simplest variant $$ G = \frac{R+G+B}{3} $$
2. **Lightness method** - the average of the strongest and the weakest color $$ G = \frac{max(R,G,B) + min(R,G,B)}{2} $$
3. **Perceptual luminosity method** - a weighted average that takes human perception into account (e.g. we are most sensitive to green, so that should be weighted accordingly) $$ G = 0.21*R + 0.72*G + 0.07*B $$

The perceptual method is implemented below; quick sketches of the first two follow it.
# implementation of the perceptual luminosity method
def my_rgb2gray(img_rgb):
    img_gray = np.ndarray((img_rgb.shape[0], img_rgb.shape[1]))  # allocate memory for the image (there is no third dimension)
    img_gray = 0.21*img_rgb[:, :, 0] + 0.72*img_rgb[:, :, 1] + 0.07*img_rgb[:, :, 2]  # the weights from the formula above (they sum to 1)
    img_gray = img_gray.astype('uint8')  # the previous step multiplied by floats, so we must convert back to the [0,255] range

    return img_gray

img_gray = my_rgb2gray(img)
plt.imshow(img_gray, 'gray')  # when displaying an image that is not RGB, 'gray' must be passed as the second parameter
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
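For comparison, the other two methods above can be sketched the same way (these helpers are not part of the original notebook; they assume the same img loaded earlier):

# method 1: average of the RGB components (divide each channel first to avoid uint8 overflow)
def my_rgb2gray_avg(img_rgb):
    return (img_rgb[:, :, 0]/3 + img_rgb[:, :, 1]/3 + img_rgb[:, :, 2]/3).astype('uint8')

# method 2: lightness -- average of the strongest and the weakest component
def my_rgb2gray_lightness(img_rgb):
    mx = img_rgb.max(axis=2).astype('float64')  # cast up so max+min cannot overflow
    mn = img_rgb.min(axis=2)
    return ((mx + mn)/2).astype('uint8')

plt.imshow(my_rgb2gray_avg(img), 'gray')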
Still, it is best to stick with the implementation in the **OpenCV** library :).
img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
img_gray.shape
plt.imshow(img_gray, 'gray')
img_gray
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit