CLEVELAND, Ohio -- Move ahead, go forward and on to the new year. There are a ton of NYE shebangs going down in bars and clubs. Here are some. Good luck to you all in 2016...

A new year, a centennial
More than just another year, this soiree rings in a century of excellence. Propose a toast to the Cleveland Museum of Art, which kicks off its 2016 centennial celebration with a New Year's Eve party. The bash will allow partyers to explore the galleries and see Painting the Modern: Monet to Matisse. It's also a dance party, with music by King Britt and Cleveland DJ MisterBradleyP. Admission is $40; $30 for CMA members. For info and tickets, go here. Note: Painting the Modern is up through Jan. 5 - if you haven't seen it, well, you really should.

Casino countdown
It's like Vegas, without having to pay for the airfare. Ding, ding, ding, it's the 2016 New Year's Eve Party at the Slush Bar in Thistledown Racino. The casino countdown starts at 7 p.m. in the racino, 21501 Emery Road, Cleveland. It features live music by the Benjaminz, as well as King Lou and DJ Haz Matt. You also get a champagne toast. The semi-formal soiree runs until 2 a.m. Free. Go to caesars.com/thistledown/events/NYE or call 216-662-8600.

It's raining walleye!
Ball drops? Yawn. Glitzy soirees? Zzzz ... It's time for something different, a totally new way to ring in the new year. Yup, it's the Madness at Midnight Walleye Drop. Over 20 years, the event has become a ritual in downtown Port Clinton, aka the Walleye Capital of the World -- 78 miles west of Cleveland. The 20-foot, 600-pound papier-mache fish is dropped 50 feet to "reel in the new year." Port Clinton eateries join in by serving specialties such as walleye chowder, even a "Walleye White" wine. For info, go to walleyemadness.com.

Puttin' on the pops
For 20 years, Carl Topilow and the Cleveland Pops Orchestra have been bringing a bit of Broadway to Cleveland on New Year's Eve. This year's edition of the annual show will feature the Pops performing show tunes and standards from productions such as "Jersey Boys," "Tommy," "Jekyll and Hyde" and "Les Miserables" - as well as orchestra medleys from "Star Wars" and "Downton Abbey." The two-hour show starts at 9 p.m. Thursday and is followed by dancing until 1 a.m. at Severance Hall. $31-$112. For tickets and info, go to clevelandpops.com or call 216-231-1111.

Looking forward, looking back
Looking to party like it's 1929? Or, perhaps, a 2015 take on another time... say one that resembles the Art Deco era? Blow your merry maker like crazy for DECO + DANCE at Mahall's, 13200 Madison Ave, Lakewood. The NYE soiree is modeled after the movies of Baz Luhrmann... You know, the director known for modern takes on the past in flicks such as "The Great Gatsby," "Moulin Rouge" and "Romeo and Juliet." DJ Himiko Gogo and DJ Road Chief will spin a wide range of dance music that includes Italo disco remixes, club, house and new wave. VJ Wes Johansen provides the videos. $5. Go to mahalls20lanes.com or call 216-521-3280.

Bust out laughing
You can laugh in the New Year at a few spots this year: Pickwick and Frolic -- 2035 East Fourth Street, Cleveland - rolls out several New Year's Eve packages at a variety of prices and places within the entertainment complex. Tommy Johnagin will play two shows, at 7:30 and 10:30 p.m. Admission to the early show is $25 and $30. Admission to the latter is $75 and $80 and includes a balloon drop, a champagne toast and an after-show with music by Nightbridge to go with hors d'oeuvres and party favors.
There is also a $50 party package, which includes admission to Kevin's Martini Bar & Tap Room, music by Nightbridge and the vintage Vegas-style Midnight Martini Show in the Frolic Cabaret. It also includes hors d'oeuvres and party favors. There is also a $129 dinner gala, which starts at 8:15 p.m. It includes the buffet dinner, the 10:30 comedy show, the Nightbridge show, the Midnight Martini Show, hors d'oeuvres, party favors and a balloon drop. Call 216-241-7425. Check out pickwickandfrolic.com for details and reservations. The Cleveland Improv - 1148 Main Avenue, on the west bank of the Cleveland Flats - will host a New Year's Eve bash headlined by Mike Polk Jr., the Cleveland guy who has just released a comedy disc, "Baselessly Arrogant: Live from a Bowling Alley." Polk will do two shows, at 7:30 and 10:30 p.m. The early show comes with a dinner option. $30-$55. For more info and tickets, go to clevelandimprov.com or call 216-696-4677. The Comedy Zone at the Hard Rock Rocksino -- 10777 Northfield Road, Northfield - will ring in 2016 with Cleveland native Pete George. He'll do two shows - 7 and 10 p.m. Thursday - as part of the casino's NYE festivities. Tickets $20-$25 for the early show; $30-$35 for the later one. For more info on the show and the Hard Rock's Masquerade Ball, go to hrrocksinonorthfieldpark.com/nye-masquerade-ball-.htm.

Party animals party for animals
Not to sound sanctimonious or, um, cheap, but some of the money that gets tossed around at these ritzy soirees is a bit much. What, you mean you agree? Well, how about a homey, low-cost party that benefits animals? Check out This Old House II at Clifton Martini and Wine Bar - that homey spot that serves a stellar classic martini over at 10427 Clifton Boulevard, Cleveland. The dance-party-meets-cocktail bash will dovetail a chill vibe with a good cause. No cover, but donations will be accepted for the Mid-Ohio Animal Welfare League - a volunteer group that spays and neuters dogs and cats and provides homes for them. Sleepy C and Kevin Bumpers will DJ. The party starts at 10 p.m. Thursday. Clifton Martini is also doing a dinner at 5. Go to cliftonmartini.com or call 216-965-0221.

A just-another-night party
You can sneer at NYE as overblown, overpriced amateur night at home alone. Or you can do it in public, with a bunch of like-minded friends at Now That's Class. Yup, the punk joint -- 11213 Detroit Ave, Cleveland - will host Our (expletive) Annual NYE Party. The bash entails nothing special, except for some lively company around the bar - "so come on that anticlimactic and tedious night, and make it like it's any other night of the year," boasts NTC. Ah, but the joint is rolling out some frills: The jukebox will run on free credits all night, so you can play what you want without blowing your quarters. Drink prices will not be jacked up for the night, either. The "festivities" start at 5 p.m. Thursday. Free. Go to nowthatsclass.net or call 216-221-8576.
Branch artery occlusion. An unusual complication of external carotid embolization. A case of retinal branch artery occlusion was caused by migration of emboli, presumably via collateral circulation, during therapeutic embolization of the maxillary artery. Migration of particles to the ophthalmic circulation is unusual with embolization of the branches of the external carotid artery. Meticulous technique, careful angiographic monitoring, and proper selection of embolic material may reduce, but not eliminate, migration of emboli to undesirable locations. Therapeutic embolization of vascular tumors and malformations in the external carotid territory is a recent radiologic innovation that is becoming increasingly popular. Therefore, we may expect to see more ocular complications from aberrant emboli as the use of this technique becomes more widespread.
On the approximation of the boundary control of the wave equation with numerical viscosity This article deals with the approximation of the boundary control of the 1-D linear wave equation. Due to the spurious high frequencies, the semi-discrete models obtained with finite difference or classical finite element methods are not uniformly controllable as the discretization parameter h goes to zero. We propose a new strategy for the approximation of the boundary control based on the addition of a numerical vanishing viscous term. This will damp out the spurious high frequencies and will ensure the existence of a convergent sequence of approximate controls. We present an approximation algorithm and some numerical experiments.
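A minimal sketch of the mechanism described in this abstract, assuming a centered finite-difference semi-discretization on a uniform grid and a vanishing viscous term proportional to h² acting on the time derivative; this is an illustration of the general idea, not the authors' exact scheme, and all parameter values are hypothetical.

```python
# Hedged sketch (not the paper's exact scheme): semi-discrete 1-D wave equation
# u_tt = u_xx on (0,1) with homogeneous Dirichlet ends, centered finite differences,
# plus a vanishing numerical viscosity term  h^2 * Delta_h(u_t)  that damps the
# spurious high-frequency modes responsible for the loss of uniform controllability.
import numpy as np

def simulate(n=100, T=2.0, dt=1e-4, viscous=True):
    h = 1.0 / (n + 1)
    # 1-D discrete Laplacian (Dirichlet) acting on interior nodes
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    x = np.linspace(h, 1.0 - h, n)
    u = np.sin(np.pi * x) + 0.2 * np.sin(n * np.pi * x / (n + 1))  # low + high frequency
    v = np.zeros(n)                                                # velocity u_t
    for _ in range(int(T / dt)):
        a = lap @ u
        if viscous:
            a = a + h**2 * (lap @ v)    # vanishing viscosity, O(h^2), acting on u_t
        v = v + dt * a                   # semi-implicit (symplectic) Euler update
        u = u + dt * v
    # discrete energy: kinetic plus potential part
    return 0.5 * h * (v @ v) + 0.5 * h * (u @ (-(lap @ u)))

print("energy without viscosity:", simulate(viscous=False))
print("energy with viscosity   :", simulate(viscous=True))
```

Running the sketch shows the viscous variant selectively dissipating the high-frequency component while the low-frequency part of the solution is essentially preserved, which is the qualitative behavior the abstract relies on.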
"We have a town board in place that wants to protect history," said Zande. The most recent house that was given the landmark designation is 745 N. Main St. in downtown Mooresville. Normally, a homeowner asks the board for his house to become a landmark. The benefit for the homeowner is that property taxes are based on half the tax value of the home. The state historic preservation office determines the tax credit. There are guidelines for a home to be considered a landmark by the commission. The building must be at least 50 years old and not altered from its original state. After a homeowner applies for the historic designation, the commission determines approval as a landmarked site. The benefit to the town is that demolition can be delayed up to a year in order for the town and the commission to work with the owner to find alternatives. After a building is landmarked, the commission gets a say in any exterior improvements. Zande said he believes the commission's job is to ramp up the effort to protect historic sites. The Mooresville Historic Preservation Commission is trying to get nonprofit designation, and it raises money every year through a tour of homes. The tour of homes will be held Oct. 22-23. . Each building landmarked by the commission receives a plaque, and money raised covers the cost of those plaques. Additionally, the commission members want to buy signs to designate the specific neighborhoods such as Mill Village and the Cascades. The commission would like to have money to buy old homes, save and restore them, and re-sell them. "We are really moving as quickly as possible to preserve the history that is here. Mooresville is a great place, with great schools, and a great history. We are stewards of this time frame," said Zande.
Although I am aware that Slate (despite the global pretensions of cyberspace) has a primarily North American audience, and that North Americans (despite their pretensions to multiculturalism) are generally unified by their dislike/disdain for the British, perhaps I can convince a few of you, nevertheless, to spare a few moments of sympathy today for the Ulster Unionists, Northern Ireland’s largest Protestant political party. So attached to blood and soil and tradition that they still display a Union Jack on their Web site —the BBC wouldn’t be caught dead doing something so retrograde—the Ulster Unionists have just this weekend been bullied by their British cousins into signing yet another historic accord, agreeing to return to yet another power-sharing arrangement in Belfast with Sinn Fein, the political wing of the Irish Republican Army. For those of you (most of you, I suspect) who have been following the ups and downs of the Nasdaq far more closely than the ins and outs of the Northern Irish “peace process,” this means that the Unionists, who ended their participation in the Belfast miniparliament last February after the IRA consistently refused to comply with demands to give up its weapons, have just narrowly, reluctantly, voted to rejoin it, after an enormous internal battle. This is because an IRA statement on May 8 at last came out in favor of “a process that will completely and verifiably put IRA arms beyond use.” The statement was signed by P. O’Neill—Paddy O’Neill—which is the pseudonym the IRA traditionally used when it took responsibility for bombing civilians or “personnel,” as it traditionally describes British soldiers. Despite O’Neill’s participation, this statement provoked the weight of the British government, the media, and pretty much everyone else to fall squarely upon the heads of the Unionists, to pressure them to start playing the game again. Peter Mandelson, Tony Blair’s Northern Ireland minister, was pretty direct: “Something better is not going to turn up,” he told the Unionists. “[T]he negotiations are over. It’s time to make a choice.” The Irish prime minister, Bertie Ahern, told the Unionists to stop “obstructing” democracy. A BBC feature on Ulster yuppies living in London pretty much summed up the general British view of the problem: Clever young Protestants leave Belfast, boring old farts stay at home and battle for the right to stay British, and wouldn’t we be better off if everyone over there went to work for an online company or a bank. Yet the Unionists were right to be wary of going back to sharing power with Sinn Fein/IRA in Belfast, as a touch of explanatory semiotics reveals. In fact, “putting weapons beyond use” is a euphemism: What the myriad participants in the Northern Irish “peace process” have theoretically been trying to do is persuade the IRA to “decommission” its weapons. Indeed, an agreement to decommission was part of the April 1998 Good Friday agreement that set up the power-sharing arrangement in Belfast in the first place. Yet “decommission,” a word deliberately chosen for its neutrality, is itself a euphemism for “disarm.” And “disarm” is a euphemism too: What the Unionists really and truly want is for the IRA to promise to stop bombing people, forever. Such a promise, however, would be interpreted by the IRA as “surrender.” And “surrender”—to give up the armed struggle to reunite Ireland—is precisely what the IRA has consistently refused to do. 
Even IRA members’ “beyond use” statement from earlier this month makes that position pretty clear. They stick to their definition of “peace”: a unified Ireland. They stick to their definition of “causes of conflict”: the Britishness of Northern Ireland (although the Britishness of Northern Ireland is, of course, supported by a majority of its inhabitants). They berate those who would “abuse the peace process” by pursuing “the aim of defeating the IRA” (translation: How dare you demand that we give up terrorism). They offer to allow an independent international commission to inspect their arms dumps to ensure that their weapons remain “silent and secure” (translation: We are fully prepared to use weapons again if we need to). Reading this sort of thing, you would be hard pressed to guess that the actual “cause of conflict” in Northern Ireland is the IRA. The equation is pretty simple: No IRA, no terrorism. No terrorism, no conflict. Yet even that convoluted formulation was, it is now being said, too much for a few of the IRA men, who have left to join the “Real IRA,” a splinter group that has already distinguished itself by its skill with Semtex. All of this the Unionists knew perfectly well, but their choice wasn’t an easy one. They could rejoin the Belfast provincial government as a nonviolent political party sharing power with a political party backed by a few hundred “Paddy O’Neills” who still reserve the right to bomb everyone to kingdom come if things don’t go their way. Or the Unionists could refuse to cooperate, in which case they would have been called things like “retrograde” and “boring” and “irritating.” In the world of Northern Irish politics—a world where language is everything—what kind of choice was that? Back they went to the “peace process.” But if the “peace process” remains democratic—if, that is, it continues to respect the will of the inhabitants of Northern Ireland, who continue to want, for some unknown reason, to remain British—you can bet (pardon the metaphor) that it will explode once again.
Wanless report sparks debate on funding of health service NHS leaders are seeking to scale down public and political expectations of how fast the health service can turn extra cash into tangible improvements as the debate on funding gets under way. Doctors, managers, and policy makers all warned that it may take many years before results emerge from the extra £1bn ($1.4bn) for 2002-3 and continued long term increases for the NHS, announced by Chancellor Gordon Brown last week. They were anxious to ensure that the slow pace of change did not undermine commitment to an
Simulations of Aerodynamic Separated Flows Using the Lattice Boltzmann Solver XFlow: We present simulations of turbulent detached flows using the commercial lattice Boltzmann solver XFlow (by Dassault Systemes). XFlow's lattice Boltzmann formulation together with an efficient octree mesh generator substantially reduce the cost of generating complex meshes for industrial flows. In this work, we challenge these meshes and quantify the accuracy of the solver for detached turbulent flows. The good performance of XFlow when combined with a Large-Eddy Simulation turbulence model is demonstrated for different industrial benchmarks and validated using experimental data or fine numerical simulations. We select five test cases: the Backward-facing step, the Goldschmied Body, the HLPW-2 (2nd High-Lift Prediction Workshop) full aircraft geometry, a NACA0012 under dynamic stall conditions and a parametric study of leading-edge tubercles to improve stall behavior on a 3D wing.

Introduction
Aerodynamic performance plays a major role in the design process of an aircraft. The aeronautical industry has been using Computational Fluid Dynamics (CFD) as a complement to wind tunnel measurements, which are typically executed at the end of the production cycle. CFD studies present a strong advantage, as they allow several design iterations before manufacturing the product and therefore help to decrease manufacturing costs drastically. The analysis of an aircraft relies on its aerodynamic performance in the linear regime, i.e., at low angles of attack. However, aircraft must also maintain aerodynamic performance at high angles of attack, to ensure safety and stability at take-off, at landing, or under extreme weather conditions. Presently, Navier-Stokes (N-S) solvers (using Reynolds Averaged Navier-Stokes (RANS) turbulence models) are able to predict, with high accuracy, linear aerodynamic performance at low angles of attack. However, these solvers face strong difficulties and limitations when predicting aerodynamics near stall, which can be explained by the following reasons. First, the wing geometry of an aircraft in high-lift configuration is complex since the flap and slat are deflected, and thus it is extremely difficult to mesh the gap between the wing and the flap or slat. Second, the flow is highly unsteady and separated, and standard turbulence models (such as RANS) fail to predict flow separation accurately. These two reasons make solution convergence difficult or even impossible in most cases. Please note that the dependence of solution accuracy on mesh quality in N-S solvers has been widely explored in the past. Traditional CFD, using RANS closure models, has been employed for decades in industry, where robust and efficient simulation processes were established. Along these lines, several numerical treatments were proposed to enhance the robustness of the N-S equations in complex engineering problems. For example, treatments for pressure-velocity decoupling in incompressible flows, multiphase multi-component flows or shock-turbulence interaction helped to obtain flow solutions and aerodynamic coefficients in complex flows. However, these traditional solvers face limitations, especially related to the mesh, to modelling moving geometries with complex motions, or when strongly unsteady and separated flows are involved. The lattice Boltzmann Method (LBM) presents an alternative to classic finite volume methods and shows promise to address some of these limitations, such as detached unsteady flows.
Several works presented promising LBM simulations of separated flows over complex geometries. More challenging simulations including moving geometries, as well as fluid-structure interactions, have also been addressed. To illustrate the ability of the LBM to solve industrial problems with detached flows, XFlow is used to simulate five cases: the Backward-facing step, the Goldschmied Body, the HLPW-2 (2nd High-Lift Prediction Workshop), dynamic stall for a NACA0012 and a 3D wing with leading-edge tubercles. The content of this text is organized as follows. First, Section 2 introduces the LBM with the different models implemented in XFlow. Then, Section 3 describes the various test cases with the XFlow setup and numerical results. Finally, Section 4 summarizes the main conclusions of the work.

Lattice Boltzmann Method
The lattice Boltzmann method solves Boltzmann's transport equation using probability distribution functions, f, to describe the mesoscopic scale of fluid flow. The method uses a collision-streaming scheme to solve the discrete Boltzmann equation

f_i(x + e_i ∆t, t + ∆t) − f_i(x, t) = Ω_i(x, t),  i = 1, …, Q,

where f_i is the probability distribution function in the direction i, Q is the number of velocity directions, x is the position vector, e_i = (e_ix, e_iy, e_iz) is the discrete velocity direction vector and Ω_i is the collision operator. The first two terms constitute the streaming step while the right-hand side contains the collision operator. The stream-and-collide approach is posed over lattices, which are constructed of Cartesian points with a discrete set of velocity directions. The lattice scheme is usually denoted as DnQm, where n represents the dimension of the problem and m the number of velocity directions. XFlow is an LBM solver based on the D3Q27 lattice scheme with a central moment collision operator, which enhances the numerical stability compared with standard LBM. The macroscopic variables are calculated using the statistical moments

Π_{x^k y^l z^m} = Σ_i f_i (e_ix)^k (e_iy)^l (e_iz)^m,

where k, l, and m are the orders of the moments in the x, y, and z directions, respectively, which leads to the moment order k + l + m. For example, the moment of order zero provides the density, ρ = Σ_i f_i, and the moment of order one provides the momentum, ρu = Σ_i f_i e_i.

Octree Lattice Structure
The lattice structure included in XFlow is the D3Q27 organized as an octree structure. The octree structure provides the possibility to use non-uniform lattice structures, and therefore to include different spatial scales at different locations of the flow domain. The size of a grid element is related to its refinement level L as ∆x = H/2^L, where H is the maximum length of the enclosing computational domain. A volumetric-based approach is adopted, in which the distribution functions f_i are assumed to be located at the barycenter of the lattice. When neighbor elements are on a different level, direct advection cannot be performed and, instead, ghost elements that act as place-holders to provide interpolated values for fluid elements are introduced at the level interfaces. The XFlow pre-processor generates the initial octree lattice structure based on the input geometries, the user-specified lattice resolution for each geometry, as well as the farfield resolution. User-defined regions (sphere, box, cylinder, etc.) can also be created to refine arbitrary regions at the specified lattice resolution. The different spatial scales employed are hierarchically arranged.
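As a purely illustrative aside, the spacing relation ∆x = H/2^L just given can be tabulated in a few lines; the domain size below is a hypothetical value, not one taken from the text.

```python
# Illustrative only: lattice spacing per octree refinement level, dx = H / 2**L.
H = 30.0  # hypothetical maximum length of the enclosing computational domain [m]
for L in range(6):
    print(f"refinement level {L}: dx = {H / 2**L:.4f} m")
```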
Each level solves spatial and temporal scales twice smaller than the previous level, thus forming the aforementioned octree structure (see Figure 1). This is particularly efficient as the ratio ∆x/∆t can be maintained in the entire fluid domain, favouring local time stepping approaches. XFlow uses local time stepping, so that every lattice in the fluid domain always has an adapted time step. In finite volume methods, a global time step is typically defined, which is inefficient when a variety of cell sizes exists in the mesh, which is especially true for geometry-conforming meshes. It must be noted, however, that techniques exist to alleviate these requirements, e.g., dual time-stepping or implicit-explicit time-stepping, where implicit time advancement is used near the wall and explicit Runge-Kutta away from the wall, easing the strict time step needed by the explicit method and the high cost of the implicit method. The initial lattice structure can be modified during the simulations by the XFlow solver based on several criteria. First, if the computational domain changes due to the presence of moving geometries, the lattice can dynamically be refined to follow the new position of the geometry every time step. Other adaptive refinement criteria, which adapt to the flow physics, are also available. A refinement algorithm based on the level of vorticity is effective to dynamically refine the wake region (Figure 1), characterized by high vorticity. Wake refinement is activated when the local dimensionless vorticity exceeds a threshold value ω*, defined in terms of the lattice scales ∆x and ∆t. Additionally, the wake distance is controlled by a defined distance from the object up to which the wake refinement will take place. The Manhattan distance (the distance between two points in a grid based on a strictly horizontal and/or vertical path) is used to calculate the distance from the object to any lattice node in order to impose this condition.

Collision Operator
The collision operator, also called the scattering operator, relaxes the system to an equilibrium state. The single-relaxation time based on the Bhatnagar-Gross-Krook (BGK) approximation is the most popular approach. The BGK collision operator is still common but has several limitations for high Reynolds number flows. The multiple-relaxation time with raw moments (MRT-RM) was developed to overcome some limitations of the BGK method. This approach performs the collisions in momentum space instead of in velocity space. The increased flexibility in the selection of the relaxation parameters results in enhanced stability when compared with the BGK approach. Despite its increased stability, the MRT-RM still shows instabilities for small viscosities, due to the lack of Galilean invariance. The multiple-relaxation time with central moments (MRT-CM) improves some of the shortcomings of the MRT-RM by redefining the moments with respect to a local velocity. When shifting the discrete velocities using the local velocity, the method provides a higher degree of Galilean invariance, compared with the MRT-RM approach, enhancing stability. The BGK and raw-moment MRT collision operators can be written as

Ω_i^BGK = −(f_i − f_i^eq)/τ,   Ω_i^MRT-RM = −(M⁻¹ s M)_ij (f_j − f_j^eq),

with the central-moment variant applying the same moment-space relaxation in a frame shifted by the local velocity. Here τ is the relaxation time and f_i^eq is the local equilibrium function, which is based on the Maxwell-Boltzmann distribution. In addition, M_ij is the transformation moment matrix and s_ij is the diagonal relaxation matrix; their implementations are based on the work of Premnath and Banerjee.
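To make the stream-and-collide loop and the BGK operator above concrete, the following is a minimal, self-contained D2Q9 sketch in Python with a single relaxation time; it is didactic only, not XFlow's D3Q27 central-moment implementation, and all numerical values are hypothetical.

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch: periodic box, single relaxation time.
# Didactic only -- XFlow itself uses a D3Q27 lattice with a central-moment MRT operator.
import numpy as np

nx, ny, tau = 64, 64, 0.8          # lattice size and relaxation time (hypothetical)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])          # discrete velocities e_i
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                     # lattice weights

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in the velocity."""
    eu = np.einsum('id,xyd->ixy', e, u)
    uu = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*uu)

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[:, :, 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)         # small shear perturbation
f = equilibrium(rho, u)

for _ in range(200):
    # collision: relax toward local equilibrium (BGK), Omega_i = -(f_i - f_i^eq)/tau
    f += -(f - equilibrium(rho, u)) / tau
    # streaming: shift each population along its discrete velocity direction
    for i, (ex, ey) in enumerate(e):
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    # macroscopic moments: order zero gives density, order one gives momentum
    rho = f.sum(axis=0)
    u = np.einsum('ixy,id->xyd', f, e) / rho[..., None]

print("mean density:", rho.mean())
```

Replacing the single-relaxation BGK line with a relaxation carried out in moment space is precisely what the MRT variants discussed next do.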
While most LBM collision operators are based on the BGK approach, XFlow uses an MRT collision operator implemented in central-moment space, which benefits from low numerical dissipation. This MRT implementation allows reaching higher Mach and Reynolds numbers than the classic BGK approximation. Please note that this feature was exploited by the authors of this text to propose enhanced MRT operators to compute turbulent under-resolved flows. The interested reader is also referred to the recent review of collision operators.

Turbulence Modeling
Turbulence closure is provided by a Large Eddy Simulation (LES) technique. LES introduces an additional flow viscosity (varying in space and time), called the turbulent eddy viscosity ν_t, to model the under-resolved subgrid turbulence. The LES model implemented in XFlow is the Wall-Adapting Local Eddy-viscosity (WALE) model, which provides a consistent local eddy viscosity and near-wall behavior. It is formulated as

ν_t = ∆_f² (S^d_ij S^d_ij)^{3/2} / [ (S_ij S_ij)^{5/2} + (S^d_ij S^d_ij)^{5/4} ],

where ∆_f = C_w ∆x is the filter scale based on the lattice size ∆x, S is the strain rate tensor of the resolved scales, S^d is the traceless symmetric part of the square of the resolved velocity gradient tensor, and the constant C_w is typically 0.325. The strain rate tensor, g, is locally available with the LBM as a second-order moment, which makes the implementation of state-of-the-art LES models efficient. The formulae for the strain rate tensor are different in the BGK and MRT models. Indeed, previous works provided details on how to compute the strain rate tensor accurately in the MRT-LBM. In XFlow, the components of the strain rate tensor are obtained through the non-equilibrium moments in its cascaded formulation. Furthermore, the Cartesian lattice structure suits the LES turbulence model well, since LES requires cells with a proportioned aspect ratio for isotropic turbulence outside boundary layers. The lattice completely fulfils this requirement. The LBM is typically unsteady and hence is efficient when considering unsteady flows, such as LES or DNS. If steady RANS solutions are sought, steady RANS Navier-Stokes solvers are typically more efficient than time-resolving LBM (e.g., low floating point operations, excellent data locality, ability to vectorize and multi-thread operations). However, if a time-resolved LES solution is to be found, LBM has demonstrated higher efficiency than Navier-Stokes solvers for a similar accuracy. For example, Barad et al. presented a fair comparison between Navier-Stokes and LBM with the same setup (Cartesian grid, turbulence model, wall treatment) and demonstrated that LBM is 12-15 times faster than Navier-Stokes, for this unsteady flow.
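The following hedged sketch evaluates a WALE-type subgrid viscosity from a resolved velocity gradient at a single lattice node, using the standard Nicoud-Ducros form with C_w = 0.325 as quoted above; XFlow's actual implementation, which works through LBM non-equilibrium moments, may differ in detail, and the sample gradient and lattice size are assumptions.

```python
# Hedged sketch of the WALE subgrid viscosity (standard Nicoud & Ducros form).
# Inputs: g = resolved velocity gradient tensor du_i/dx_j at one lattice node,
#         dx = local lattice spacing. Cw = 0.325 as quoted in the text.
import numpy as np

def wale_viscosity(g, dx, Cw=0.325):
    S = 0.5 * (g + g.T)                        # resolved strain-rate tensor
    g2 = g @ g                                 # square of the gradient tensor
    Sd = 0.5 * (g2 + g2.T) - np.eye(3) * np.trace(g2) / 3.0  # traceless symmetric part
    SS, SdSd = np.sum(S * S), np.sum(Sd * Sd)
    delta_f = Cw * dx                          # filter scale, Delta_f = Cw * dx
    return delta_f**2 * SdSd**1.5 / (SS**2.5 + SdSd**1.25 + 1e-30)

# Hypothetical usage with an arbitrary velocity gradient and a 10 mm lattice:
g = np.array([[0.0, 2.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print("nu_t =", wale_viscosity(g, dx=0.01), "m^2/s")
```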
Near-Wall Treatment
In addition to the LES turbulence modeling, XFlow uses a wall function to model the near-wall region and boundary layer, and therefore employs the so-called Wall-Modeled LES approach (WMLES). The lattice structure's isotropy would lead to an unreasonably high number of elements to resolve the boundary layer. This issue is addressed by using a generalized law of the wall. The boundary layer is modeled by the generalized law of the wall given by Shih et al., based on previous work by Tennekes and Lumley, which takes into account the effect of adverse and favorable pressure gradients. This unified law has the form

U = u_τ f_1(y u_τ/ν) + u_p f_2(y u_p/ν),

where y is the normal distance from the wall, u_τ is the skin-friction velocity, τ_w is the wall shear stress, dp_w/dx is the wall pressure gradient with x the local tangential-to-the-wall direction, u_p is a velocity scale based on the wall streamwise pressure gradient and U is the mean velocity at a distance y from the wall. The interpolating functions f_1 and f_2 are shown in Figure 2. The velocity field normal to the boundary layer is obtained through the y+ variable, which depends on the distance between the first lattice on the wall, y, and the velocity of this first lattice, u_c. Please note that XFlow projects the set of discrete velocities on the geometry tessellation to obtain the wall distance, as depicted in Figure 3. This implies a high level of detail for the geometry discretization, as one lattice node can detect up to 27 geometry projections (using the D3Q27 lattice scheme). These projected velocities are also used to calculate the curvature of the surface, which is taken into account in the wall function.
Figure 2. Unified laws of the wall.

Dynamic Geometries
The flexibility of the octree lattice structure and the advanced near-wall treatment proposed by XFlow allow addressing one of the most challenging features faced by traditional CFD: fluid-structure interactions. XFlow proposes two different options to handle dynamic geometries: the enforced behavior and the rigid body dynamics behavior. The enforced behavior moves a geometry based on input position and orientation laws, enforcing the motion of the object. The rigid body dynamics behavior couples the fluid equations resolved by XFlow with a rigid body solver allowing motions with up to 6 degrees of freedom. For both dynamic behaviors, the lattice structure is updated every time step to mark the lattice nodes that belong to the fluid region and those that belong to the solid region, as depicted in Figure 4. The discrete velocities are also projected every time step in order to compute the new distance to the wall required for the wall function. Moving and rotating geometries can be addressed with an immersed boundary method inspired by Strack's work. This method replaces broken links with a modified LBM collision operator, alleviating the pressure fluctuation. Here, an extension of the fluid region is created inside the geometry in which the velocity and pressure fields are solved to provide the correct wall boundary conditions. This requires computing the solid-covered fraction of the volume associated with each lattice. This computation is a simple lookup and does not need expensive triangle lookups, which results in an efficient and cost-effective method. The LBM takes into account the collision with the given solid fraction at each lattice and includes XFlow's law of the wall, allowing for accurate determination of the no-slip velocity and skin friction.

Simulations
We select five industrial cases of increasing complexity to validate XFlow under turbulent regimes and detached flow conditions. We include the backward-facing step, the Goldschmied Body, the HLPW-2 (2nd High-Lift Prediction Workshop), dynamic stall for a NACA0012 and a parametric study to improve wing stall using tubercles located at the leading edge.

Introduction
The backward-facing step problem is a fundamental test case to validate reattached and separated flows. The phenomenon of flow separation is a problem of great importance for fundamental and industrial reasons. Armaly et al. presented a detailed experiment on the backward-facing step geometry.
Depending on the Reynolds number, Re_D = UD/ν, the flow regime can be laminar or turbulent. Here D = 2h denotes the hydraulic diameter of the inlet channel, with h the step height. The flow is laminar when the Reynolds number is Re_D < 400 and it can be assumed to be a two-dimensional problem. Under these conditions a recirculation zone appears where a strong mixing process takes place. The flow becomes three-dimensional when the Reynolds number is Re_D > 400. Furthermore, when the Reynolds number is around Re_D ∼ 1200, the flow undergoes a transition process from laminar to turbulent flow. The simulation is performed with a Reynolds number Re_h = 5100, so that the flow is strongly three-dimensional and turbulent.

Setup and Mesh Convergence
The simulation setup follows Le and Moin's. The computational domain (see Figure 5) is defined with the following dimensions: L_x = 30 m (10 m prior to the step and a 20 h post-expansion section), L_y = 6 m, L_z = 1.25 m and h = 1 m, giving an expansion ratio ER = L_y/(L_y − h) = 1.2. Free-slip boundary conditions were used at the top boundary and periodic boundaries were set for the lateral walls. No-slip boundaries were used for the backward-facing step. A velocity inlet boundary condition was used at the inlet and a convective pressure outlet was imposed at the outlet. To simulate this case, a single-phase external flow setup with the isothermal model was selected. In addition, the Wall-Adapting Local Eddy-viscosity (WALE) turbulence model was used. The reference velocity and length used to compute aerodynamic coefficients are U_0 = 1 m/s and h = 1 m. The density and dynamic viscosity are selected as ρ = 1 kg/m³ and μ = 1.96 × 10⁻⁴ Pa·s, in order to obtain a Reynolds number Re_h = 5100. At the inlet, a velocity profile is imposed, defined by a momentum-thickness Reynolds number Re_θ = 670 (Re_δ* = 1000), where θ is the momentum thickness and δ_99 = 6.1 δ*. Grid independence is checked using four different grid sizes, as summarised in Table 1. The reattachment length, X_r, is used for the convergence analysis. It is calculated at different positions in the spanwise direction and averaged over the last 0.15 s of the simulation. The reattachment length converges to a value X_r = 6.33h, which is in good agreement with the reattachment length calculated by Le and Moin, X_r = 6.28h, and obtained from the experiment by Jovic and Driver, X_r = 6.00h. Therefore, the Extra-Fine grid was retained in the following section.

Results
In addition to the reattachment length, included in Table 1, in this section we include the distributions of means and standard deviations for the velocity and pressure at different longitudinal sections, X/h = 4 (recirculation region), 6 (reattachment location) and 10. The results extracted from XFlow are compared with Le and Moin and Jovic and Driver numerical and experimental results, respectively. Figure 6 shows the mean x-velocity profile at the three sections, X/h = 4, 6 and 10. The figure shows very good agreement between XFlow and the DNS/experiments for the mean velocity. We observe a small region with flow acceleration near the recirculation at X/h = 4, but the global trend is correct. Finally, Figure 8 shows the streamwise pressure coefficient defined as Cp = (p − p_∞)/(½ ρ U_0²).

Introduction
The self-propulsive fuselage concept was introduced by F. R. Goldschmied in the 50's, showing promise in reducing drag and increasing the aerodynamic efficiency of bullet-shaped fuselages (see Figure 9).
Namely, Goldschmied introduced the idea of including an almost passive flow control strategy to force the flow to remain attached, hence reducing drag. To control separation, a fan is installed within the body tail and induces a pressure deficit. It is capable of forcing the flow to remain attached even in the presence of strong adverse pressure gradients induced by the body geometry. The challenge for XFlow is to correctly predict the detached flow with the different pressure gradients produced by the fan.

Setup and Mesh Convergence
The model was created from Thomason's work. It was tested in a wind tunnel with the following dimensions: 12 m long, 1.22 m wide and 0.9 m high. The free-slip boundary condition was used at the lateral walls. A velocity inlet boundary condition was used at the inlet and a convective pressure outlet at the outlet. For the Goldschmied body's wall, the non-equilibrium wall function boundary condition was imposed to take into account pressure gradients in separated regions. The fan boundary condition was imposed through a pressure gradient, ∆P. To simulate this case, a single-phase external flow setup with the isothermal model was selected. In addition, the Wall-Adapting Local Eddy-viscosity (WALE) turbulence model was used. The reference velocity and length used to compute aerodynamic coefficients are U_∞ = 20 m/s and S_ref = 0.2402 m² (frontal surface area). The resulting Reynolds number (based on the free stream velocity and the body area) is Re = 8.9 × 10⁴. Regarding grid dependency, three different grid sizes for the zero pressure gradient condition, ∆P = 0 Pa, were computed, as shown in Table 2. The mean drag coefficient, Cd, is computed over the last 0.1 s of the simulation and converges towards the experimental value when refining the mesh. Therefore, the Fine grid was used for the following simulations because it gives the best agreement between the simulations and experimental values. An adaptive grid refinement as a function of the vorticity field was used during the computations.

Results
In addition to the Cd comparison shown in Table 2, Figure 10 shows the pressure coefficient distribution and surface flow pattern for both pressure gradient conditions, ∆P = 0 Pa and 500 Pa, with the finest grid. When comparing these distributions against experimental values it appears that the numerical solution accurately predicts the flow detachment position. For ∆P = 0 Pa, detachment occurs slightly earlier than in the experiments; however, the pressure drop level is well behaved. When the control is activated, at ∆P = 500 Pa, the pressure coefficient is slightly over-predicted; however, the flow detachment point is well captured.

Introduction
The 2nd High-Lift Prediction Workshop (HiLiftPW-2) provides a benchmark to study the linear and post-stall regions on a high-lift aircraft configuration. The DLR F11 geometry includes geometrical details such as slat tracks, flap track fairings and slat pressure tube bundles, which introduce complexity when generating the mesh. This section contains part of the results published by Holman et al. with the most complex geometry. They focused on the linear region of the lift, drag and moment curves, and on pressure coefficient distributions. However, only a few participants were able to predict the stall and post-stall regions produced by the slat tracks, flap track fairings and tube bundles.

Setup and Mesh Convergence
The geometry is based on the DLR F11.
The study employs two configurations. Configuration 2 is formed by a wing (with slat and flap deflected by 26.5 deg and 32 deg, respectively) and the fuselage. Configuration 5 is based on the Configuration 2 geometry and includes slat tracks, flap track fairings, and pressure tube bundles, as shown in Figure 11a. The reference area for dimensioning the aerodynamic coefficients is S_ref = 0.41913 m² and the moment reference center is x = 1.4289 m, y = 0.0 m, and z = −0.04161 m. The simulation is set in the XFlow environment, which features a virtual wind tunnel. The virtual wind tunnel defines a rectangular domain with pre-set boundary conditions designed for external aerodynamics. In this case, the wind tunnel is sized to be wide enough to avoid significant wall and blockage effects. The boundary conditions are set as an inlet velocity of 59.5 m/s, which corresponds to a Mach number of 0.175, and an outlet boundary set as a gauge pressure outlet of zero Pascals to mimic atmospheric conditions. The symmetry plane is set as a free-slip ground wall. Experimental data for two different flow conditions was provided by the HiLiftPW-2 committee. Two Reynolds numbers were defined, Re = 1.35 × 10⁶ and 15.1 × 10⁶, and the Mach number was set to Ma = 0.175 for both. Here, we only present the results for the higher Reynolds number (Re = 15.1 × 10⁶). The solver is set as single-phase, isothermal and the WALE turbulence model is selected. The velocity field is initialized with the magnitude and direction of the input boundary, and the initial pressure field is set to zero within the entire fluid domain. We perform a grid convergence study to determine the spatial discretization required to capture the physics appropriately. The HiLiftPW-2 committee provided coarse, medium, fine and extra-fine meshes for the participants to run their codes and to check convergence. However, XFlow avoids the traditional meshing process, using an octree structure to address any complex geometry. Comparisons for different sizes of near-wall refinements (extra-coarse, coarse, medium, and fine resolutions) are included in Table 3 at α = 16 deg. These meshes use a farfield scale fixed to 0.256 m, using Configuration 2. The criterion of convergence is based on the global lift and drag coefficients, averaged over the last 0.02 s of the simulation. We can clearly observe that the solution improves when the lattice is refined near the aircraft walls. Namely, the extra-fine grid provides good accuracy, showing a relative error for the lift prediction (in comparison with the wind tunnel) of only 0.4% at 16 deg. However, note that coarser meshes also provide accurate drag coefficients with a lower number of lattices. An important influence of the lattice resolution near walls is observed. However, the drag and lift are well captured once a resolution of 0.5 mm on the wing is set. The fine resolution provides good accuracy with an acceptable computational time, and is therefore retained for the rest of the study.

Results
In this section, we report XFlow results, which are compared to experimental data, for the aerodynamic coefficients, including a full polar curve. Figure 11a shows the lift coefficient for both configurations, which agrees well with the experimental data below 16 deg, in the linear region, where both the absolute lift value and the linear lift slope are successfully predicted.
For example, the lift coefficient predicted at 16 deg by XFlow shows a relative error of only 0.7% when compared to wind-tunnel data, and the linear slope is exactly matched. Additionally, we observe that the track fairings and the pressure tube bundles have a negligible effect on the aerodynamics in the linear region. Differences appear near stall, where the stall angle of attack is predicted at 22.4 deg by XFlow, while it is 21 deg for the wind-tunnel data. We observe that when including the flap track fairings, slat tracks and slat pressure tube bundles, the coefficients change significantly in this region, suggesting a high sensitivity of the aerodynamics to these geometric components near stall. Additionally, XFlow predicts the experimental flow topology very well, as depicted in Figure 11b, where the WMLES turbulence modeling can be appreciated. The flow structure at 24 deg can be observed in Figure 11c, which compares the two configurations. The figure shows the generation of turbulent strips in conf. 5, induced by the slat tracks and the slat pressure tube bundles. This turbulent pattern is not visible in conf. 2.

Introduction
The symmetric airfoil NACA0012 was selected to validate XFlow when computing flows around moving geometries. The main aerodynamic differences between the static and dynamic lift curves are sketched in Figure 12. Dynamic stall occurs when a rapid variation in the angle of attack is seen by the airfoil and typically leads to a hysteresis cycle in the aerodynamic forces. It is related to the appearance of a vortex near the leading edge on the suction side that enhances lift considerably (when compared to the static case). Massive and abrupt stall, linked to a sudden loss of lift, occurs once the vortex convects past the trailing edge. This phenomenon was widely studied experimentally and numerically due to its appearance in helicopter aerodynamics and wind turbines [16]. The variation of the angle of attack in the dynamic simulation is described by the following equation:

α(t) = α_0 + α_amp sin(Ω t),  with  Ω = 2 k U_∞ / c,

where α_0 is the initial angle of attack, α_amp is the angle-of-attack amplitude, Ω is the pitch rate, U_∞ is the reference velocity, k is the reduced frequency and c the reference chord length. It is interesting to note that the pitch rate, Ω, is directly governed by the non-dimensional reduced frequency, k. The reduced frequency governs the degree of unsteadiness such that k = 0 corresponds to steady-state aerodynamics, 0 ≤ k ≤ 0.05 to quasi-steady aerodynamics, and k > 0.05 to unsteady aerodynamics. Additionally, for k > 0.2 the aerodynamics is considered highly unsteady. In this study, a reduced frequency k = 0.1 was chosen to show the capability of XFlow to predict dynamic stall. Lift, drag and pitching moment will be compared to experimental data.

Setup and Mesh Convergence
The geometry for this study corresponds to the well-known NACA0012 airfoil. The reference chord length is c = 0.61 m (the aerodynamic chord). The center of rotation is located at 25% of the reference length, c, from the leading edge. These reference values are used to calculate the lift, drag and pitching moment coefficients. A rectangular domain was used, 32c long, 5c high and 2.5c wide. A farfield velocity boundary condition is used at the inlet and zero gauge pressure at the outlet. For the lateral walls, slip walls were used. At the airfoil walls, a non-equilibrium wall function is selected to take into account pressure gradients that may govern separation and stall.
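As a small illustration of the enforced pitching law above, the snippet below evaluates α(t) over one period; the reduced frequency k = 0.1 and chord c = 0.61 m are taken from the text, while the freestream velocity, mean angle and amplitude are hypothetical placeholders.

```python
# Hedged sketch of the enforced pitching motion used for the dynamic-stall case.
# alpha(t) = alpha0 + alpha_amp * sin(Omega * t), with Omega = 2 * k * U_inf / c.
# k and c are quoted in the text; alpha0, alpha_amp and U_inf are assumptions.
import numpy as np

k, c = 0.1, 0.61                 # reduced frequency and reference chord [m] (from text)
U_inf = 25.0                     # hypothetical freestream velocity [m/s]
alpha0, alpha_amp = 10.0, 10.0   # hypothetical mean angle and amplitude [deg]

Omega = 2.0 * k * U_inf / c      # pitch rate [rad/s]
t = np.linspace(0.0, 2.0 * np.pi / Omega, 9)   # one pitching period
alpha = alpha0 + alpha_amp * np.sin(Omega * t)
for ti, ai in zip(t, alpha):
    print(f"t = {ti:6.3f} s  alpha = {ai:6.2f} deg")
```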
Additionally, the rotating geometry is addressed with the immersed boundary method. An external flow setup (single phase, air fluid properties) with the isothermal flow condition and the Wall-Adapting Local Eddy-viscosity (WALE) turbulence model was selected. The flow conditions are defined through the non-dimensional numbers: Mach number Ma = 0.072, Reynolds number Re = 0.98 × 10⁶, and reduced frequency k = 0.1. Regarding the grid refinement, Figure 13 shows the grid convergence for different refinements (see Table 4). Figure 13 depicts the hysteresis for all lattice sizes. Additionally, we observe that for the finest mesh, the linear region and the maximum lift are well captured. Discrepancies can be seen in the lower part of the curve for all mesh sizes.

Results
In this section, the lift coefficient hysteresis and snapshots during the dynamic simulation with the Fine grid are shown in Figure 14. The lift coefficient was compared with experimental data. XFlow captures the maximum lift well. The hysteresis is relatively well captured, although discrepancies are observed in the recovery region. The instantaneous vorticity iso-contours are depicted for various angles of attack, which correspond to different points in the lift hysteresis loop. Please note that in the linear region the flow is mainly attached (see Figure 14a). In Figure 14b the flow has detached and the convected vortex characteristic of rapid pitching (and dynamic stall) is near the rear of the wing, which will soon produce an abrupt stall. The snapshots are consistent with the simulated and experimental curves. Figure 14c is characterized by massive detachment and deep stall; the suction side of the wing indeed shows massive detachment. The lift coefficient in Figure 14d,e disagrees between computations and experiments. In these regions XFlow predicts a more rapid recovery (less detachment) than in the experiments.

Introduction
Tubercles were proposed to soften the aerodynamic characteristics near stall; the idea goes back to the humpback whale's fins (see Figure 15), which evolved over millions of years to enhance manoeuvrability in water. For example, an amazing feature of the humpback is its acrobatic behavior during feeding, known as bubble netting. The whale's fins operate at Reynolds numbers of Re = 1.1 × 10⁶, based on the sea water viscosity and density at 16 °C. Tubercles are one of several passive flow control devices being explored to enhance aerodynamic performance. An overview of devices that may help control stall or improve performance (decrease drag) was summarized by Aftab et al. The geometrical parameters that define the tubercles are the following:
Wavelength, λ/c: the non-dimensional wavelength (or equivalently the spatial frequency) of the tubercles, with c being the local chord.
Amplitude, A/c: the amplitude of the tubercles, also made non-dimensional using the local chord c.
Span section: the fraction of the wing span where the tubercles start, with b being the total wing span.
Aerodynamic characteristics at different Reynolds numbers were obtained in experiments. Johari et al. tested a NACA 634-021 airfoil over an angle-of-attack range from 6 deg to 30 deg at Re = 1.83 × 10⁵ and 2 × 10⁶. Airfoils with varying tubercle amplitude, A = 0.025c, 0.05c and 0.12c, and wavelength, λ = 0.25c and 0.5c, were tested. Forces and moments were measured in a water tunnel. The results showed a smooth stall, due to the presence of the tubercles, improving the post-stall behavior by 50%.
Configurations with a small amplitude, A = 0.025c, and a large wavelength, λ = 0.5c, gave better results. Additionally, tubercles lead to a reduction in performance in the pre-stall region but avoid the abrupt stall noticed for the baseline airfoil. Later, Wei et al. reported the differences between directing the tubercles normal to the span (modified A) or normal to the tapered leading edge (modified B). They also considered a tapered wing with sweep-back, with tubercles of amplitude A = 0.12c, where c is the mean chord of the rectangular wing, and a larger wavelength, λ = 0.5c. Johari et al. obtained similar results with these configurations, where the post-stall behavior improved; however, the drag increased slightly. In this section, a swept wing is used to study the effect of leading-edge tubercles on the aerodynamic properties.

Setup and Mesh Convergence
A swept-wing geometry was used for this study. The analysis was performed using a baseline geometry, which was modified to include tubercle configurations. The mean aerodynamic chord and the wingspan are MAC = 2.702 m and b = 6.225 m, respectively. The wing with leading-edge tubercles was based on the baseline wing. In this study, the tubercles were varied to study their effect on the aerodynamic coefficients. The parametric study was defined with the wavelengths λ/c = 0.1, 0.2 and 0.4, and the amplitudes A/c = 0.025, 0.05 and 0.1. These values were selected based on previous studies by Wei et al., who showed that for these values the effect of the tubercles was more noticeable in this type of wing configuration. Table 5 shows the different sets of parameters used to generate nine tubercle configurations. Additionally, the idea was to localize the tubercles only near the wing tip, since for the baseline wing geometry stall was observed to initiate at the tip; in this study the tubercled section was fixed to the outer 50% of the span. Selected configurations are shown in Figure 16. The wind tunnel dimensions for the simulation are 37c long, 9.25c wide and 18.5c high. The symmetry plane, on which the wing geometry lies, was defined as a free-slip wall. Velocity inlet boundary conditions are used at the inlet. Zero gauge pressure is imposed at the outlet. Finally, the wing wall required the non-equilibrium enhanced wall functions to take into account pressure gradients. An external flow setup (single phase, air fluid properties) with the isothermal flow condition and the Wall-Adapting Local Eddy-viscosity (WALE) turbulence model was selected. The flow condition is defined by the Mach number, set to Ma = 0.2, and the Reynolds number per unit length, fixed to Re/L = 4.66 × 10⁶ with a reference length L. These Mach and Reynolds numbers correspond to sea level conditions. Please note that for each new geometry the surface area was computed. This area is then used to non-dimensionalize the lift and drag forces, such that the resulting aerodynamic coefficients are consistent and represent the force relative to the modified shape. The wing with A/c = 0.025 and λ/c = 0.2 was used to study mesh convergence. For this purpose, we select an angle of attack α = 18 deg. It corresponds to the stall region in the lift curve (as will be shown later in Figure 17). To ensure that the mesh refinement does not affect the results, a comparison was performed for different sizes of near-wall refinements. In all cases the farfield scale is 5.12 m.
Table 6 summarizes the lattice resolution, the number of lattices in each grid, the drag and lift coefficients averaged over the last 1 s of the simulation, the computational time and the number of cores used for the computation. The four simulations use adaptive mesh refinement based on the vorticity field with the given lattice resolution value. Regarding the lift and drag coefficients in Table 6, it can be seen that the differences between the 5 mm and 10 mm resolutions are 1% for lift and 2% for drag. This convergence is remarkable since at this angle of attack (with stalled flow) the flow is more complex to resolve. Looking for a trade-off between accuracy and computational cost, the 10 mm resolution (Medium mesh) was used for the rest of the simulations.

Results
The results for lift, drag and efficiency at α = 18 deg are compared in Table 7 for all configurations. The percentages indicate the variation of the modified geometry with respect to the baseline configuration. First, it can be seen that the variations are relatively small in general. Second, we observe that the configurations A1040 and A0540 show the largest increase in the lift coefficient. The A0540 configuration shows an increase in the drag coefficient; however, A1040 shows a decrease in the drag coefficient. This can be further quantified by comparing the lift over drag, L/D, where they show 5.9% and 2.7% gains in L/D with respect to the baseline configuration. It may be concluded that λ/c = 0.4 is the wavelength that provides the most benefit in terms of lift coefficient, and that amplitudes around A/c = 0.1 show the most potential. In this work, the choice of the optimal configuration is based on ∆Cd < 0, ∆Cl > 0 and maximum ∆(L/D). Therefore, the optimal tubercle configuration geometry is A1040 (A/c = 0.1 and λ/c = 0.4).
Table 7. Percentage variation of lift, drag and efficiency of the modified geometries (see Table 4) with respect to the baseline configuration, for the configurations Baseline, A1010, A0510, A02510, A1020, A0520, A02520, A1040, A0540 and A02540.
Having determined the most promising configuration at a high angle of attack (α = 18 deg), the remaining angles of attack were simulated, as depicted in Figure 17. The baseline geometry was compared to the optimal tubercle configuration geometry (A/c = 0.1 and λ/c = 0.4). More interestingly, note that the geometries with tubercles show an identical linear regime at low angles of attack and a higher maximum lift with a more benign stall. However, when comparing the drag, we observe a mild increase at the medium range of angles of attack that results in a decrease of the L/D. Overall, the efficiency, L/D, shows that the tubercles diminish the performance at medium-range angles of attack, even if the lift coefficient improves near stall. These trends are remarkably similar to the experiments reported by Wei et al. for a tapered swept-back wing, which were mentioned in the introduction of this section. For completeness, Figure 18 includes visualizations of instantaneous pressure contours on the wing surface for various angles of attack. The figures suggest that the higher lift at large angles of attack is related to localized suction regions, which are generated by the tubercles. These regions are particularly present at angles of 12, 14 and 18 deg and are located near the wing tip. These results suggest that acting on the wing tip may have a beneficial effect on the aerodynamic performance, perhaps more important than including tubercles along 50% of the span.
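Before the conclusions, a hedged sketch of how a sinusoidal leading-edge tubercle distribution of the kind used in the parametric study could be parameterized; the functional form, the use of the mean chord as the local chord, and the sampling are illustrative assumptions rather than the exact geometry-modification procedure used to build the A1040-type wings.

```python
# Illustrative tubercle parameterization: a sinusoidal leading-edge perturbation
# applied over the outer 50% of the span. Functional form and values are assumptions,
# except b, MAC, A/c and lambda/c, which are quoted in the text for configuration A1040.
import numpy as np

b    = 6.225          # wing span [m], as quoted in the text
c    = 2.702          # mean aerodynamic chord [m], used here as the local chord
A_c  = 0.10           # tubercle amplitude A/c (configuration A1040)
l_c  = 0.40           # tubercle wavelength lambda/c (configuration A1040)
span_start = 0.50     # tubercles applied on the outer 50% of the span

y = np.linspace(0.0, b, 200)                     # spanwise stations
amplitude = np.where(y >= span_start * b, A_c * c, 0.0)
# leading-edge offset: sinusoidal bumps of wavelength lambda = l_c * c along the span
x_le = amplitude * np.sin(2.0 * np.pi * (y - span_start * b) / (l_c * c))
print("max leading-edge excursion [m]:", x_le.max())
```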
Conclusions
This work showed that the lattice Boltzmann method implemented in XFlow is able to solve advanced flow problems, even in the presence of complex geometries or moving parts. Five industrial cases were studied, showing the reliability of the lattice Boltzmann method to resolve detached flows in engineering problems of industrial interest. The automation of the mesh generation using octrees for any geometry and angle of attack in XFlow significantly shortened the simulation setup, transforming most of the engineering effort into machine cost. Mesh automation using octrees does not compromise accuracy: the results agree well with published data for detached flows. Furthermore, the simulations were run in competitive turnaround times, even when using an unsteady LES approach. It is concluded that XFlow provides accurate results for detached flows, enabling the study of stall flows, where the flow becomes highly unsteady and turbulent, while minimising human interaction.
Ectopic Expression of Fiber-Related Gbwri1 Complements Seed Phenotype in Arabidopsis thaliana
WRINKLED1 belongs to the AP2/EREB family of transcription factors, whose role in seed oil biosynthesis has been well established. The objective of the study was to trace the role of fiber-related Gbwri1 in seed development and fatty acid biosynthesis. In this study, we isolated a transcript from elite fiber-producing cotton (Gossypium barbadense), which is over-expressed in G. barbadense fibers as compared to G. hirsutum and G. arboreum. The putative protein encoded by this transcript exhibited homology in specific domains and protein structure with WRINKLED1 of Arabidopsis thaliana and was thus designated Gbwri1. In this study, we investigated the functional homology of the fiber-elongation-related Gbwri1 with the fatty acid biosynthesis regulator Atwri1. Ectopic expression of Gbwri1 in the wri1-3 mutant of A. thaliana was analyzed. In the transgenic lines of A. thaliana, Gbwri1 restored the seed weight, seed area, and surface morphology to those of the wild type. Gbwri1 transformation rescued the wrinkled phenotype of wri1-3 mutants by restoring the expression of the fatty acid biosynthesis genes biotin carboxyl carrier protein isoform 2 (bccp2) and keto-ACP synthase 1 (kas1). Moreover, the seedling development of transgenic lines on sucrose-free medium demonstrated that Gbwri1 was able to regulate the supply of sucrose for normal seedling establishment. Our results showed that the transformation of Gbwri1 into the A. thaliana wri1-3 mutant was able to complement the impaired wri1-3 phenotype. Thus, Gbwri1 is involved in cotton fiber development and fatty acid biosynthesis in seeds. © 2021 Friends Science Publishers
1. Field Embodiments of the present disclosure relate to a vacuum cleaner capable of being used in an upright mode and a handy mode. 2. Description of the Related Art In general, vacuum cleaners are appliances designed to do cleaning by suctioning dust along with air using a suction force generated from a fan rotated by a motor, separating the dust included in the suctioned air from the air, and collecting the separated dust. Such vacuum cleaners include a main body with a fan motor generating a suction force, a head unit that is disposed in the front of the main body and suctions dust from a floor along with air, a handle grasped by a user so as to allow movement of the vacuum cleaner, and an extension frame that connects the handle and the main body and enables the user to move the main body in a standing posture. Some of the vacuum cleaners have recently been designed to include a first cleaner module that cleans a floor in an upright mode and a second cleaner module that is removably installed on the first cleaner module and is used in a handy mode, thereby making it possible to be used in the upright mode and the handy mode.
(L-R) Chef Chas Boydston, President of the Chicago Chefs of Cuisine; Chef Joe Randall of Chef Joe Randall’s Cooking School of Savannah; and Chef Dwight Evans, Vice President of the Chicago Chefs of Cuisine and Chef of the Year for 2013. At Chef Joe Randall’s Cooking School, for more than 13 years, Chef Joe has shared his passion for southern food and culture with the Savannah metropolitan region and beyond. He preaches the gospel of authentic southern cuisine to all comers. The success of the school is a credit to Chef Joe’s great love of southern cuisine and the city of Savannah, and to his efforts to share the historically accurate heritage of southern culture with visitors from all over the world. Chef Joe Randall is the owner of a nationally acclaimed cooking school in Savannah, Ga. He is a 50-year veteran of the hospitality and food service industry. The depth and range of his experience and his dedication to professional excellence have earned him the respect of professional chefs as well as restaurant managers and owners.
// packages/tiled-map/src/MapData/TiledMapData.ts
/**
 * @license
 * Copyright 2021 piyoppi
 * SPDX-License-Identifier: MIT
 */
import { MapChip, MapChipProperties, AutoTileMapChipProperties, isAutoTileMapChipProperties, AutoTileMapChip } from './../MapChip'
import { MapChipImage } from '../MapChipImage'
import { MapPaletteMatrix } from './MapPaletteMatrix'

export type TiledMapDataItem = MapChip | null

export type TiledMapDataProperties = {
  chipCountX: number,
  chipCountY: number,
  values: Array<number>
  palette: Array<MapChipProperties | AutoTileMapChipProperties | null>
}

export class TiledMapData extends MapPaletteMatrix<TiledMapDataItem> {
  filter(needles: Array<MapChip>): TiledMapData {
    const filtered = this.items.map(chip => needles.some(needle => !!chip && needle.compare(chip)) ? chip : null)
    return new TiledMapData(
      this.width,
      this.height,
      filtered
    )
  }

  findByImage(image: MapChipImage) {
    const registeredChips = new Set()
    return this.items.filter(chip => {
      if (!chip) return false
      const found = chip.items.find(fragment => fragment.chipId === image.id) && !registeredChips.has(chip.identifyKey)
      if (found) {
        registeredChips.add(chip.identifyKey)
      }
      return found
    }) as Array<MapChip>
  }

  toObject(): TiledMapDataProperties {
    return {
      chipCountX: this.width,
      chipCountY: this.height,
      values: this.values.items,
      palette: this.palette.map(data => data ? (data as MapChip).toObject() : null)
    }
  }

  static fromObject(val: TiledMapDataProperties) {
    const palette = val.palette.map(data => {
      if (!data) return null
      if (isAutoTileMapChipProperties(data)) {
        return AutoTileMapChip.fromObject(data)
      }
      return MapChip.fromObject(data)
    })
    const tiledMapData = new TiledMapData(val.chipCountX, val.chipCountY, [])
    tiledMapData.setValuePalette(val.values, palette)
    return tiledMapData
  }
}
package com.mapd.parser.extension.ddl.heavydb; public enum HeavyDBEncoding { NONE, FIXED, COMPRESSED, DICT, DAYS }
LOS ANGELES -- Jordan Farmar has agreed to sign with the Los Angeles Lakers, the veteran guard told ESPNLosAngeles.com Tuesday night. The sides agreed to a one-year, veteran minimum deal worth approximately $1 million. L.A. must first negotiate a buyout with Farmar's club in Turkey, Anadolu Efes, which is believed to be in the $500,000 range. The buyout does not count against the salary cap, an important detail to the luxury tax-laden Lakers. The deal required a significant financial sacrifice by Farmar to be completed. The 6-foot-2, 180-pound guard signed a three-year, $10.5 million deal last summer to play in the Turkish Basketball League. "They knew about my deal overseas and really didn't push it earlier because they didn't think I'd be willing to give up that guaranteed money I had over there," Farmar said in a phone interview with ESPNLosAngeles.com. "I wanted to be back in the NBA, but more importantly, back with the Lakers. This is the only situation I would have taken a minimum deal with." Farmar, 26, grew up in L.A. and played his first four professional seasons with the Lakers after being drafted in the first round out of UCLA in 2006. He was a key reserve on the Lakers' championship teams in 2008-09 and 2009-10. "I've been a Laker since I was born," Farmar said. "I grew up a Laker fan, so regardless of where I am or who I'm playing for, or what I'm doing in life, I'm always going to stay connected to what's going on here [in Los Angeles]." After leaving L.A. as a free agent in 2010, he spent parts of two seasons with the New Jersey Nets, scoring a career-high 10.4 points per game on a career-best 44 percent 3-point shooting in 2011-12. Farmar was traded to the Atlanta Hawks in the Joe Johnson deal after that season, but had his contract bought out and played last season with Anadolu Efes, averaging 13.8 points and 3.9 assists in 29 games played. Farmar also played seven games for Israel's Maccabi Electra of the EuroLeague during the NBA lockout in the summer of 2011. "The plan for me was to get back to the NBA eventually regardless," Farmar said. "I really, really enjoyed my time in Israel. I thought it was a possibility that it would be exactly the same [in Turkey] and I would hop on over there and never look back and I would make good money overseas, but just being over there and staying up until 2, 3, 4 in the morning and watching every NBA game, or watching the Lakers go through what they were going through was just tough. "I missed my family, I missed being home and, ultimately, I missed being a Laker." Farmar has played primarily at point guard throughout his NBA career, making him the third point guard on a Lakers team that already has Steve Nash and Steve Blake under contract. However, Farmar said he has spoken to coach Mike D'Antoni about playing some shooting guard as well, especially in the early going as Kobe Bryant recovers from a ruptured Achilles tendon. "We talked about it a lot," Farmar said. "Me being able to play both positions and Steve Blake being able to play both positions, if we wanted to go small to move Kobe [Bryant] down [to small forward] and Jodie Meeks down and stuff like that [we could do that]. … It was important. [D'Antoni's] system is going to be to open it up and he wants to get a lot of guard play and decision-makers on the floor together at times." Farmar gives L.A. nine players under contract for next season. Bryant, Nash, Pau Gasol, Metta World Peace, Blake, Jordan Hill, Jodie Meeks and Chris Kaman, whom L.A. 
agreed to ink to a deal when the league moratorium is lifted Wednesday, are the others. They have also extended a qualifying offer to Robert Sacre, making him a restricted free agent, and drafted Ryan Kelly in the second round. If they both make the team, that puts the roster at 11. The roster could dip down back to 10 if L.A. decides to exercise its amnesty clause, with World Peace being the most likely candidate. It all adds up to the Lakers still looking to add three to four more players on veteran minimum deals before the season begins, while still maintaining their goal of having every player but Nash come off the books next summer in the hopes of making a major splash in free agency. "I think it's a work in progress right now," Farmar said. "We're trying to put things together and still leave flexibility for the future." Farmar said he is looking forward to slipping back into his No. 1 purple and gold uniform. "I think it will be an amazing feeling, man," Farmar said. "At the end of the day, I'll always feel like I'm a Laker, regardless of I'm here or somewhere else. I'm happy just to be back and to be able to help this team go wherever we can this year and I would love to be here for the future, so we'll see how it works out."
// src/main/java/com/exactpro/th2/validator/model/BoxLinkContext.java
/*
 * Copyright 2022 Exactpro (Exactpro Systems Limited)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.exactpro.th2.validator.model;

import com.exactpro.th2.infrarepo.repo.RepositoryResource;
import com.exactpro.th2.validator.enums.BoxDirection;
import com.exactpro.th2.validator.enums.SchemaConnectionType;

public final class BoxLinkContext {
    private String boxName;
    private String boxPinName;
    private BoxDirection boxDirection;
    private SchemaConnectionType connectionType;
    private RepositoryResource linkedResource;
    private String linkedResourceName;
    private String linkedPinName;

    public String getBoxName() {
        return boxName;
    }

    public String getBoxPinName() {
        return boxPinName;
    }

    public BoxDirection getBoxDirection() {
        return boxDirection;
    }

    public SchemaConnectionType getConnectionType() {
        return connectionType;
    }

    public RepositoryResource getLinkedResource() {
        return linkedResource;
    }

    public String getLinkedResourceName() {
        return linkedResourceName;
    }

    public String getLinkedPinName() {
        return linkedPinName;
    }

    public static class Builder {
        private String boxName;
        private String boxPinName;
        private BoxDirection boxDirection;
        private SchemaConnectionType connectionType;
        private RepositoryResource linkedResource;
        private String linkedResourceName;
        private String linkedPinName;

        public Builder setBoxName(String boxName) {
            this.boxName = boxName;
            return this;
        }

        public Builder setBoxPinName(String boxPinName) {
            this.boxPinName = boxPinName;
            return this;
        }

        public Builder setBoxDirection(BoxDirection boxDirection) {
            this.boxDirection = boxDirection;
            return this;
        }

        public Builder setConnectionType(SchemaConnectionType connectionType) {
            this.connectionType = connectionType;
            return this;
        }

        public Builder setLinkedResource(RepositoryResource linkedResource) {
            this.linkedResource = linkedResource;
            return this;
        }

        public Builder setLinkedResourceName(String linkedResourceName) {
            this.linkedResourceName = linkedResourceName;
            return this;
        }

        public Builder setLinkedPinName(String linkedPinName) {
            this.linkedPinName = linkedPinName;
            return this;
        }

        public BoxLinkContext build() {
            BoxLinkContext boxLinkContext = new BoxLinkContext();
            boxLinkContext.boxName = boxName;
            boxLinkContext.boxPinName = boxPinName;
            boxLinkContext.boxDirection = boxDirection;
            boxLinkContext.connectionType = connectionType;
            boxLinkContext.linkedResource = linkedResource;
            boxLinkContext.linkedPinName = linkedPinName;
            boxLinkContext.linkedResourceName = linkedResourceName;
            return boxLinkContext;
        }
    }
}
The field of this invention relates in general to phase modulation and in particular to a phase modulator with improved accuracy and stability. In many instruments, it is important to provide an accurate source of phase modulation which can serve as a calibration standard. In the paper "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", Proceedings of the Institute of Radio Engineers, Vol. 24, No. 5, pp. 689-740, May 1936, Armstrong disclosed a phase modulator for providing a source of phase modulation. The input RF power is split into two paths. In one path, a product modulator is used to cause the modulating signal to vary the amplitude of the RF signal. A 20° phase changing device is employed in either one of the two paths. The outputs of the two paths are then summed to give the output of the phase modulator. The product modulation in the second path is performed with a more or less linear modulator so that the resulting sideband amplitudes are nearly proportional to the amplitude of the modulating signal. Hence the sideband amplitudes in the output of the modulator are sensitive to changes in amplitude of the modulating signal or changes in the gain or linearity of the product modulator. It is therefore desirable to provide a phase modulator in which such disadvantages of the Armstrong phase modulator are overcome.
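To make the two-path arrangement concrete, here is a small numerical sketch of an Armstrong-style modulator: the carrier is split, one path is amplitude-modulated by the message and shifted in phase, and the two paths are summed, which approximates true phase modulation for small modulation indices. The 90-degree shift, the frequencies and the index below are illustrative assumptions, not values taken from the patent text.

import math

fc = 1000.0      # carrier frequency [Hz] (assumed)
fm = 50.0        # modulating frequency [Hz] (assumed)
beta = 0.1       # small modulation index so the approximation holds
fs = 100_000.0   # sample rate [Hz]

for n in range(5):
    t = n / fs
    carrier = math.cos(2 * math.pi * fc * t)          # direct path
    quadrature = math.sin(2 * math.pi * fc * t)       # phase-shifted path
    message = beta * math.cos(2 * math.pi * fm * t)   # modulating signal
    armstrong = carrier - message * quadrature        # sum of the two paths
    ideal = math.cos(2 * math.pi * fc * t + message)  # exact phase modulation
    print(f"t={t:.6f} s  armstrong={armstrong:+.5f}  ideal={ideal:+.5f}")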
Last year, I was fortunate to be able to go out to Eastern Greene Elementary School, to interview a second grade teacher named Larry Leonard. By chance, I had been assigned to cover a dinner being held at the Event Center, and I happened to be seated across from Eastern Greene Schools Superintendent, Ted Baechtold and his wife. Baechtold told me about Mr. Leonard’s second grade dual immersion ASL class, in which students are taught for half of the day in spoken English, and completely in American Sign Language for the second half. I was immediately intrigued by the idea, and contacted Mr. Leonard to arrange for a class visit. I was delighted to see Leonard in action, and fascinated to watch the class as they played card games, had a math lesson and basically acted like second graders, only cooler. One little girl named Kyanna caught my eye, as she noticed the spiders on my shoes and taught me the sign for ‘spider’. The world needs more teachers like Mr. Leonard, who, as I explain in this issue of the newspaper, took his class to the Indiana School for the Deaf in Indianapolis, enriching these kids’ lives by showing them the very people they are learning to communicate with. In honor of American Sign Language Day, I salute Mr. Leonard, Mr. Baechtold and Eastern Greene Schools, GOOD JOB!
WASHINGTON (AP) - Seeking to circumvent congressional opposition, President Barack Obama will promote a series of executive branch steps aimed at jumpstarting the economy this week, beginning with new rules to make it easier for homeowners to refinance their mortgages. An administration official said the housing initiative will help homeowners with little or no equity in their homes refinance by cutting the cost of doing so and removing caps for deeply underwater borrowers. The new rules apply to homeowners with federally guaranteed mortgages who are current on their payments. Obama will discuss the initiative during a meeting with homeowners Monday in Las Vegas, a city hard hit by foreclosures and sagging home prices. One in every 118 homes in the state of Nevada received a foreclosure filing in September, according to the foreclosure listing firm RealtyTrac. With the president's jobs bill struggling in Congress, the White House is refocusing its efforts on steps Obama can take to address the nation's economic woes without getting lawmakers' approval. During his three-day trip to the West Coast this week, Obama will use a new catchphrase to try to push Republicans into action: "We can't wait." It's his latest in a string of slogans aimed at placing blame on Republicans for lack of action on the economy. GOP leaders counter that the sluggish economy and stubbornly high unemployment rate are the result of Obama administration policies, including the 2009 stimulus package and financial regulation bill, that have failed. "They got everything they wanted from Congress the first two years. Their policies are in place. And they are demonstrably not working," Senate Minority Leader Mitch McConnell, R-Ky., said Sunday. Last month, Obama announced a $447 billion jobs plan, filled with tax increases on the wealthy and new spending on education, infrastructure and aid to state and local governments. Efforts to pass the full measure were blocked by Senate Republicans, who see the president's proposal as a second stimulus. That's left Obama and his Democratic allies pushing lawmakers to pass the bill in individual pieces, though the fate of most of the measures remains unclear. The housing program Obama will discuss Monday will be implemented by the independent Federal Housing Finance Agency. At its core, the initiative will relax eligibility standards for a federal refinancing program, allowing those who owe more on their house than it is worth to take advantage of loans with lower interest rates. The administration official had no estimate for how many homeowners could be helped by relaxed rules. The official spoke on the condition of anonymity to discuss the housing program ahead of the president's Las Vegas meeting. Following his events in Nevada, the president will travel to Los Angeles for three fundraisers for his re-election campaign, including one at the home of movie stars Melanie Griffith and Antonio Banderas. Obama will also make stops this week in San Francisco and Denver.
package com.yj.platform.dbproxy.mybatis.model;

import com.baomidou.mybatisplus.annotation.FieldFill;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.Version;

import java.io.Serializable;
import java.util.Date;

/**
 * Base class for all models.
 *
 * @author 杨旭平
 * @date 2021/6/17 10:59
 */
public class BaseModel implements Serializable {
    /**
     * Creation time
     */
    @TableField(fill = FieldFill.INSERT)
    protected Date createTime;

    /**
     * Last update time
     */
    @TableField(fill = FieldFill.INSERT_UPDATE)
    protected Date updateTime;

    /**
     * Record version (used for optimistic locking)
     */
    @Version
    protected Integer version;

    public Date getCreateTime() {
        return createTime;
    }

    public void setCreateTime(Date createTime) {
        this.createTime = createTime;
    }

    public Date getUpdateTime() {
        return updateTime;
    }

    public void setUpdateTime(Date updateTime) {
        this.updateTime = updateTime;
    }

    public Integer getVersion() {
        return version;
    }

    public void setVersion(Integer version) {
        this.version = version;
    }
}
// dist/user-info/user-info.service.d.ts
import UserInfoDatabase from '../database/userInfo.database';
import { AddToWatchRequest } from './dto/add-to-watch.dto';
import { UpdateUserInfoDto } from './dto/update-user-info.dto';
export declare class UserInfoService {
    private db;
    getOtherUserInfo(id: string): Promise<any>;
    addToWatch(postToAdd: AddToWatchRequest): Promise<boolean>;
    constructor(db: UserInfoDatabase);
    create(info: any): Promise<any>;
    findAll(): Promise<any>;
    findOne(id: string): Promise<any>;
    update(id: string, updateUserInfoDto: UpdateUserInfoDto): Promise<any>;
    remove(id: string): Promise<any>;
}
/*
 * Copyright 2014 Guidewire Software, Inc.
 */
package gw.lang.reflect.gs;

import gw.lang.parser.Keyword;
import gw.lang.reflect.IDefaultTypeLoader;

public enum ClassType {
  Enhancement,
  Program,
  Template,
  Eval,
  Class,
  Interface,
  Structure,
  Annotation,
  Enum,
  JavaClass,
  Unknown;

  public boolean isJava() {
    return this == JavaClass;
  }

  public boolean isGosu() {
    return this == Enhancement ||
           this == Program ||
           this == Template ||
           this == Eval ||
           this == Class ||
           this == Interface ||
           this == Structure ||
           this == Annotation ||
           this == Enum;
  }

  public static ClassType getFromFileName(String name) {
    if (name.endsWith( IDefaultTypeLoader.DOT_JAVA_EXTENSION )) {
      return JavaClass;
    }
    if (name.endsWith( GosuClassTypeLoader.GOSU_ENHANCEMENT_FILE_EXT )) {
      return Enhancement;
    }
    if (name.endsWith( GosuClassTypeLoader.GOSU_PROGRAM_FILE_EXT )) {
      return Program;
    }
    if (name.endsWith( GosuClassTypeLoader.GOSU_TEMPLATE_FILE_EXT )) {
      return Template;
    }
    if (name.endsWith( GosuClassTypeLoader.GOSU_CLASS_FILE_EXT ) || name.endsWith( ".gr" ) || name.endsWith( ".grs" )) {
      return Class;
    }
    return Unknown;
  }

  public String getExt() {
    switch( this ) {
      case Class:
        return GosuClassTypeLoader.GOSU_CLASS_FILE_EXT;
      case Program:
        return GosuClassTypeLoader.GOSU_PROGRAM_FILE_EXT;
      case Enhancement:
        return GosuClassTypeLoader.GOSU_ENHANCEMENT_FILE_EXT;
      case Template:
        return GosuClassTypeLoader.GOSU_TEMPLATE_FILE_EXT;
      default:
        return "";
    }
  }

  public String keyword() {
    switch( this ) {
      case Enhancement:
        return Keyword.KW_enhancement.getName();
      case Interface:
        return Keyword.KW_interface.getName();
      case Structure:
        return Keyword.KW_structure.getName();
      case Annotation:
        return Keyword.KW_annotation.getName();
      case Enum:
        return Keyword.KW_annotation.getName();
      case Class:
      case Program:
      case Template:
      case Eval:
        return Keyword.KW_class.getName();
      default:
        return "<unknown>";
    }
  }
}
Astrocytoma in childhood: survival and performance. Seventy-nine children (35 female, 44 male) with proven or presumed astrocytoma were treated from 1967 to 1987. The tumors were located supratentorially in 24 children, in the cerebellum in 21 children, and in the pons in 34 children. If possible, a radical tumor resection (4%), a subtotal tumor resection (51%), or a biopsy (8%) was performed. The predominant pathological Kernohan grades for the supratentorial, cerebellar, and pontine tumors were II, II, and IV, respectively. Histology was unknown in 15 of 34 pontine tumors and in 1 of 24 supratentorial tumors. Low-grade tumors (46%) were irradiated with a local field (1.8/45-50 Gy), and children with high-grade tumors (34%) received total-brain irradiation (1.8/40 Gy) followed by a boost irradiation (10 Gy) in 5 or 6 fractions. Overall 1-, 5-, and 10-year survival of children with supratentorial, cerebellar, and pontine tumors was 96%-91%-46%, 95%-95%-95%, and 35%-20%-20%, respectively. For all tumor locations, 77% of deaths occurred within 2 years of treatment. The performance status of children with supratentorial and cerebellar astrocytoma increased during the first year after treatment and then stabilized at a rather high level (mean performance after 5 years of 60% and 70%, respectively). Children with pontine tumors showed a steep decrease in performance status during the first year after treatment and then stabilized at a low level (mean performance after 5 years of 15%). In our study, children with supratentorial astrocytoma showed improvement in both survival and performance status after irradiation following surgical removal of the tumor. (ABSTRACT TRUNCATED AT 250 WORDS)
Genetic Diversity of Mungbean (Vigna radiata L.) in Iron and Zinc Content as Impacted by Farmers' Varietal Selection in Northern India
For the last few years, a debate has been continuing over the issue of malnutrition and hunger in the developing countries. The present article investigates the importance of participatory varietal selection in the development of a suitable cultivar of mungbean, along with the nutritional content and the agronomic traits of the cultivars selected by farmers in participatory varietal selection. A combination of a conventional survey strategy, participatory varietal selection, molecular markers, and chemical analysis was used to carry out the study. The results revealed that farmers have the capacity to utilize available genetic resources to manage disease, and that they can identify disease at early stages of plant development. Genetic diversity was studied using 23 inter-simple sequence repeat markers, which showed that the extent of genetic diversity ranges from 65% to 87%, while chemical analysis of the selected mungbean cultivars showed moderate amounts of iron (3.9 mg/100 g) and zinc (2.5 mg/100 g).
New Report Details The Horrors Of Life Under ISIS In Sirte, Libya : The Two-Way Human Rights Watch interviewed 45 Sirte residents for its report, which paints a vivid picture of how ISIS controls every aspect of life. ISIS has benefited from the chaos in Libya. A view of buildings ravaged by fighting in Sirte, Libya, in 2011. Crucifixions, executions, food shortages, forced prayer: These are features of life in the ISIS stronghold of Sirte, Libya, according to a new Human Rights Watch report. ISIS has controlled Sirte since last August. The central Mediterranean city is the hometown of Libya's former dictator Moammar Gadhafi and the site of some of the final battles of Libya's 2011 revolution. Human Rights Watch interviewed 45 residents of the city for its report, which paints a vivid picture of how ISIS controls every aspect of life, "down to the length of men's trousers, the breadth and color of women's gowns, and the instruction students receive in state schools." "We were filled with hope. Then step by step, Daesh [ISIS] took over. Now we feel we are cursed," one resident who fled Sirte told HRW. The city drew international attention in February 2015 when an ISIS video showed its fighters decapitating 21 men, almost all of them Egyptian Coptic Christians who were kidnapped in Sirte. In its new report, Human Rights Watch says it documented 28 other killings by ISIS in the Sirte area between mid-February 2015 and mid-February 2016. These amounted to "scenes of horror — public beheadings, corpses in orange jumpsuits hanging from scaffolding in what they referred to as 'crucifixions,' and masked fighters snatching men from their beds at night." That's in addition to "scores" of rival fighters disappeared by ISIS and presumed dead, the rights group says. For example, Human Rights Watch documented the execution of 23-year-old Amjad bin Sasi, who was accused of "insulting God." According to two family members, "ISIS enforcers burst into bin Sasi's house and hauled him to jail for allegedly naming God while swearing as he brawled with a neighbor earlier that day." He was brought before an ISIS judge and, three days later, shot dead in a public execution. His family says they haven't received his body, because ISIS will not allow them to bury him in a Muslim cemetery — they consider him a non-believer. The report details other cases where people have been taken from their beds at night and executed for being suspected "spies." Others were killed for preaching against ISIS or for conducting "sorcery." ISIS keeps a tight grip on the day-to-day activities of the Sirte population, according to the report. Residents say morality police "aided by informants patrolled the streets threatening, fining or flogging men for smoking, listening to music, or failing to ensure their wives and sisters were covered head to toe in loose black abayas, and hauling boys and men into mosques for prayer and mandatory religious education classes." The militant group has taken control of the city's schools, port, air base, power station and radio station, the report says. At least two-thirds of Sirte's population has fled since ISIS took over. "The group is failing to provide basic necessities to the local population. Instead it is diverting food, medicine, fuel, and cash, along with homes it confiscated from residents who fled, to as many as 1,800 fighters, police and functionaries it has amassed in the city." 
According to the report, ISIS fighters have destroyed "at least 20 homes belonging to fighters from prominent local families who joined militias trying to oust the group." And five times a day during prayer times, ISIS enforcers comb the streets, "herding residents into mosques and ordering merchants to close their shops from the start of the prayers of al-Asr in the mid-afternoon until the end of the prayer of al-Isha after nightfall, current and displaced residents said." They say the punishment for not attending is flogging, according to the report. Additionally, women are not allowed to leave their homes without a male relative and must be fully covered according to the ISIS guidelines. Likewise, "shop owners are whipped and their shops are closed if they receive an unaccompanied woman." Chaos has erupted in Libya since the ouster of Gadhafi, with two separate governments each claiming authority. A U.N.-brokered agreement signed in December was meant to unify the two into one. But as The Associated Press reports, the agreement only has "patchy support" and the government's head Fayez Serraj "has been ensconced in a naval base in Tripoli since his return to the country in March, unable to exercise much power beyond his office walls — much like his predecessors." ISIS, which first appeared in Libya in 2014, has benefited from Libya's ongoing chaos and now has multiple local affiliates. Human Rights Watch says the militant group controls some 120 miles of Mediterranean coastline, where Sirte lies. As NPR's Michele Kelemen reported, the U.S. is considering sending weapons to the U.N.-backed government. John Kirby, spokesman for Secretary of State John Kerry, says "it will take time to make sure weapons get to the right groups in Libya and don't just add to the chaos," Michele reported.
American Horror Story: Roanoke — so far, only the AHS social media accounts have confirmed that title; creator Ryan Murphy remains frustratingly mum — is shrouded in mystery. The premise seems simple enough: It's a documentary-style story about a couple, Shelby and Matt (Sarah Paulson and Cuba Gooding Jr. in the dramatic reenactments; Lily Rabe and André Holland in documentary interviews) who are seemingly being haunted by evil spirits from the lost Roanoke colony. (Again, these supposedly nefarious spirits are being played by actors in the reenactments.) Of course that hasn't stopped fans — myself included — from trying to overanalyze everything looking for clues. There has to be something more to Season 6, doesn't there? One of the most intriguing aspects of this season's documentary format is the addition of two unreliable narrators. Shelby and Matt have overlapping stories, and sometimes what they're saying doesn't add up. For example, Shelby (Rabe) repeatedly says she was uncomfortable living in the country, but in the reenactments, Shelby (Paulson) doesn't exhibit any trepidation — until teeth start falling from the sky. Also, Shelby (Paulson) and Lee (Angela Bassett) were apparently trapped downstairs for "20 to 30 minutes," but we only saw a fraction of their encounter when they watched the Piggy-Man tape. Did something else happen between the two? Are they telling the truth? More important, what even is the truth? Does Shelby and Matt's Roanoke nightmare even exist?! These are much broader, more existential questions than we're used to asking from AHS. There's no Rubber Man to identify, no Supreme to be named — just a thin line between reality and retelling. And it's driving us all crazy. Why? Because we've come to expect something more intricate. Here's the season as we know it to be now: Shelby (Rabe) and Matt (Holland) are participating in a true crime docu-series called My Roanoke Nightmare, in which they're giving first-person, albeit unreliable, accounts of a supernatural series of events that occurred at some point in the past. Actors played by Paulson and Gooding Jr. — as well as Bassett, Kathy Bates, Wes Bentley, and an unrecognizable Lady Gaga — reenact Shelby and Matt's accounts for television. Shelby and Matt are not ghosts (at least, not yet). There's no breaking the fourth wall (again, not yet). Paulson and Gooding Jr. are only actors (for now). There's no time travel, aliens, or Nazis. Roanoke is completely grounded in Shelby and Matt's reality — or at least what we perceive to be their reality. It's entirely possible that we're being manipulated to accept Matt and Shelby's story as fact when it could all just be an artifice constructed by reality television producers. It could even be Billie Dean Howard's new show. After all, Howard, the medium Paulson played in Season 1's Murder House and again in Hotel, is not only familiar with the Roanoke story, but she's also working in television now. (Howard's involvement in My Roanoke Nightmare would be a satisfying way to tie the seasons together.) However, at this point in the season, there's no way of knowing whether Matt and Shelby's story is part of something bigger. One popular fan theory suggests that this season of American Horror Story will feature multiple stories. It's possible that My Roanoke Nightmare is only the first chapter in the larger, potentially fractured tale Murphy intends to tell. But again, there's no way of knowing that after one episode.
Right now, there are three narratives that exist this season — Shelby's, Matt's, and Lee's. Everything else is all pretense. For the first time in our collective American Horror Story viewing experience, we're confronted with the most troubling realization of them all: Maybe there's nothing to actually pick apart. Honestly, that's kind of thrilling in its own right.
“I’m going to talk about what surveillance is possible, the technology that has happened under Obama, but also what’s possible with encryption tools,” he said. Other speakers will delve into issues including reproductive rights, immigration and LGBTQ visibility. Topics include “Appalachia in the Current Political Landscape,” “White Supremacy,” and “Threats to International Environmental Policies.” Speakers come from faculty and graduate programs from across the university. The teach-in will be noon to 7 p.m. Wednesday at the Blazer Dining Hall. There is no involvement by the UK administration, spokesman Jay Blanton said, but funding was provided through the UK Student Government Association. For more information, go to Facebook.com/events/228919590887989 or email [email protected].
#pragma once
#include "../../JObject.hpp"

namespace android::app { class ActivityManager_RecentTaskInfo; }
namespace android::content { class Context; }
namespace android::content { class Intent; }
namespace android::os { class Bundle; }

namespace android::app {

class ActivityManager_AppTask : public JObject
{
public:
    // Fields

    // QJniObject forward
    template<typename ...Ts>
    explicit ActivityManager_AppTask(const char *className, const char *sig, Ts...agv)
        : JObject(className, sig, std::forward<Ts>(agv)...) {}
    ActivityManager_AppTask(QJniObject obj);

    // Constructors

    // Methods
    void finishAndRemoveTask() const;
    android::app::ActivityManager_RecentTaskInfo getTaskInfo() const;
    void moveToFront() const;
    void setExcludeFromRecents(jboolean arg0) const;
    void startActivity(android::content::Context arg0, android::content::Intent arg1, android::os::Bundle arg2) const;
};

} // namespace android::app
def determinant(self):
    # The determinant is only defined for square matrices. `exc` is assumed to be
    # the module's error definitions and `self.subset` a row/column slicing helper.
    if self.m != self.n:
        raise exc.LinearAlgebraError(
            "cannot calculate the determinant of a non-square matrix")
    # Base case: the determinant of a 1x1 matrix is its single entry.
    if self.m == 1:
        return self[0, 0]
    # Laplace (cofactor) expansion along the first row.
    return sum(
        self[0, j]
        * (-1 if j % 2 else 1)
        * self.subset([i for i in range(1, self.m)],
                      [k for k in range(self.n) if k != j]).determinant()
        for j in range(self.n)
    )
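For reference, the recursion above is the standard Laplace (cofactor) expansion along the first row; with the 0-based indexing used in the code it reads

det(A) = \sum_{j=0}^{n-1} (-1)^{j} \, a_{0j} \, det(M_{0j})

where M_{0j} is the submatrix obtained by deleting row 0 and column j, and the determinant of a 1x1 matrix is its single entry.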
Web-based applications are frequently made available to one or more user systems over the Internet or another suitable network. For example, user systems may use a web browser to access one or more web-based applications that are running on a server system. Interaction between the browser and a web-based application may be considered a session, such as a hypertext transfer protocol (HTTP) session. Information about the session, which can be referred to as session state information, may be stored at the server system. Each application on the server system with which the browser interacts may be associated with its own distinct session (i.e., between that application and the browser) with its own session state information. A first web application with which the browser is interacting and for which a session is established may transfer control of the browser interaction to a second web application. This transfer may occur for any number of reasons. However, it may be difficult or impossible for the first web application to share the session state information with the second web application.
Leveraging modern DNA assembly techniques for rapid, markerless genome modification

Abstract
The ability to alter the genomic material of a prokaryotic cell is necessary for experiments designed to define the biology of the organism. In addition, the production of biomolecules may be significantly improved by application of engineered prokaryotic host cells. Furthermore, in the age of synthetic biology, speed and efficiency are key factors when choosing a method for genome alteration. To address these needs, we have developed a method for modification of the Escherichia coli genome named FAST-GE, for Fast Assembly-mediated Scarless Targeted Genome Editing. Traditional cloning steps such as plasmid transformation, propagation and isolation were eliminated. Instead, we developed a DNA assembly-based approach for generating scarless strain modifications, which may include point mutations, deletions and gene replacements, within 48 h after the receipt of polymerase chain reaction primers. The protocol uses established, but optimized, genome modification components such as the I-SceI endonuclease to improve recombination efficiency and SacB as a counter-selection mechanism. All DNA-encoded components are assembled into a single allele-exchange vector named pDEL. We were able to rapidly modify the genomes of both E. coli B and K-12 strains with high efficiency. In principle, the method may be applied to other prokaryotic organisms capable of circular dsDNA uptake and homologous recombination.

Introduction
The ability to manipulate the genetic material of a cell has proven to be invaluable for understanding the organism's biology. The basis of our understanding of many essential biological processes has been enabled by the ability to perform gene deletion and complementation studies. Furthermore, countless strains have been specifically engineered to facilitate the production of valuable proteins requiring modified cellular environments or helper factors. A number of the genome modification methods used today were developed decades ago and have had only minor updates as new techniques became available. Most of the current methods for modifying common lab organisms such as Escherichia coli fall into two categories. The first set of methods is driven by homologous recombination and is reliant on proteins such as endogenous RecA. A primary advantage of methods that utilize the cell's own recombination machinery is that helper plasmids are not necessary. Homologous recombination results in a single crossover event between the incoming DNA and the chromosome, resulting in a tandem arrangement of the wild-type gene and mutant gene. A second crossover event then potentially leaves the mutation of interest in the chromosome. These protocols (labelled as 'Classic allelic-exchange' in Fig. 1) generally do not leave a scar in the genome upon removal of the selection marker. However, these methods are often cumbersome, taking as much as a week for a single modification. A second set of commonly used methods is derived from the phage lambda recombination system, most famously described by Datsenko and Wanner, and utilizes the expression of exogenous phage proteins to mediate the recombination steps.
The lambda Red system allows for faster integration and modification of a desired locus using a linear PCR product that contains short segments of homology, but this method requires the presence of a helper plasmid driving the expression of the phage genes and is less efficient for the replacement of large pieces of DNA. Though more rapid than classic methods, optimized lambda Red techniques still require approximately 4 days from start to finish, counting the time it takes to initially transform and subsequently cure the helper plasmids. The lambda Red system has also been applied for DNA oligonucleotide-mediated genome modification; however, this method will not be discussed further as it is primarily limited to the generation of point mutations or codon substitutions. Recently, the CRISPR/Cas9 system has been adapted for modifying genomes in multiple prokaryotic organisms [18, …]. Unfortunately, to take full advantage of this system, the organism must be highly recombinogenic, which E. coli is not. In general, the application of the CRISPR/Cas9 system in E. coli requires the use of the lambda Red system for the initial allele exchange step, and the Cas9 nuclease is then employed to eliminate the WT allele. This method is effective, but the overall combined use of lambda proteins and Cas9 is subject to the same drawbacks as other Red-mediated methods [18, …]. The exception is a recent report that employed a Cas9 nickase to initiate the generation of large genomic deletions. The Cas9 nickase method is dependent upon naturally occurring (or pre-engineered) repetitive DNA elements in order for intervening DNA to be eliminated by homologous recombination. Due to its ability to target multiple loci utilizing different guide RNAs, the CRISPR/Cas9 system holds promise as a way to multiplex genome editing; however, currently the ability to simultaneously introduce changes into multiple locations of the E. coli chromosome is limited. For an overview of current genome engineering methods, see Fig. 1.
Figure 1 legend (fragment; the beginning was not recovered): ...to help remove the antibiotic resistance marker from the chromosome. When utilizing CRISPR/Cas9, the first helper plasmid also contains the red exo, beta and gamma genes in addition to Cas9. The second helper plasmid contains the guide RNA, used to eliminate any WT cells which did not undergo the desired recombination. Regardless of the method of construction, all modifications should be sequenced in order to verify the presence of the desired modification.

Materials and methods

Bacterial strains, plasmids and enzymes
E. coli strain NEB10-beta (New England BioLabs, Ipswich, MA) was used for all cloning steps to create the pDEL vector. Plasmids containing the R6K origin were replicated in BW23473. E. coli strains NEB Express, T7 Express and ER2744 (New England BioLabs, Ipswich, MA) were utilized in genome modification experiments. All enzymes were from New England BioLabs (Ipswich, MA). Chemicals were purchased from Sigma-Aldrich (St. Louis, MO). Primers and gBlock® synthetic DNAs were synthesized by IDT (Coralville, IA). The DasherGFP gene was obtained from DNA2.0 (Menlo Park, CA). Kanamycin and ampicillin were used at 40 µg/ml and 100 µg/ml, respectively, unless otherwise stated. The rhaBAD and lac promoters were induced with 0.2% rhamnose and 0.5 mM IPTG, respectively. Counter-selection plates contained 5% (w/v) sucrose on plates containing 5 g/l of yeast extract, 10 g/l of tryptone and 7.5 g/l of agar, in addition to inducers as described above. Routine cell growth was performed at 37 °C in lysogeny broth medium supplemented with 0.1% glucose. Colony PCR screens were performed using Quick-Load Taq 2X Master Mix, while Q5 Hot Start High Fidelity 2X Master Mix (New England BioLabs, Ipswich, MA) was used to PCR-amplify genomic DNA for sequence verification.

Construction of pDEL
The open reading frame for the sacB gene was amplified from pRE112 and inserted into pMAL-c5X using NEBuilder® HiFi DNA assembly to create pMAL-sacB. Subsequently, the sacB gene was amplified together with a lac promoter. Low-level basal expression of sacB is not harmful in the absence of sucrose; as such, the lac repressor is not required. If the strain encodes lacI, then IPTG is necessary during the resolution step. Plasmid pKD4 was used as the source of the kanamycin resistance gene and associated promoter. The R6K origin of replication was amplified from pCD13PKS. The region containing the I-SceI endonuclease site as well as the I-SceI gene under the control of the rhaBAD promoter was ordered as a gBlock®; the sequence is shown in Supplementary Table S1. The rhamnose promoter was chosen as it should be tightly repressed in the absence of the inducer, but will also generate sufficient levels of I-SceI transcripts in the majority of E. coli strains. To generate the pDEL vector, the following PCR fragments were assembled using an NEBuilder® HiFi DNA assembly reaction: the R6K origin, the kanamycin resistance gene, the I-SceI gBlock® and the Plac-sacB fragment. The final circular vector is shown in Supplementary Fig. S1 and the entire pDEL vector sequence is included in the Supplementary Material.

Genome modification experiments
For genome modification experiments, electrocompetent E. coli were prepared as follows: cells were grown in 3 ml cultures of LB until late exponential phase (OD600 ≈ 0.8). Cells were harvested by centrifugation and washed 3 times with 500 µl of ice-cold 10% glycerol. Finally, cells were resuspended in 50 µl of 10% glycerol.
Genome modification constructs were made by assembling upstream and downstream homology regions (Fig. 2, boxes A and C, respectively), created using site-specific primers containing regions of microhomology to each other as well as to a PCR product of pDEL amplified with primers located between the R6K origin of replication and the I-SceI gene. In cases when the desired modification was a gene replacement, the PCR product of the new gene fragment was assembled between the upstream and downstream regions of homology. Typical assembly reactions contained 250-300 ng of total DNA, at the manufacturer's recommended ratios, in a total volume of 10 µl. Sequencing of individual clones generated from each of the HiFi DNA assemblies confirmed a very low error rate. Results are shown in Supplementary Table S2. For transformation, 2 µl of assembled DNA was mixed with electrocompetent cells and electroporated in a 1 mm cuvette according to the manufacturer's instructions (BioRad). Electroporated cells were resuspended in 950 µl of SOC and allowed to recover at 37 °C for 1.5-2 h. Following the recovery step, 100 and 900 µl aliquots were plated on fresh LB-Kan plates and incubated at 37 °C overnight. Colonies from the LB-Kan plate were picked, transferred into a liquid LB-Kan culture and simultaneously screened by colony PCR to confirm the location of the initial recombination event. Cultures containing the desired integration were allowed to reach early log phase (approx. OD600 0.2), at which point they were diluted 1:500 into fresh LB containing 100 mM IPTG and 0.2% w/v rhamnose but lacking kanamycin. Post induction, cultures were allowed to grow for an additional 3 h at 37 °C, after which they were plated on counter-selection plates containing 5% sucrose, 100 mM IPTG and 0.2% rhamnose. Following an overnight incubation at 37 °C, colonies from the counter-selection plates were picked and grown in LB to be analysed by PCR with locus-specific primers for removal of the integration cassette. In order to confirm the desired changes, the PCR products were column purified and sequenced using the Sanger method.

Figure 2 legend (fragment; the beginning was not recovered): ...to each other as well as to the linearized deletion cassette (amplified from pDEL). If region B is being mutated or replaced, as opposed to deleted, an additional fragment containing that modification must be included and will be assembled between A and C. Linear DNA fragments are subsequently assembled into a circular construct using NEBuilder® HiFi DNA assembly, transformed into electrocompetent cells, and the initial integration event is selected for by kanamycin resistance. The integration location is verified by PCR, and colonies containing the desired insertion are subcultured into media containing the inducers for I-SceI and SacB, and subsequently on counter-selection plates with the same inducers. Expression of I-SceI promotes homologous recombination in response to a double-strand break in the genome, and SacB is used to select against cells that failed to remove the integration cassette. The final colonies are screened by PCR for the presence of the desired genome modification.

Composition of the insertion cassette
When designing the pDEL vector, several critical factors were considered. First, the vector size was maintained as compact as possible in order to optimize amplification yield and fidelity. This led us to critically evaluate the components currently used with other genome modification approaches.
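To illustrate the construct layout described above, the following Python sketch stitches an upstream homology arm (A), an optional replacement fragment (B), a downstream arm (C) and a linearized pDEL backbone into one circular sequence, checking that adjacent fragments share the short overlaps an assembly reaction needs. This is not the authors' software; the sequences, the 20-bp overlap length and the fragment sizes are hypothetical placeholders.

OVERLAP = 20

def assemble_circular(fragments):
    # Join overlapping fragments head-to-tail, then verify the closing junction.
    joined = fragments[0]
    for frag in fragments[1:]:
        assert joined[-OVERLAP:] == frag[:OVERLAP], "junction overlap mismatch"
        joined += frag[OVERLAP:]
    assert joined[-OVERLAP:] == fragments[0][:OVERLAP], "circularization mismatch"
    return joined[:-OVERLAP]   # drop the duplicated closing overlap

# 20-bp junctions shared by neighbouring fragments (placeholder sequences)
j1, j2, j3, j4 = "ACGT" * 5, "TGCA" * 5, "GATC" * 5, "CTAG" * 5

up_A   = j4 + "A" * 500 + j1     # upstream homology arm (placeholder bases)
new_B  = j1 + "B" * 300 + j2     # optional replacement fragment
down_C = j2 + "C" * 500 + j3     # downstream homology arm
pdel   = j3 + "P" * 2000 + j4    # linearized pDEL backbone (kanR, sacB, I-SceI)

construct = assemble_circular([up_A, new_B, down_C, pdel])
print(f"assembled circular construct: {len(construct)} bp")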
We determined that at the minimum an antibiotic selection marker and a counter-selection marker were required. The frequency of homologous recombination in wild-type E. coli is low; however, recombination can be increased locally by the presence of a double-strand break. Thus, an I-SceI restriction site and the corresponding endonuclease gene were added to the construct to encourage homologous recombination by inducing a double-strand break. Additionally, we customized the promoters of the sacB and I-SceI genes to improve the overall efficiency of the protocol. The final composition of the deletion cassette is shown in Fig. 2. Kanamycin resistance was chosen as the primary selection marker, as growth on kanamycin plates has proven to be a consistent indicator of single-copy genome integration. For counter-selection, SacB was chosen as its activity and toxicity in E. coli have been well documented over the past several decades. The sacB gene originates from B. subtilis and encodes a periplasmic levansucrase. The exact mechanism of toxicity by SacB is still not well understood, but it is thought that periplasmic SacB creates large levan polymers when cells are grown in the presence of sucrose. In order to optimize sacB expression and periplasmic localization, we replaced the native Bacillus promoter with a lac promoter. Counter-selection on sucrose proved to be very robust, as a large majority of surviving colonies were free of the sacB gene (Supplementary Table S3). Inclusion of the I-SceI homing endonuclease as well as the cognate recognition site was based on several recent papers demonstrating the increased recombination frequency during genome alteration upon generation of a unique double-strand break in the E. coli chromosome by I-SceI. In order to augment I-SceI cleavage efficiency, two terminators were introduced immediately upstream of the I-SceI cut site, as active transcription through the recognition sequence was recently reported to lower I-SceI cleavage efficiency. In addition to the components actively involved in the recombination process, we chose to include the R6K origin of replication on the pDEL plasmid. The R6K origin is unique in that it requires the presence of the pir gene product for the plasmid to replicate; common E. coli strains lack this gene. The conditional origin was included as an alternative strategy for cases where plasmid isolation is a more practical first step. In our experience, the DNA assembly products were successfully transformed and integrated directly in all strains tested.

Rapid, targeted genome modification
Three changes were made to non-essential chromosomal regions: deletion of the lac operon (Fig. 3A), a point mutation in the lacZ gene (Supplementary Fig. S1A), and insertion of the T7 RNA polymerase gene (Fig. 4). Changes were introduced into both E. coli B and K-12 backgrounds. These modifications were chosen to demonstrate the capability of the FAST-GE method to easily generate a large deletion, a point mutation and an insertion, respectively. The detailed illustration of the construct used to generate the deletion of the lac operon in ER2523 (an E. coli B derivative) is shown in Fig. 3A. Electrocompetent ER2523 cells were transformed with 150 ng of DNA from the assembly reaction. This protocol was sufficient to result in several correctly integrated transformants (Fig. 3B). Two different primer pairs are used to identify strains with the pDEL construct integrated correctly at the desired locus.
All eight colonies analysed contained a pDEL integration at the lac locus when combining the results of the F1-R1 and F2-R2 PCR analyses. Each primer pair has two potential products depending on whether recombination occurred via the 5′ flank homology region or the 3′ flank homology region. In each diagnostic PCR reaction, the extension time is set to amplify the shorter of the two possible PCR products. We found that this experimental approach was acceptable, as shown in Fig. 3B. Recombinant number one was chosen for the resolution step. After reaching noticeable turbidity when grown in the presence of kanamycin (OD600 approx. 0.2), the culture from recombinant number one was diluted 1:500 into fresh LB medium lacking kanamycin but containing rhamnose and IPTG, for induction of I-SceI and sacB, respectively. After 2 h of incubation in antibiotic-free medium, dilutions were plated on sucrose agar plates. Following counter-selection on sucrose, final resolution and removal of the integration cassette was highly effective. Table 1 presents a summary of the process for deletion of the lac operon. The removal of the counter-selection cassette relies upon homologous recombination and, depending on the recombination site, the process can lead to one of two outcomes. First, the entire integration cassette may be removed such that the original genome sequence is restored (henceforth referred to as WT sequence). Alternatively, the integration cassette may be removed resulting in the desired genomic modification (Fig. 3A). Blue colony colour (X-gal conversion) was used as an indicator of lac operon function, and Fig. 3C shows that 9 of 16 resolved clones had regained β-galactosidase activity, suggesting resolution to WT sequence. PCR analysis of the same clones showed that six of the recombinants had the desired deletion of the lac operon. Several of the resulting 1.3 kb F1-R2 PCR products were sequenced to confirm that no unintended mutations were introduced. The described method of genome engineering is not biased according to the modification type. To demonstrate the ability of the FAST-GE method to introduce large insertions in the E. coli genome, the T7 RNA polymerase (T7 gene 1) was inserted into the chromosome of MC1061, a common E. coli K-12 strain. T7 gene 1 may be inserted into the lac locus to facilitate IPTG-inducible expression of the T7 RNA polymerase for the purpose of recombinant protein expression. Accordingly, we chose to recreate the lacZ-T7gene1 operon fusion (approx. 3 kb), as found in T7 Express cells, between the yahF and mhpT genes (Fig. 4A). The integration and counter-selection steps were followed by colony PCR analysis as described above (Fig. 4). As with other experiments, we were able to obtain desired clones in 48 h from the beginning of the process. Using a similar procedure, we were also able to replace a single nucleotide in order to create an E462A active site substitution within LacZ (Supplementary Fig. S1).

Figure 4 legend (fragment; the beginning was not recovered): ...The expected PCR product size for successful T7 gene 1 insertion is 5.2 kb. Resolution to the WT yahF-mhpT locus yields a PCR product of 2 kb. Lack of a PCR product would suggest that the integration cassette was not successfully removed from the genome.

The ratio of isolated colonies reverting to WT sequence as opposed to the desired modification varied depending on the modification and the integration locus (Table 1).
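The diagnostic readout quoted in the recovered Figure 4 legend above can be summarized in a few lines: a band near 5.2 kb indicates the T7 gene 1 insertion, a band near 2 kb indicates reversion to the wild-type yahF-mhpT locus, and no band suggests the integration cassette is still present. The tolerance window and the example band sizes below are assumptions for illustration only.

def classify_colony(band_kb, tol=0.3):
    # Interpret a single locus-specific colony-PCR product size (in kb).
    if band_kb is None:
        return "no product: integration cassette likely still present"
    if abs(band_kb - 5.2) <= tol:
        return "desired insertion (lacZ-T7gene1 fusion)"
    if abs(band_kb - 2.0) <= tol:
        return "reverted to wild-type locus"
    return "unexpected product size: sequence to investigate"

for colony, band in [("c1", 5.1), ("c2", 2.05), ("c3", None), ("c4", 3.4)]:
    print(colony, "->", classify_colony(band))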
Working with essential genes

The method described in this article is also useful for determining whether a gene is essential for growth of the bacterium or whether a specific mutation destroys function. Since a fully functional copy of the target gene is maintained during the integration event, integration should always be possible, as the present method separates the integration and resolution steps. If all of the resolved colonies in a sufficiently large sample have reverted to WT sequence, then one can reasonably assume that the desired modification is not tolerated by the cell (Fig. 5). In contrast, in lambda Red-derived methods, the absence of viable transformants does not necessarily indicate that a gene is essential or that the modification being attempted is not tolerated by the cell. An additional advantage of this method is that resolution can be carried out under different culture conditions, such as varied temperatures or growth medium compositions. Thus, one can determine whether a gene's activity is essential for survival under varied experimental conditions. When we attempted to fuse a fluorescent protein (DasherGFP) to the C-terminus of the ribosomal L9 subunit, encoded by the rplI gene, colonies containing the initial crossover event were isolated (Fig. 5B). Of the four colonies tested, only one returned the expected fragment size for the two diagnostic PCR reactions (Fig. 5B). The colony showing the expected PCR pattern was chosen for resolution. However, none of the colonies tested after sucrose selection contained the desired gene fusion (Table 1; Fig. 5C). This led us to conclude that this fusion protein is not tolerated by E. coli due to disruption of ribosome function.

Best practices for genome modification success

While optimizing the FAST-GE protocol, we noted several points where special care should be taken to achieve the best results, and we offer several guidelines to improve the likelihood of success for first-time users. First, as with other homology-driven genome modification methods, the size of the flanking homology regions will affect the efficiency of the initial recombination event. Following standard practice for RecA-mediated homologous recombination, we recommend designing homology regions of at least 500 bases on both sides of the desired modification (a short design sketch illustrating this guideline follows at the end of this section). Secondly, researchers may choose to transform the assembled genome modification construct into a pir strain in parallel with transformation into the desired host. This optional extra step ensures that a suicide plasmid is available to use in case direct transformation of the assembly reaction does not yield colonies, or if electroporation is not an option for transformation. Thirdly, if multiple changes to the genome are desired, we suggest that all of the necessary genome modification constructs be assembled in parallel in a pir host; after the first modification, the purified suicide plasmids can be used to perform subsequent modifications, simplifying and expediting generation of the desired strain. Fourthly, when attempting direct transformations, we strongly recommend that a high concentration of DNA be used for the DNA assembly reaction: up to 250-300 ng of DNA can be assembled in a 10 µl reaction, with subsequent transformation of 2 µl of this reaction. If working with strains known to have low competence, larger assemblies on the scale of 500-1000 ng of DNA in a 50 µl reaction may be necessary; in that case, concentrate the DNA before the transformation step.
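As a concrete illustration of the homology-arm guideline above, the short sketch below pulls roughly 500-base arms flanking a target locus out of a genome sequence string. It is only an editorial design aid, not part of the published FAST-GE protocol, and the 0-based, half-open coordinate convention is an assumption.

```python
def homology_arms(genome_seq: str, del_start: int, del_end: int, arm_len: int = 500):
    """Return (upstream_arm, downstream_arm) flanking the region to be deleted.

    genome_seq -- chromosome sequence as a plain string (0-based indexing assumed)
    del_start  -- index of the first base of the region to remove
    del_end    -- index one past the last base of the region to remove
    """
    if del_start < arm_len or del_end + arm_len > len(genome_seq):
        raise ValueError("target too close to the sequence ends for the requested arm length")
    upstream = genome_seq[del_start - arm_len:del_start]
    downstream = genome_seq[del_end:del_end + arm_len]
    return upstream, downstream
```

The two returned strings correspond to the flanking homology regions that would be amplified and joined to the deletion cassette in the assembly reaction.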
Discussion

For decades, genome editing in bacteria has relied upon the use of plasmids, either as a template for recombination or as the source of helper proteins to facilitate the recombination event. The reliance on replicative plasmids requires an initial transformation step to introduce them into the host, as well as a curing procedure to remove them after the genome modification protocol is completed. In some methods, plasmid curing is not straightforward. In allele exchange methods where the plasmid carries the WT gene after resolution, a failed plasmid curing leaves the researcher wondering whether the WT gene might be essential for viability. Some allele exchange methods are particularly time-consuming and may require 2-3 weeks from construct design to verification of the final strain. In this study, we demonstrate the ability to modify a bacterial genome without the need for a replicative plasmid, thus drastically reducing protocol time. This is accomplished by leveraging recent advances in DNA assembly technologies in order to transform the cells with a non-replicative, circularized piece of DNA. The assembled construct may be immediately transformed into the desired strain without the need for sequence verification, owing to the high fidelity of the assembly process. Importantly, only a small number of resolved strains (typically fewer than eight) need to be analysed by focused sequencing reactions to verify the genome alteration as well as the fidelity of the assembly process. Eliminating the requirement for helper plasmid(s) enables the generation of markerless genomic modifications within 48 h of receipt of the necessary primers. The minimum requirements for this protocol are fragments of DNA sequence flanking the site of the desired modification and linear DNA encoding the pDEL deletion cassette, with all pieces containing small homology regions sufficient to allow assembly via the NEBuilder HiFi DNA assembly method (or other high-efficiency DNA assembly methods). A high-fidelity DNA polymerase should be employed to obtain the necessary DNA fragments, and bacterial colonies may serve as the source of chromosomal template DNA to generate fragments for the DNA assembly reaction. As a result, time-consuming plasmid or genomic DNA purification procedures may be bypassed. To our knowledge, this protocol is at least 2 days faster, from start to finish, than any other method currently described. In addition, the FAST-GE method is accessible to anyone familiar with PCR and electroporation-based transformation protocols. While we were able to generate all of our desired mutants in both K-12 and B derivatives using direct transformation of the assembly reaction, we understand that some researchers may prefer to use a purified plasmid when working with difficult-to-transform strains. For those preferring to work with an isolated assembly clone, we have included the conditional R6K origin of replication on the pDEL vector, which allows transformation of the assembly reaction into a pir host for multi-copy replication. In the drive to improve the overall speed of the protocol, versatility was not compromised: insertions, deletions and point mutations are all equally possible. The only limitation of this method (and of many other methods) is that the strain to be modified must be capable of homologous recombination. For example, typical cloning strains (e.g. DH5α) containing a recA mutation are not suitable for modification by this method.
Colonies containing the desired initial crossover event are easy to identify by colony PCR and require screening of at most eight colonies, which is on par with or better than any other protocol reported to date. Additionally, no difference in the frequency of integration was apparent regardless of whether the target gene was essential for E. coli growth. This combination of efficiency and speed is unique and is of especially high value to researchers interested in making multiple genome modifications.

Author Contributions

I.B.T. and J.C.S. designed research. I.B.T. performed experiments. I.B.T. and J.C.S. analysed data and wrote the paper.
<gh_stars>1-10 /* * Copyright (c) Facebook, Inc. and its affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.facebook.litho; import android.graphics.Rect; import android.os.Looper; import androidx.annotation.NonNull; import androidx.annotation.Nullable; import androidx.annotation.VisibleForTesting; import java.util.ArrayList; import java.util.Collections; import java.util.List; /** * A {@link ComponentTree} for testing purposes. Leverages test classes to create component layouts * and exposes additional information useful for testing. */ public class TestComponentTree extends ComponentTree { public static Builder create(ComponentContext context, Component root) { return new Builder(context).withRoot(root); } private TestComponentTree(ComponentTree.Builder builder) { super(builder); } public List<Component> getSubComponents() { return extractSubComponents(getMainThreadLayoutState().getDiffTree()); } @Override protected LayoutState calculateLayoutState( ComponentContext context, Component root, int widthSpec, int heightSpec, boolean diffingEnabled, @Nullable LayoutState previousLayoutState, TreeProps treeProps, @LayoutState.CalculateLayoutSource int source, String extraAttribution) { return LayoutState.calculate( new TestComponentContext( ComponentContext.withComponentTree(new TestComponentContext(context), this), new StateHandler()), root, null, mId, widthSpec, heightSpec, diffingEnabled, previousLayoutState, source, extraAttribution); } @VisibleForTesting @Override public void setLithoView(@NonNull LithoView view) { super.setLithoView(view); } @VisibleForTesting @Override public void mountComponent(@Nullable Rect currentVisibleArea, boolean processVisibilityOutputs) { super.mountComponent(currentVisibleArea, processVisibilityOutputs); } @VisibleForTesting @Override public void measure(int widthSpec, int heightSpec, int[] measureOutput, boolean forceLayout) { super.measure(widthSpec, heightSpec, measureOutput, forceLayout); } @VisibleForTesting @Override public void attach() { super.attach(); } private static List<Component> extractSubComponents(DiffNode root) { if (root == null) { return Collections.emptyList(); } final List<Component> output = new ArrayList<>(); if (root.getChildCount() == 0) { if (root.getComponent() != null && root.getComponent() instanceof TestComponent) { TestComponent testSubcomponent = (TestComponent) root.getComponent(); output.add(testSubcomponent.getWrappedComponent()); } return output; } for (DiffNode child : root.getChildren()) { output.addAll(extractSubComponents(child)); } return output; } public static InternalNode resolveImmediateSubtree( ComponentContext c, Component component, int widthSpec, int heightSpec) { c.setLayoutStateContextForTesting(); InternalNode node = TestLayoutState.createAndMeasureTreeForComponent(c, component, widthSpec, heightSpec); return node; } public static List<Component> extractImmediateSubComponents(InternalNode root) { if (root == null || root == ComponentContext.NULL_LAYOUT) { return Collections.emptyList(); } final 
List<Component> output = new ArrayList<>(); if (root.getChildCount() == 0) { if (root.getTailComponent() != null && root.getTailComponent() instanceof TestComponent) { TestComponent testSubcomponent = (TestComponent) root.getTailComponent(); output.add(testSubcomponent.getWrappedComponent()); } return output; } for (int i = 0; i < root.getChildCount(); i++) { InternalNode child = root.getChildAt(i); output.addAll(extractImmediateSubComponents(child)); } return output; } public static List<Component> extractImmediateSubComponents( ComponentContext context, Component component, int widthSpec, int heightSpec) { NodeConfig.sInternalNodeFactory = new TestInternalNodeFactory(); ComponentTree tree = ComponentTree.create(context).build(); ComponentContext c = new TestComponentContext( ComponentContext.withComponentTree(new TestComponentContext(context), tree), new StateHandler()); InternalNode root = resolveImmediateSubtree(c, component, widthSpec, heightSpec); NodeConfig.sInternalNodeFactory = null; return extractImmediateSubComponents(root); } public static class Builder extends ComponentTree.Builder { private Builder(ComponentContext context) { super(context); } @Override public Builder withRoot(Component root) { return (Builder) super.withRoot(root); } @Override public Builder incrementalMount(boolean isEnabled) { return (Builder) super.incrementalMount(isEnabled); } @Override public Builder layoutDiffing(boolean enabled) { return (Builder) super.layoutDiffing(enabled); } @Override public Builder layoutThreadLooper(Looper looper) { return (Builder) super.layoutThreadLooper(looper); } @Override public TestComponentTree build() { return new TestComponentTree(this); } } public static class TestDefaultInternalNode extends DefaultInternalNode { public TestDefaultInternalNode(ComponentContext c) { super(c); } @Override public InternalNode child(Component child) { if (child != null) { return child(TestLayoutState.newImmediateLayoutBuilder(getContext(), child)); } return this; } } public static class TestInternalNodeFactory implements NodeConfig.InternalNodeFactory { @Override public InternalNode create(ComponentContext c) { return new TestDefaultInternalNode(c); } } }
package models

import "strconv"

// Diagram ...
type Diagram struct {
	ID    string `xml:"id,attr"`
	Plane Plane  `xml:"bpmndi:BPMNPlane"`
}

// SetID ...
func (dia *Diagram) SetID(num int64) {
	dia.ID = "BPMNDiagram_" + strconv.FormatInt(num, 16)
}
<gh_stars>1-10 /* * Created on Feb 5, 2005 * * TODO To change the template for this generated file go to * Window - Preferences - Java - Code Style - Code Templates */ package wdb; import java.io.File; import com.sleepycat.bind.serial.StoredClassCatalog; import com.sleepycat.je.Database; import com.sleepycat.je.DatabaseConfig; import com.sleepycat.je.DatabaseException; import com.sleepycat.je.Environment; import com.sleepycat.je.EnvironmentConfig; /** * @author <NAME> * * TODO To change the template for this generated type comment go to * Window - Preferences - Java - Code Style - Code Templates */ public class SleepyCatDbEnv { private Environment environment = null; private Database database = null; private Database classDB = null; private StoredClassCatalog classCatalog = null; public SleepyCatDbEnv() { } public void openEnv(File envHome, boolean readOnly) throws DatabaseException { //Setup the environment EnvironmentConfig envConfig = new EnvironmentConfig(); envConfig.setAllowCreate(!readOnly); environment = new Environment(envHome, envConfig); //Setup the classcatalog DatabaseConfig dbConfig = new DatabaseConfig(); dbConfig.setAllowCreate(!readOnly); classDB = environment.openDatabase(null, "ClassCatalog", dbConfig); // Instantiate the class catalog classCatalog = new StoredClassCatalog(classDB); } public void openDb(String name, boolean readOnly) throws DatabaseException { DatabaseConfig dbConfig = new DatabaseConfig(); dbConfig.setAllowCreate(!readOnly); database = environment.openDatabase(null, name, dbConfig); } public void closeDb(String name) throws DatabaseException { if (database != null) { database.close(); database = null; } } public void closeEnv() throws DatabaseException { if (database != null) { database.close(); } if (classDB != null) { classDB.close(); } if (environment != null) { environment.close(); } } /** * @return Returns the classCatalog. */ public StoredClassCatalog getClassCatalog() { return classCatalog; } /** * @return Returns the database. */ public Database getDatabase() { return database; } /** * @return Returns the environment. */ public Environment getEnvironment() { return environment; } }
package org.redquark.leetcode.challenge;

/**
 * @author <NAME>
 * <p>
 * Problem Statement
 * Given a non-empty array of integers, every element appears twice except for one. Find that single one.
 * Your algorithm should have a linear runtime complexity. Could you implement it without using extra memory.
 */
public class Challenge01_SingleNumber {

    /**
     * @param numbers - the array of numbers
     * @return - the element which is not repeated
     * <p>
     * Algorithm:
     * 1. Assign first element of the array to a variable
     * 2. Loop the array for the rest of the elements
     * 3. In each iteration XOR the above variable with the current element of the array
     * 4. After the iteration the variable will contain the value of the unique element
     * 5. Return the value in the variable
     */
    public int findSingleNumber(int[] numbers) {
        // Temp variable
        int temp = numbers[0];
        // Loop for the remaining elements in the array
        for (int i = 1; i < numbers.length; i++) {
            // XOR the current element with the temp variable
            temp = temp ^ numbers[i];
        }
        // Return the temp
        return temp;
    }
}
Operational aspects of an adsorption air-conditioner used in a diesel locomotive
A locomotive cabin adsorption air-conditioner has been installed in locomotive #DF4B2369 and has run successfully for 2 years. It is powered by waste heat from the exhaust of the diesel engine. The influence on heat transfer is described by the equivalent heat transfer coefficient or thermal resistance of components inside the adsorber. The variation of adsorption capacity is expressed by a non-equilibrium adsorption function. The dynamic heat transfer process of the adsorption air-conditioning system is treated with the lumped-parameter method. Some typical running experimental results are presented, and the influence of diesel engine rotating speed and locomotive speed on the refrigeration system is discussed. The maximum mean refrigeration power is taken as the objective function. Based on experiments and theoretical analysis, the running characteristics of the air-conditioning system are optimized, and some techniques for performance improvement are suggested. Copyright © 2006 John Wiley & Sons, Ltd.
/**
 * Adds a workspace to the whole system
 */
public void addNewWorkspace(WorkspaceImpl newWorkspace) throws Exception {
    store(newWorkspace);
    enablePanelProviders(newWorkspace);
    fireWorkspaceCreated(newWorkspace);
}
Officials move a portrait of Meles Zenawi shortly after the announcement of his death in Addis Ababa August 21, 2012. The death of Ethiopian Prime Minister Meles Zenawi has raised questions about the state of press freedom in the country. After weeks of government silence over Meles' health, he died suddenly in a Belgian hospital on August 20. Journalists who had reported on his health had seen harsh reprisals from the government, such was the case with Temesgen Desalegn, editor of the prominent Ethiopian weekly newspaper Feteh who was jailed late last month. Analysts say hardliners in the government, coupled with the country's one-party rule, will keep the Ethiopian press firmly under government control in the future. Mohamed Keita with the Committee to Protect Journalist's Africa Program says government prosecution and laws prevented a free press from developing under Meles. "Systematic persecution and criminalization of news gathering activities, critical reporting, investigative journalism never had a chance to grow under his rule because access to information never became a reality and his government continually enacted laws that ever restricted the activities of journalists and criminalized these activities," said Keita. The illness and whereabouts of Meles had been a source of rampant media speculation for weeks, including reports that he had died or gone on holiday. Keita says this is because of the government's culture of secrecy. "Because the government did not provide reliable information, refused to give details about his whereabouts and his condition," noted Keita. "This reflected the culture of secrecy within the ruling party and so in the absence of reliable information rumors ran wild and this is why there was so much speculation." Meles has been succeeded by Hailemariam Desalegn, who had been deputy prime minister. Keita thinks freedom of the press in Ethiopia will not improve under Hailemariam because of hardliners' influence in the ruling party. "The ruling party, there are hard-liners in the party and they wield a lot of influence," Keita noted. "I don't think Hailemariam is a hard-liner, but I'm sure he's under a lot of pressure so I don't know if he'll have a chance to really break with the past." VOA correspondent Peter Heinlein, who was based in Addis Ababa, says the government made it increasingly hard to report during his several years there. "We saw a steady increase in the regulation of the news media and also the government is very clever in limiting the number of sources that are available to reporters," Heinlein explained. "People in Ethiopia are generally wary of speaking to reporters and many times I would go back to a source or a person I'd spoken to and interviewed for a second time and found that after they appeared on VOA the first time they were warned that this is not the thing to do and some of them flat out told me 'I'm scared to talk to VOA. I'm scared to talk to the foreign press.'" Heinlein says it was difficult for the Ethiopian press to report accurately on Meles' deteriorating health because of the government line. "The state media and the private media were more or less hewing to the government line," Heilein added. "It's very difficult to really suss out what the truth is in an environment like that." Press freedom so far has not improved under Meles' successor. Feteh newspaper editor Temesgen Desalegn was denied bail Thursday after being jailed for reporting on the health of the prime minister last month. 
Heinlein thinks press freedom will not improve under the new leadership. "Hailemariam is basically the same government as Meles Zenawi," Heinlein noted. "Ethiopia is a one-party state de facto and the policies won't change. The policies are dictated by a small politburo known as the executive committee and that executive committee has not relinquished one iota of its policy-making authority now that Meles Zenawi is gone." Amnesty International has condemned the government's detention of Temesgen, saying the arrest is a worrying signal that the government intends to carry on targeting dissent.
/* * Copyright 2013-2022 Step Function I/O, LLC * * Licensed to Green Energy Corp (www.greenenergycorp.com) and Step Function I/O * LLC (https://stepfunc.io) under one or more contributor license agreements. * See the NOTICE file distributed with this work for additional information * regarding copyright ownership. Green Energy Corp and Step Function I/O LLC license * this file to you under the Apache License, Version 2.0 (the "License"); you * may not use this file except in compliance with the License. You may obtain * a copy of the License at: * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef OPENDNP3_UNITTESTS_COPYABLEBUFFER_H #define OPENDNP3_UNITTESTS_COPYABLEBUFFER_H #include <ser4cpp/container/SequenceTypes.h> #include <stddef.h> #include <memory> #include <sstream> /** Implements a dynamic buffer with a safe copy constructor. This makes it easier to compose with classes without requiring an explicit copy constructor */ class CopyableBuffer { public: // Construct null buffer CopyableBuffer(); // Construct based on starting size of buffer CopyableBuffer(size_t size); CopyableBuffer(const ser4cpp::rseq_t&); CopyableBuffer(const uint8_t* data, size_t size); CopyableBuffer(const CopyableBuffer&); CopyableBuffer& operator=(const CopyableBuffer&); ~CopyableBuffer(); bool operator==(const CopyableBuffer& other) const; bool operator!=(const CopyableBuffer& other) const { return !(*this == other); } ser4cpp::rseq_t ToRSeq() const { return ser4cpp::rseq_t(buffer, size); } operator const uint8_t*() const { return buffer; } operator uint8_t*() { return buffer; } size_t Size() const { return size; } void Zero(); protected: uint8_t* buffer; private: size_t size; }; #endif
Impact of the lower energy threshold on the NEMA NU2-2001 count-rate performance of a LSO based PET-CT scanner. UNLABELLED The aim of this study was to investigate the impact of the lower energy threshold (LET) on the NEMA NU2-2001 count-rate performance of a LSO-based PET scanner (Siemens PET-CT Biograph Sensation 16). The quantitative measurements were focused on three different aspects: noise equivalent count rate (NEC), scatter fraction, and absolute sensitivity. METHODS According to the NEMA-NU2-2001 protocol count-rate-performance (NEC-2R, scatter fraction) and sensitivity were evaluated performing serial measurements at LETs of 350, 375, 400, 410, 420, 430, 440, and 450 keV (the upper energy threshold was fixed to 650 keV). NEMA protocols were adapted to account for the intrinsic radioactivity of Lu in the LSO crystals. RESULTS Up to a radioactivity concentration of 8 kBq/ml the highest NEC-rates were obtained at an LET of 410 keV, between 8 and 20 kBq/ml at an LET of 420 keV and above 20 kBq/ml at an LET of 430 keV. The overall NEC maximum was 67 kcps at 430 keV (at 28 kBq/ml). The minimum scatter fraction was measured at a radioactivity concentration of approximately 0.5 kBq/ml. The scatter fraction decreased continuously from 45% at an energy threshold of 350 keV to 24% at 450 keV. The maximum sensitivity of 5.8 kcps/MBq, was obtained at an LET of 350 keV and the minimum sensitivity of 4.2 kcps/MBq at an LET of 450 keV. At the LET with the maximum NEC-rate (430 keV) the sensitivity was 4.8 kcps/MBq. CONCLUSION The optimal count-rate performance of the LSO-based PET system was found at LETs between 410 keV and 430 keV depending on the actual radioactivity concentration placed in the scanner. A global maximum in NEC count rate was obtained at an LET of 430 keV.
If you're a game developer, it looks like your work just got a whole lot better, as the Unity game rendering engine is now free for iOS, Android and BlackBerry 10. Unity CEO, David Helgason, announced the changes to their terms today during the Unite Nordic trade conference. These licensing fees for the Unity engine in their basic form previously cost $800, and are now absolutely free to developers. Helgason has said that the change to the pricing structure is to help build momentum for indie game developers and studios. The same free deal will eventually arrive on Windows Phone 8, too.
import os
import logging
import socket
from logging.handlers import SysLogHandler


class ContextFilter(logging.Filter):
    hostname = socket.gethostname()

    def filter(self, record):
        record.hostname = ContextFilter.hostname
        return True


PAPERTRAIL_ADDRESS = os.getenv("PAPERTRAIL_ADDRESS")
PAPERTRAIL_PORT = int(os.getenv("PAPERTRAIL_PORT", 0))


def set_logger(prefix="") -> logging.Logger:
    logger = logging.getLogger()
    purge_papertrail_handlers()
    syslog = SysLogHandler(address=(PAPERTRAIL_ADDRESS, PAPERTRAIL_PORT))
    syslog.addFilter(ContextFilter())
    format = f"%(asctime)s %(hostname)s {prefix}: %(message)s"
    formatter = logging.Formatter(format, datefmt="%b %d %H:%M:%S")
    syslog.setFormatter(formatter)
    syslog.setLevel(logging.INFO)
    logger.addHandler(syslog)
    return logger


def purge_papertrail_handlers():
    logger = logging.getLogger()
    for h in logger.handlers:
        if is_papertrail_handler(h):
            logger.removeHandler(h)


def is_papertrail_handler(handler):
    return isinstance(handler, SysLogHandler) and (
        "papertrail" in handler.address or "papertrail" in handler.address[0]
    )
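A minimal usage sketch, assuming the module above is saved as `papertrail_logger.py` (that file name, the service prefix and the endpoint values below are placeholders, not part of the original code). The module reads the PAPERTRAIL_ADDRESS and PAPERTRAIL_PORT environment variables at import time, so they must be set first:

```python
import os

# Placeholder endpoint; real values come from your Papertrail account.
os.environ["PAPERTRAIL_ADDRESS"] = "logsN.papertrailapp.com"
os.environ["PAPERTRAIL_PORT"] = "12345"

import logging
import papertrail_logger  # assumed module name for the snippet above

logger = papertrail_logger.set_logger(prefix="my-service")
# The handler is set to INFO, but the root logger itself defaults to WARNING,
# so lower it if INFO messages should actually get through.
logger.setLevel(logging.INFO)
logger.info("service started")  # forwarded to the syslog endpoint with the hostname attached
```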
package org.pac4j.lagom.javadsl; import akka.NotUsed; import com.lightbend.lagom.javadsl.api.Descriptor; import com.lightbend.lagom.javadsl.api.Service; import com.lightbend.lagom.javadsl.api.ServiceCall; import static com.lightbend.lagom.javadsl.api.Service.named; import static com.lightbend.lagom.javadsl.api.Service.pathCall; /** * Descriptor of Lagom service for tests. * * @author <NAME> * @since 1.0.0 */ public interface TestService extends Service { ServiceCall<NotUsed, String> defaultAuthenticate(); ServiceCall<NotUsed, String> defaultAuthorize(); ServiceCall<NotUsed, String> defaultAuthorizeByRole(); ServiceCall<NotUsed, String> defaultAuthorizeConfig(); ServiceCall<NotUsed, String> cookieAuthenticate(); ServiceCall<NotUsed, String> cookieAuthorize(); ServiceCall<NotUsed, String> cookieAuthorizeConfig(); ServiceCall<NotUsed, String> headerAuthenticate(); ServiceCall<NotUsed, String> headerAuthorize(); ServiceCall<NotUsed, String> headerAuthorizeConfig(); ServiceCall<NotUsed, String> headerJwtAuthenticate(); @Override default Descriptor descriptor() { return named("default").withCalls( pathCall("/default/authenticate", this::defaultAuthenticate), pathCall("/default/authorize", this::defaultAuthorize), pathCall("/default/authorize/role", this::defaultAuthorizeByRole), pathCall("/default/authorize/config", this::defaultAuthorizeConfig), pathCall("/cookie/authenticate", this::cookieAuthenticate), pathCall("/cookie/authorize", this::cookieAuthorize), pathCall("/cookie/authorize/config", this::cookieAuthorizeConfig), pathCall("/header/authenticate", this::headerAuthenticate), pathCall("/header/authorize", this::headerAuthorize), pathCall("/header/authorize/config", this::headerAuthorizeConfig), pathCall("/header/jwt/authenticate", this::headerJwtAuthenticate) ) .withExceptionSerializer(new Pac4jExceptionSerializer()) .withAutoAcl(true); } }
Gender-Based HIV and AIDS Risk Reduction Training for Health Educators in Katlehong, Johannesburg
Abstract The aim of the study was to improve health educators' locus of control, self-efficacy and sexual assertiveness, and to reduce HIV and AIDS risk through training. A gender-based HIV and AIDS risk reduction training programme was used to train health educators in Katlehong, Johannesburg. Thirty-three health educators volunteered to participate in the study. Participants were invited through the organisation's internal communication channels and were recruited from the organisation's two branches. Participants' locus of control, self-efficacy, sexual assertiveness and gender-based HIV and AIDS risk were assessed before and after training. The instruments used to assess participants' psychological well-being were Rotter's locus of control scale, the general self-efficacy scale and the sexual assertiveness scale for women, and HIV and AIDS risk was assessed using the gender-based HIV and AIDS risk scale. In addition, a qualitative design was used in which focus groups gathered participants' views on how they would use the train-the-trainer gender-based skills they gained from the Tshenolo HIV and AIDS Prevention Project to protect themselves from HIV infection and to empower, through training, vulnerable girls and women, people at risk of HIV infection and those living with HIV and AIDS. Data were analysed using related-samples t-tests and thematic content analysis. The results of this study indicated that participants felt the training programme equipped them with personal skills to deal with gender-transformative HIV and AIDS prevention programmes and policies. The perception of participants in this study was that the gender-based training programme had adequately prepared them to help vulnerable groups experiencing gender inequality in the community. The quantitative results indicated an improvement in participants' psychological well-being and a significant reduction in HIV and AIDS risk after training. Future studies could focus on the longitudinal relationship between attending a gender-based risk reduction training programme and HIV incidence among participants.
package edu.app.server.service; import edu.app.server.model.User; import edu.app.server.repository.AuthorityRepository; import edu.app.server.repository.UserRepository; import lombok.extern.log4j.Log4j2; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; import org.springframework.validation.annotation.Validated; import java.util.List; import java.util.Optional; /** * Es el modulo de servicio que implementa las logicas de negocio o transformaciones a los datos de User. * * @author <NAME> * @see User */ @Log4j2 @Service public class UserService { /** * Es el repositorio de Authority. * * @see AuthorityRepository */ private final UserRepository userRepository; /** * Es el encriptador de datos. * * @see BCryptPasswordEncoder */ private final BCryptPasswordEncoder bCryptPasswordEncoder; /** * Contructor que injecta las dependencias de Spring container. * * @param userRepository Es el repositorio que sera injectado por Spring. * @param bCryptPasswordEncoder Es el encriptador de datos que sera injectado por Spring. */ public UserService(UserRepository userRepository, BCryptPasswordEncoder bCryptPasswordEncoder) { this.userRepository = userRepository; this.bCryptPasswordEncoder = bCryptPasswordEncoder; } /** * Guarda el usuario, mediante una transacción de escritura. * * @param user Es el usuario a guardar. * @return Es el usuario con el id asignado en la base de datos. * @see UserRepository */ @Transactional public User saveUser(@Validated User user) { log.info("Guardando usuario"); log.info("Codificando el password"); user.setPassword(bCryptPasswordEncoder.encode(user.getPassword())); return this.userRepository.save(user); } /** * Actualiza el usuario, mediante una transacción de escritura. * * @param user Es el usuario a actualizar. * @return Es el usuario con el los valores actualizados en la base de datos. * @see UserRepository */ @Transactional public User updateUser(@Validated User user) { log.info("Obteniendo el usuario a actualizar."); User oldUser = this.userRepository.findById(user.getId()).orElse(null); if (oldUser != null) { log.info("Actualizando el usuario"); oldUser.setUsername(user.getUsername()); log.info("Codificando el password"); oldUser.setPassword(<PASSWORD>.bCryptPasswordEncoder.encode(user.getPassword())); oldUser.setIsEnable(user.getIsEnable()); oldUser.setAuthorities(user.getAuthorities()); return this.userRepository.save(oldUser); } else { log.info("No se actualizo ningun el usuario."); return null; } } /** * Realiza la busqueda por nombre, mediante una transacción de solo lectura. * * @param username Es el nombre del usuario. * @return Es el usuario que se busco. * @see UserRepository */ @Transactional(readOnly = true) public User getByUsername(String username) { log.info("Buscando usuario con el nombre usuario: " + username); User user = this.userRepository.findByUsername(username).orElse(null); if (user == null) { return null; } else { log.info("Charging authorities: " + user.getAuthorities()); return user; } } /** * Obtiene todas los usuarios, mediante una transacción de solo lectura. * * @return Son todas los usuarios realizadas por el repositorio. * @see UserRepository */ @Transactional(readOnly = true) public List<User> getAllUsers() { log.info("Recuperando todos los usuarios"); return this.userRepository.findAll(); } /** * Elimina el usuario, mediante una transacción de escritura. * * @param user Es el usuario a eliminar. 
* @return Es una cadena con la confirnación. * @see UserRepository */ @Transactional public String deleteUser(User user) { Optional<User> userObj = this.userRepository.findById(user.getId()); User userFind = userObj.orElseGet(null); if (userFind == null) { log.info("No se borro el usuario."); return "Unsuccessful delete user"; } else { log.info("Borrando el usuario: " + userFind.toString()); this.userRepository.delete(userFind); return "Success delete user"; } } /** * Obtiene el usuario mediante su id, mediante una transacción de solo lectura. * * @param id Es el id del User. * @return Retorna el valor existe o no existe que es retornado por el repositorio. * @see UserRepository */ @Transactional(readOnly = true) public User getById(Long id) { log.info("Buscando usuario por su id: " + id); return this.userRepository.findById(id).orElseGet(null); } }
def writefile(data, output, title):
    import csv
    with open(output, 'w') as csvfile:
        w = csv.writer(csvfile)
        w.writerow([title])
        w.writerow([' '])
        for key in sorted(data.keys()):
            val = data[key]['data']
            unit = data[key]['unit']
            if not data[key].get('rel', False):
                w.writerow(
                    [key.ljust(30) + ':' + str(str(val) + ' ' + unit).rjust(17)])
            else:
                rel = data[key]['rel']
                w.writerow(
                    [key.ljust(30) + ':' + str(str(val) + ' ' + unit).rjust(17)
                     + str(' (' + '%6.2f' % (rel) + ' %)')])
    return
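The expected shape of `data` is only implicit in the function body: a dict of entries, each with `'data'` and `'unit'` keys and an optional `'rel'` key. A small hedged usage sketch follows; the entry names and file name are invented for illustration:

```python
# Illustrative input; 'data', 'unit' and the optional 'rel' are the keys the
# function actually reads, the entry names themselves are made up.
results = {
    "total energy": {"data": 12.5, "unit": "kWh"},
    "losses": {"data": 1.3, "unit": "kWh", "rel": 10.4},  # 'rel' appends a "( 10.40 %)" suffix
}

writefile(results, "report.csv", "Daily summary")
# report.csv then contains the title row, a spacer row, and one padded line
# per entry, e.g. "losses<padding>:          1.3 kWh ( 10.40 %)"
```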
#include "caffe2/operators/merge_id_lists_op.h" namespace caffe2 { namespace { REGISTER_CPU_OPERATOR(MergeIdLists, MergeIdListsOp<CPUContext>); OPERATOR_SCHEMA(MergeIdLists) .NumInputs([](int n) { return (n > 0 && n % 2 == 0); }) .NumOutputs(2) .SetDoc(R"DOC( MergeIdLists: Merge multiple ID_LISTs into a single ID_LIST. An ID_LIST is a list of IDs (may be ints, often longs) that represents a single feature. As described in https://caffe2.ai/docs/sparse-operations.html, a batch of ID_LIST examples is represented as a pair of lengths and values where the `lengths` (int32) segment the `values` or ids (int32/int64) into examples. Given multiple inputs of the form lengths_0, values_0, lengths_1, values_1, ... which correspond to lengths and values of ID_LISTs of different features, this operator produces a merged ID_LIST that combines the ID_LIST features. The final merged output is described by a lengths and values vector. WARNING: The merge makes no guarantee about the relative order of ID_LISTs within a batch. This can be an issue if ID_LIST are order sensitive. )DOC") .Input(0, "lengths_0", "Lengths of the ID_LISTs batch for first feature") .Input(1, "values_0", "Values of the ID_LISTs batch for first feature") .Output(0, "merged_lengths", "Lengths of the merged ID_LISTs batch") .Output(1, "merged_values", "Values of the merged ID_LISTs batch"); NO_GRADIENT(MergeIdLists); } } C10_EXPORT_CAFFE2_OP_TO_C10_CPU( MergeIdLists, "_caffe2::MergeIdLists(Tensor[] lengths_and_values) -> (Tensor merged_lengths, Tensor merged_values)", caffe2::MergeIdListsOp<caffe2::CPUContext>);
Electrospun poly(vinyl alcohol)/poly(acrylic acid) fibres with excellent water-stability Abstract The water stability of electrospun poly(vinyl alcohol) (PVA) nanofibres was improved significantly by annealing with poly(acrylic acid) (PAA). Effects of annealing were tested on solution-cast PVA/PAA films by measurement of the swelling degree. The influence of PVA/PAA ratio, annealing temperature and period, molecular weight of PAA, and addition of esterification catalyst was investigated. The results were verified by a significant improvement of the water-stability of electrospun PVA/PAA composite nanofibres.
def picking_face_groups(self, *group_names): return self.select().pick_face_groups(*group_names).end()
import pycountry
import functools


class LocationHelper:
    _remap = {
        "south korea": "Korea, Republic Of",
        "chinese taipei": "Taiwan",
        "russia": "Russian Federation",
        "iran": "Iran, Islamic Republic Of",
    }

    @classmethod
    @functools.lru_cache(maxsize=256)
    def unabbrev_state_prov(cls, country, state_prov):
        # these are military overseas addresses
        # there are ways to resolve this to correct championships
        # i'm just too lazy rn
        if country == "USA" and state_prov == "AE":
            return "Armed Forces Europe"
        if country == "USA" and state_prov == "AP":
            return "Armed Forces Pacific"

        # fix some pedantic issues
        if country.lower() in cls._remap:
            country = cls._remap[country.lower()]
        # fix the one cayman islands case
        elif country == "Cayman Islands":
            # idk why the state_prov field is filled for this one
            return ""

        # and the south africa case
        if country == "South Africa" and state_prov == "GP":
            return "Gauteng"
        elif country == "Mali" and state_prov == "BKO":
            return "Bamako"

        # some countries don't have divisions ig
        if not state_prov.strip():
            return ''

        # like 2 entries are already expanded for some reason
        # division codes won't be above 3 characters
        if len(state_prov) > 3:
            return state_prov
        if country == "USA" and state_prov == "22":
            return "Armed Forces Europe"

        cc = pycountry.countries.lookup(country)
        if cc is None:
            # country lookup failed whoops
            return state_prov
        st = pycountry.subdivisions.lookup(cc.alpha_2 + "-" + state_prov)
        return st.name if st else state_prov

    @classmethod
    def unabbrev_state_prov_team(cls, team):
        return cls.unabbrev_state_prov(team.country, team.state_prov)

    @classmethod
    def abbrev_state_prov(cls, country, state_prov):
        pass

    @classmethod
    def abbrev_country(cls, country):
        if country.lower() in cls._remap:
            country = cls._remap[country.lower()]
        return pycountry.countries.lookup(country).alpha_3.lower()
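A quick usage sketch of the helper above. It assumes pycountry is installed, and the expected output strings are approximate since they depend on the installed ISO data:

```python
# pip install pycountry
print(LocationHelper.unabbrev_state_prov("USA", "CA"))           # "California"
print(LocationHelper.unabbrev_state_prov("Canada", "ON"))        # "Ontario"
print(LocationHelper.unabbrev_state_prov("South Africa", "GP"))  # "Gauteng" (hard-coded special case)
print(LocationHelper.abbrev_country("south korea"))              # "kor"
```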
/*! * iOS SDK * * Tencent is pleased to support the open source community by making * Hippy available. * * Copyright (C) 2019 THL A29 Limited, a Tencent company. * All rights reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #import <UIKit/UIKit.h> @class HippyAnimatedImage; // // An `HippyAnimatedImageView` can take an `HippyAnimatedImage` and plays it automatically when in view hierarchy and stops when removed. // The animation can also be controlled with the `UIImageView` methods `-start/stop/isAnimating`. // It is a fully compatible `UIImageView` subclass and can be used as a drop-in component to work with existing code paths expecting to display a `UIImage`. // Under the hood it uses a `CADisplayLink` for playback, which can be inspected with `currentFrame` & `currentFrameIndex`. // @interface HippyAnimatedImageView : UIImageView // Setting `[UIImageView.image]` to a non-`nil` value clears out existing `animatedImage`. // And vice versa, setting `animatedImage` will initially populate the `[UIImageView.image]` to its `posterImage` and then start animating and hold `currentFrame`. @property (nonatomic, strong) HippyAnimatedImage *animatedImage; @property (nonatomic, copy) void(^loopCompletionBlock)(NSUInteger loopCountRemaining); @property (nonatomic, strong, readonly) UIImage *currentFrame; @property (nonatomic, assign, readonly) NSUInteger currentFrameIndex; // The animation runloop mode. Enables playback during scrolling by allowing timer events (i.e. animation) with NSRunLoopCommonModes. // To keep scrolling smooth on single-core devices such as iPhone 3GS/4 and iPod Touch 4th gen, the default run loop mode is NSDefaultRunLoopMode. Otherwise, the default is NSDefaultRunLoopMode. @property (nonatomic, copy) NSString *runLoopMode; @end
/**
 * Takes an array of integers and sets all values to 0.
 */
private void initIntArray(int arr[]) {
    for (int i = 0; i < arr.length; i++) {
        arr[i] = 0;
    }
}
A city police detective has been charged with driving under the influence and interfering with an officer after an early morning incident in Plainville, Hartford Deputy Chief Brian Foley said Monday. Robert Lanza, an 11-year veteran of the department assigned to the special investigations division, was arrested Sunday. Police officials in Hartford said they launched an immediate investigation. "HPD Internal Affairs investigators are currently in contact with the Plainville Police Department to gather all documents, audio/video, photographs and other evidence related to this incident," Foley said. "The entirety of the incident will be thoroughly reviewed when we receive those items." Shortly after midnight Sunday, police received a call about an erratic driver on Route 72. Bristol officers spotted Lanza's gray Honda Accord on Forestville Avenue, and notified Plainville police that the car was "swerving and weaving all over the road, crossing over the fog line and almost hitting guardrails several times." After pulling Lanza over, a Plainville officer approached the car. "He appeared extremely intoxicated, his speech was severely slurred while he spoke, his eyes appeared glossy and I could smell the distinct odor of an alcoholic beverage … even though he had chewing tobacco in his mouth," Officer Roman Blajerski wrote in a report. "I asked Lanza if he had anything to drink and he wouldn't answer." Lanza failed several field sobriety tests, and pulled away from police when they tried to arrest him, according to the report. He refused two breathalyzer tests and declined the opportunity to call for an attorney, officers said. He was later released on a $10,000 non-surety bond and is due in court Sept. 5. Foley said Hartford officials would plan to review incident reports before determining whether to put Lanza on leave. He did not report for work on Monday. Lanza earns a $78,001 annual salary from the city.
<gh_stars>100-1000 import flatmap from 'lodash.flatmap'; import { getDataOrDefault } from './helpers'; import { fromJSON, Schema, SchemaJSON } from './schema'; import * as Contentful from '@contentful/rich-text-types'; import { ContentfulNode, ContentfulElementNode, SlateNode, SlateElement, SlateText, SlateMarks, } from './types'; export interface ToSlatejsDocumentProperties { document: Contentful.Document; schema?: SchemaJSON; } export default function toSlatejsDocument({ document, schema, }: ToSlatejsDocumentProperties): SlateNode[] { // TODO: // We allow adding data to the root document node, but Slate >v0.5.0 // has no concept of a root document node. We should determine whether // this will be a compatibility problem for existing users. return flatmap(document.content, node => convertNode(node, fromJSON(schema))); } function convertNode(node: ContentfulNode, schema: Schema): SlateNode { if (node.nodeType === 'text') { return convertTextNode(node as Contentful.Text); } else { const contentfulNode = node as ContentfulElementNode; const childNodes = flatmap(contentfulNode.content, childNode => convertNode(childNode, schema)); const slateNode = convertElementNode(contentfulNode, childNodes, schema); return slateNode; } } function convertElementNode( contentfulBlock: ContentfulElementNode, slateChildren: SlateNode[], schema: Schema, ): SlateElement { const children = slateChildren.length === 0 && schema.isTextContainer(contentfulBlock.nodeType) ? [{ text: '', data: {} }] : slateChildren; return { type: contentfulBlock.nodeType, children, isVoid: schema.isVoid(contentfulBlock), data: getDataOrDefault(contentfulBlock.data), }; } function convertTextNode(node: Contentful.Text): SlateText { return { text: node.value, data: getDataOrDefault(node.data), ...convertTextMarks(node), }; } function convertTextMarks(node: Contentful.Text): SlateMarks { const marks: SlateMarks = {}; for (const mark of node.marks) { marks[mark.type as keyof SlateMarks] = true; } return marks; }
class ToolPredictionServer:
    """
    A script to predict next tool for a sequence
    """

    @classmethod
    def __init__(self):
        """ Init method. """
        self.tools_dictionary_path = "../data/data_rev_dict.txt"
P2.18The value of light microscopy to diagnose urogenital gonorrhoea in indonesian clinic-based and outreach sexually transmitted infections services Introduction Gonorrhoea is a common sexually transmitted disease caused by Neisseria gonorrhoeae (Ng) infection. Light microscopy of urogenital smears is used as a simple tool to diagnose urogenital gonorrhoea in many resource-limited settings. We aimed to evaluate the accuracy of light microscopy to diagnose urogenital gonorrhoea as compared to a PCR based test. Methods In 2014, we examined 632 male urethral and 360 endocervical smears in clinic-based and outreach settings in Jakarta, Yogyakarta and Denpasar, Indonesia. Using the detection of Ng DNA by a validated PCR as reference test, we evaluated the accuracy of two light microscopic criteria to diagnose urogenital gonorrhoea in genital smears: 1) the presence of intracellular Gram negative diplococci (IGND) and 2)≥5polymorphonuclear leukocytes (PMNL)/oil-immersion field (oif) in urethral, or >20PMNL/oif in endocervical smears. Results In male urethral smears, IGND testing had a sensitivity, specificity, and kappa of respectively 59.0%, 89.4%, and 0.49. For PMNL count these were respectively 59.0%, 83.7%, and 0.40. The accuracy of IGND in the clinic-based settings (respectively 72.0%, 95.2%, and 0.68) was better than in the outreach settings (respectively 51.2%, 83.4%, and 0.35). In endocervical smears, light microscopy performed poorly regardless of the setting or symptomatology, with kappas ranging from 0.09 to 0.24. Conclusion Light microscopy using IGND and PMNL criteria can be an option with moderate accuracy to diagnose urethral gonorrhoea among males in a clinic-based setting. The poor accuracy in detecting endocervical infections indicates an urgent need to implement advanced methods, such as PCR. Further investigations are needed to identify the poor diagnostic outcome in outreach services. Support: This study was fully funded by Excellence Scholarship Program (Beasiswa Unggulan), Ministry of Research, Technology and Higher Education, Republic of Indonesia
Risk of Colon Perforation During Colonoscopy at Baylor University Medical Center Colonoscopy is an important procedure in preventing colon cancer. The risk of colonic perforation during colonoscopy at the Baylor University Medical Center (BUMC) Gastrointestinal Laboratory was chosen as a surrogate marker for the safety of colonoscopy. A recent 2-year experience at BUMC was examined and compared with reports in the medical literature. The results are presented here along with a discussion of problems inherent with different health care systems and their ability to accurately track complications. It was concluded that colonoscopy at BUMC is as safe as that reported by comparable health care systems. The risk of perforation at BUMC was 0.57 per 1000 procedures or 1 in 1750 colonoscopies. Continued efforts to make colonoscopy safer are needed.
HIV care outcomes among transgender persons with HIV infection in the United States, 20062021 Supplemental Digital Content is available in the text Objectives: HIV prevalence is an estimated 14% among transgender women (TW) and 3% among transgender men (TM). HIV care is vital for viral suppression but is hindered by transphobia and HIV stigma. We assessed HIV care outcomes among transgender persons (TG) with HIV in the United States. Design: Systematic review and meta-analysis of peer-reviewed journal articles. Methods: We searched multiple electronic databases and Centers for Disease Control and Prevention's HIV Prevention Research Synthesis database for 2006September 2020. Eligible reports were US-based studies that included TG and reported HIV care outcomes. Random-effects models were used to calculate HIV care outcome rates. The protocol is registered with PROSPERO (CRD42018079564). Results: Few studies reported outcomes for TM; therefore, only TW meta-analysis results are reported. Fifty studies were identified having low-to-medium risk-of-bias scores. Among TW with HIV, 82% had ever received HIV care; 72% were receiving care, and 83% of those were retained in HIV care. Sixty-two percent were currently virally suppressed. Among those receiving HIV care or antiretroviral therapy (ART), 67% were virally suppressed at last test. Sixty-five percent were linked to HIV care 3months or less after diagnosis. Seventy-one percent had ever been prescribed ART. Approximately 66% were taking ART, and 66% were ART-adherent. Only 56% were currently adherent the previous year. Conclusions: HIV care outcomes for TW were not ideal, and research gaps exists for TM. High heterogeneity was observed; therefore, caution should be taken interpreting the findings. Integrating transgender-specific health needs are needed to improve outcomes of transgender persons across the HIV care continuum.
/**
 * Add all the subject classifications from the bibliographic
 * metadata.
 */
@Override
protected void addCategories() {
    List<MetadataValue> dcv = itemService
            .getMetadataByMetadataString(item, "dc.subject.*");
    if (dcv != null) {
        for (MetadataValue aDcv : dcv) {
            entry.addCategory(aDcv.getValue());
        }
    }
}
# Read N and the array A.
N = int(input())
A = list(map(int, input().split()))

# Prefix sums of the two alternately signed sequences:
# pA[i] = A[0] - A[1] + A[2] - ...  (signs (-1)**j)
# mA[i] = -A[0] + A[1] - A[2] + ... (signs (-1)**(j+1)), i.e. mA[i] == -pA[i]
pA = [(-1) ** 0 * A[0]]
mA = [(-1) ** (0 + 1) * A[0]]
for i in range(1, N):
    pA.append((-1) ** i * A[i] + pA[i - 1])
    mA.append((-1) ** (i + 1) * A[i] + mA[i - 1])

# For every split position s, combine the suffix of one signed prefix sum with
# the prefix of the other, so the alternating signs effectively flip at index s.
ansl = []
for s in range(N):
    if s == 0:
        ansl.append(pA[N - 1])
    elif s % 2 == 0:
        ansl.append(pA[N - 1] - pA[s - 1] + mA[s - 1])
    else:
        ansl.append(mA[N - 1] - mA[s - 1] + pA[s - 1])

print(*ansl)
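A small worked run makes the sign pattern concrete. The interpretation is inferred purely from the code above, since the original problem statement is not included here:

```python
# Input:
#   3
#   1 2 3
#
# pA = [1, -1, 2]   (1, 1-2, 1-2+3)
# mA = [-1, 1, -2]  (the element-wise negation of pA)
#
# s = 0: pA[2]                 =  1 - 2 + 3 = 2
# s = 1: mA[2] - mA[0] + pA[0] =  1 + 2 - 3 = 0   (signs flip from index 1 onward)
# s = 2: pA[2] - pA[1] + mA[1] = -1 + 2 + 3 = 4   (signs flip from index 2 onward)
#
# Printed output: 2 0 4
```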
/**
 * {@link AbstractTypeMapping} implementation for objects with ids of type
 * {@link Integer}
 */
public abstract class AbstractIntegerTypeMapping<T> extends AbstractTypeMapping<T, Integer> {

    public AbstractIntegerTypeMapping(String typeAlias, Class<T> typeClass) {
        super(typeAlias, typeClass, Integer.class);
    }

    @Override
    public final String toString(Integer id) {
        return id.toString();
    }

    @Override
    public final Integer toId(String id) {
        // TODO parse errors
        return Integer.parseInt(id);
    }
}
An instrument for nitric oxide measurements in the stratosphere. A completely automatic chemiluminescent instrument has been developed for in situ measurements of NO in the stratosphere. Signal intensity is linear in NO. Typical responsivity at 21.3 km is 1860 counts/sec per ppbv. With a 1 sec measuring time constant, the detection limit, as determined by noise, is 0.03 ppbv. The instrument has been flown on balloon platforms to 30.8 km and on aircraft platforms between 12.2 and 18.3 km.
// HandleConn will handle a connection from the server's accept loop.
func (s *Server) HandleConn(conn *tls.Conn) {
	if s.handler != nil {
		s.handler(conn)
	}
}
<reponame>kananlanginhooper/scully import { createHash } from 'crypto'; import { scullyConfig } from '@scullyio/scully'; import { HTTPResponse } from 'puppeteer'; import { config } from './config'; import { generateId } from './generateId'; import { determineTTL } from './installInterceptor'; import { get, set } from './ldb'; import { CacheItem } from './local-cache.interface'; import { usageStatistics } from './usageStatistics'; export async function handlePuppeteerResponse(resp: HTTPResponse) { try { const responseHeaders = resp.headers(); const id = generateId(); if (responseHeaders['from-scully-cache']) { /** no need to reprocess */ return; } const status = await resp.status(); // as redirects don't have a "body" replace it with an empty string const body = status >= 300 && status <= 399 ? '' : await resp.text(); const request = await resp.request(); const url = request.url(); const { referer, ...headers } = request.headers(); if (config.includeReferer) { headers.referer = referer; } const hash = createHash('md5').update(id).update(url).update(body).digest('hex'); const TTL = determineTTL(url); usageStatistics.traffic += body.length; const cache: CacheItem = { hash, url, environment: config.environment, project: scullyConfig.projectName, inserted: Date.now(), requestHeaders: headers, TTL, response: { headers: { ...responseHeaders, 'from-scully-cache': true }, contentType: resp.headers()['content-type'] || headers['content-type'] || 'umh', status: resp.status(), body, }, }; await set({ url, headers, id }, hash); if (referer) { await set({ referer, url, id }, hash); } const previous: CacheItem = await get<CacheItem>({ hash }).catch(() => undefined); if (previous === undefined) { await set({ hash }, cache); } } catch (e) { console.error(e); } }
package org.motechproject.dhis2.rest.domain;

public enum DhisStatus {
    OK,
    SUCCESS,
    ERROR
}
It took Barbara Hustedt Crook an awfully long time to get around to writing her first musical. She started last year, shortly before her 60th birthday. Her friend and collaborator, Robert Strozier, waited even longer; he's 65. It's not that they didn't have the creative chops for the job. The two have spent their careers writing and editing in New York City, and Crook has a background in performing, singing and piano. But creating a musical always felt just out of reach--until now.
Early adoption means many features are still a work-in-progress. If the past is anything to go by, we expect it might be some time before Tesla has any Model 3 electric cars for us to review. The company's order books are overflowing, and in the past we've seen that any production capacity is prioritized for paying customers rather than the press. But as Model 3s start finding their way into the hands of customers who aren't Tesla employees, plenty more details about the hotly anticipated car are becoming public, thanks to owners at the Model 3 Owners Club. Members of the club complied a list of over 80 different features of the car they're curious about, including questions about how the car operates (does the card unlock all the doors, where does the UI show you that your turn signals are active), physical aspects of the car (what does the tow hitch attachment look like, how much stuff can you fit in the front and rear cargo areas), and subjective details (how aggressive is the energy regeneration, does that wood trim cause glare). At least two members of the club have received delivery of their cars, and unlike Tesla employees and special friends of the company who have cars, they appear to be under no requirement to keep this info quiet. So far, we've learned a few interesting facts. For instance, the windshield wipers are turned on and off by a stalk like just about every other car on the market, but changing the speed (slow/fast/intermittent) is handled by a menu on the touchscreen. The stalk also does double duty turning on the headlights, and there are no rain sensors for the wipers. The touchscreen UI really is the only way to interact with every other function, according to owners, even the rear air vents are controlled from up front (although there are USB ports in the back). Rear seat passengers also won't get seat heaters from what we gather—unless Tesla plans to activate them in a later software update—and the steering wheel is not heated either. The two buttons on the steering wheel do not appear to be user-configurable. Instead, the left button primarily deals with audio functions (scroll up and down for volume, left and right to change track) while the other one is for adjusting the mirrors and steering wheel position while in those menus in the UI. Additionally it appears that as of now, there's no way to tab through a different part of the UI without taking your hands off the steering wheel. Many of us had assumed that the controls on the wheel would allow the driver to interact with the car's different menus without taking a hand off the wheel, and it's disappointing to hear that this isn't the case. The problem is compounded in this case due to the fact that one needs to interact with a touchscreen that may preclude building up muscle memory, and as of now even changing cruise control speed requires the touchscreen. Human factors are definitely Tesla's weak point compared to the clever engineering that goes into the powertrain, and we hope that some attention is paid to this in a future software update. Future software updates will also be necessary to add features to the infotainment system, which currently doesn't have the ability to stream FM radio or browse the internet yet. And at least one person is a little sad that there's no physical AM radio, although we can't say we're terribly surprised given that it's 2017 and not 1957. 
We do expect that by the time we get to test a Model 3, much of this information will be out of date, but if you've been lucky enough to take delivery of your car already, please let us know how it is in the comments.
Mechanisms and molecules controlling the development of retinal maps. All mature vertebrates exhibit precise topographic mapping from the retina to the tectum, or its mammalian homologue, the superior colliculus (SC). In frogs and fish the development of this projection is precise from the outset; in avians retinal axon targeting is more diffuse but respects a coarse topographic matching; and in rodents early projections show no topographic specificity. Topography in avians and rodents emerges from a process of branch extension, arborization, and elimination of aberrant axonal projections. Despite these differences, the basic mechanisms controlling the development of this retinotopy are conserved. It has been hypothesized that molecules distributed in a position-dependent manner in the retina and the tectum or SC control the development of these maps. A number of candidate molecules have been identified on the basis of their distribution, or their ability to influence axonal growth in vitro. In addition, transcription factors and signaling molecules are expressed in a position-dependent manner and may regulate the expression of molecules involved in retinotopic map formation.
# Generated by Django 3.0.8 on 2021-02-25 01:29

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0002_auto_20210210_0609'),
    ]

    operations = [
        migrations.AddField(
            model_name='post',
            name='likes',
            field=models.IntegerField(default=0),
        ),
        migrations.AlterField(
            model_name='post',
            name='body',
            field=models.TextField(),
        ),
    ]
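For context, the model this migration implies might look roughly like the sketch below; only the 'likes' and 'body' fields are confirmed by the migration itself, and the rest of the class body is an illustrative assumption. Applying the migration would be the usual `python manage.py migrate blog`.

# Hypothetical blog/models.py consistent with the migration above; only the
# 'body' and 'likes' fields are confirmed by the migration, the rest is
# illustrative.
from django.db import models


class Post(models.Model):
    body = models.TextField()                # altered by this migration
    likes = models.IntegerField(default=0)   # added by this migration

    def __str__(self):
        return f"Post {self.pk} ({self.likes} likes)"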
#ifndef _PHP_WINVER_H
#define _PHP_WINVER_H

#ifndef SM_TABLETPC
#define SM_TABLETPC 86
#endif
#ifndef SM_MEDIACENTER
#define SM_MEDIACENTER 87
#endif
#ifndef SM_STARTER
#define SM_STARTER 88
#endif
#ifndef SM_SERVERR2
#define SM_SERVERR2 89
#endif

#ifndef VER_SUITE_WH_SERVER
#define VER_SUITE_WH_SERVER 0x8000
#endif

#ifndef PRODUCT_ULTIMATE
#define PRODUCT_UNDEFINED 0x00000000
#define PRODUCT_ULTIMATE 0x00000001
#define PRODUCT_HOME_BASIC 0x00000002
#define PRODUCT_HOME_PREMIUM 0x00000003
#define PRODUCT_ENTERPRISE 0x00000004
#define PRODUCT_HOME_BASIC_N 0x00000005
#define PRODUCT_BUSINESS 0x00000006
#define PRODUCT_STANDARD_SERVER 0x00000007
#define PRODUCT_DATACENTER_SERVER 0x00000008
#define PRODUCT_SMALLBUSINESS_SERVER 0x00000009
#define PRODUCT_ENTERPRISE_SERVER 0x0000000A
#define PRODUCT_STARTER 0x0000000B
#define PRODUCT_DATACENTER_SERVER_CORE 0x0000000C
#define PRODUCT_STANDARD_SERVER_CORE 0x0000000D
#define PRODUCT_ENTERPRISE_SERVER_CORE 0x0000000E
#define PRODUCT_ENTERPRISE_SERVER_IA64 0x0000000F
#define PRODUCT_BUSINESS_N 0x00000010
#define PRODUCT_WEB_SERVER 0x00000011
#define PRODUCT_CLUSTER_SERVER 0x00000012
#define PRODUCT_HOME_SERVER 0x00000013
#define PRODUCT_STORAGE_EXPRESS_SERVER 0x00000014
#define PRODUCT_STORAGE_STANDARD_SERVER 0x00000015
#define PRODUCT_STORAGE_WORKGROUP_SERVER 0x00000016
#define PRODUCT_STORAGE_ENTERPRISE_SERVER 0x00000017
#define PRODUCT_SERVER_FOR_SMALLBUSINESS 0x00000018
#define PRODUCT_SMALLBUSINESS_SERVER_PREMIUM 0x00000019
#define PRODUCT_HOME_PREMIUM_N 0x0000001A
#define PRODUCT_ENTERPRISE_N 0x0000001B
#define PRODUCT_ULTIMATE_N 0x0000001C
#define PRODUCT_WEB_SERVER_CORE 0x0000001D
#define PRODUCT_MEDIUMBUSINESS_SERVER_MANAGEMENT 0x0000001E
#define PRODUCT_MEDIUMBUSINESS_SERVER_SECURITY 0x0000001F
#define PRODUCT_MEDIUMBUSINESS_SERVER_MESSAGING 0x00000020
#define PRODUCT_SERVER_FOUNDATION 0x00000021
#define PRODUCT_HOME_PREMIUM_SERVER 0x00000022
#define PRODUCT_SERVER_FOR_SMALLBUSINESS_V 0x00000023
#define PRODUCT_STANDARD_SERVER_V 0x00000024
#define PRODUCT_DATACENTER_SERVER_V 0x00000025
#define PRODUCT_ENTERPRISE_SERVER_V 0x00000026
#define PRODUCT_DATACENTER_SERVER_CORE_V 0x00000027
#define PRODUCT_STANDARD_SERVER_CORE_V 0x00000028
#define PRODUCT_ENTERPRISE_SERVER_CORE_V 0x00000029
#define PRODUCT_HYPERV 0x0000002A
#define PRODUCT_STORAGE_EXPRESS_SERVER_CORE 0x0000002B
#define PRODUCT_STORAGE_STANDARD_SERVER_CORE 0x0000002C
#define PRODUCT_STORAGE_WORKGROUP_SERVER_CORE 0x0000002D
#define PRODUCT_STORAGE_ENTERPRISE_SERVER_CORE 0x0000002E
#define PRODUCT_STARTER_N 0x0000002F
#define PRODUCT_PROFESSIONAL 0x00000030
#define PRODUCT_PROFESSIONAL_N 0x00000031
#define PRODUCT_SB_SOLUTION_SERVER 0x00000032
#define PRODUCT_SERVER_FOR_SB_SOLUTIONS 0x00000033
#define PRODUCT_STANDARD_SERVER_SOLUTIONS 0x00000034
#define PRODUCT_STANDARD_SERVER_SOLUTIONS_CORE 0x00000035
#define PRODUCT_SB_SOLUTION_SERVER_EM 0x00000036
#define PRODUCT_SERVER_FOR_SB_SOLUTIONS_EM 0x00000037
#define PRODUCT_SOLUTION_EMBEDDEDSERVER 0x00000038
#define PRODUCT_ESSENTIALBUSINESS_SERVER_MGMT 0x0000003B
#define PRODUCT_ESSENTIALBUSINESS_SERVER_ADDL 0x0000003C
#define PRODUCT_ESSENTIALBUSINESS_SERVER_MGMTSVC 0x0000003D
#define PRODUCT_ESSENTIALBUSINESS_SERVER_ADDLSVC 0x0000003E
#define PRODUCT_SMALLBUSINESS_SERVER_PREMIUM_CORE 0x0000003F
#define PRODUCT_CLUSTER_SERVER_V 0x00000040
#define PRODUCT_ENTERPRISE_EVALUATION 0x00000048
#define PRODUCT_MULTIPOINT_STANDARD_SERVER 0x0000004C
#define PRODUCT_MULTIPOINT_PREMIUM_SERVER 0x0000004D
#define PRODUCT_STANDARD_EVALUATION_SERVER 0x0000004F
#define PRODUCT_DATACENTER_EVALUATION_SERVER 0x00000050
#define PRODUCT_ENTERPRISE_N_EVALUATION 0x00000054
#define PRODUCT_STORAGE_WORKGROUP_EVALUATION_SERVER 0x0000005F
#define PRODUCT_STORAGE_STANDARD_EVALUATION_SERVER 0x00000060
#define PRODUCT_CORE_N 0x00000062
#define PRODUCT_CORE_COUNTRYSPECIFIC 0x00000063
#define PRODUCT_CORE_SINGLELANGUAGE 0x00000064
#define PRODUCT_CORE 0x00000065
#define PRODUCT_PROFESSIONAL_WMC 0x00000067
#endif

#ifndef VER_NT_WORKSTATION
#define VER_NT_WORKSTATION 0x0000001
#define VER_NT_DOMAIN_CONTROLLER 0x0000002
#define VER_NT_SERVER 0x0000003
#endif

#ifndef VER_SUITE_SMALLBUSINESS
#define VER_SUITE_SMALLBUSINESS 0x00000001
#define VER_SUITE_ENTERPRISE 0x00000002
#define VER_SUITE_BACKOFFICE 0x00000004
#define VER_SUITE_COMMUNICATIONS 0x00000008
#define VER_SUITE_TERMINAL 0x00000010
#define VER_SUITE_SMALLBUSINESS_RESTRICTED 0x00000020
#define VER_SUITE_EMBEDDEDNT 0x00000040
#define VER_SUITE_DATACENTER 0x00000080
#define VER_SUITE_SINGLEUSERTS 0x00000100
#define VER_SUITE_PERSONAL 0x00000200
#define VER_SUITE_BLADE 0x00000400
#define VER_SUITE_EMBEDDED_RESTRICTED 0x00000800
#define VER_SUITE_SECURITY_APPLIANCE 0x00001000
#endif

#ifndef VER_SUITE_STORAGE_SERVER
# define VER_SUITE_STORAGE_SERVER 0x00002000
#endif
#ifndef VER_SUITE_COMPUTE_SERVER
# define VER_SUITE_COMPUTE_SERVER 0x00004000
#endif

#ifndef PROCESSOR_ARCHITECTURE_AMD64
#define PROCESSOR_ARCHITECTURE_AMD64 9
#endif

#endif
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Note: Base and the Equipment, Point and Location declarative model classes
# are assumed to be defined elsewhere in this module.


class SQLORM:
    """
    A SQLAlchemy-based ORM for Brick models. Currently, the ORM models
    Locations, Points and Equipment and the basic relationships between them.
    """

    def __init__(self, graph, connection_string="sqlite://brick_orm.db"):
        """
        Creates a new ORM instance over the given Graph using SQLAlchemy.
        The ORM does not capture *all* information expressed in a Brick model,
        but can be easily extended over time to capture more information.

        Args:
            graph (brickschema.Graph): a Brick schema graph containing instances
                we want to interact with. **Note**: this graph should not have
                any inference applied to it (RDFS or otherwise)
            connection_string (str): a database URL telling SQLAlchemy how to
                connect to the database that is backing the ORM. See
                [SQLAlchemy's documentation on database URLs](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls)
                (a relative SQLite file path would normally be written with
                three slashes, e.g. "sqlite:///brick_orm.db")
        """
        self._graph = graph
        self._engine = create_engine(connection_string)
        Base.metadata.create_all(self._engine)
        # the SQLAlchemy session; use for queries, etc
        self.session = sessionmaker(bind=self._engine)()

        # populate the database
        # get all equipment
        res = self._graph.query(
            """SELECT ?equip ?type WHERE {
                ?equip rdf:type/rdfs:subClassOf* brick:Equipment .
                ?equip rdf:type ?type
            }"""
        )
        for (equip_name, equip_type) in res:
            equip = Equipment(name=equip_name, type=equip_type)
            self.session.merge(equip)

        # get all points of equipment
        res = self._graph.query(
            """SELECT ?point ?type ?equip WHERE {
                ?point rdf:type/rdfs:subClassOf* brick:Point .
                ?point rdf:type ?type .
                { ?point brick:isPointOf ?equip . }
                UNION
                { ?equip brick:hasPoint ?point . }
            }"""
        )
        for (point_name, point_type, equip_name) in res:
            point = Point(name=point_name, type=point_type, equipment_id=equip_name)
            self.session.merge(point)

        # get all locations
        res = self._graph.query(
            """SELECT ?location ?type WHERE {
                ?location rdf:type/rdfs:subClassOf* brick:Location .
                ?location rdf:type ?type .
            }"""
        )
        for (loc_name, loc_type) in res:
            loc = Location(name=loc_name, type=loc_type)
            self.session.merge(loc)

        # get all locations of equipment
        res = self._graph.query(
            """SELECT ?location ?type ?equip WHERE {
                ?equip rdf:type/rdfs:subClassOf* brick:Equipment .
                ?location rdf:type/rdfs:subClassOf* brick:Location .
                ?location rdf:type ?type .
                { ?location brick:isLocationOf ?equip . }
                UNION
                { ?equip brick:hasLocation ?location . }
            }"""
        )
        for (loc_name, loc_type, equip_name) in res:
            # get existing equip object
            equip = (
                self.session.query(Equipment).filter(Equipment.name == equip_name).one()
            )
            # get existing Location
            loc = self.session.query(Location).filter(Location.name == loc_name).one()
            self.session.merge(loc)
            loc.equipment.append(equip)
            self.session.merge(loc)

        self.session.commit()
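For orientation, a brief usage sketch of the class above follows. The brickschema Graph construction, the example file name, and the assumption that the Equipment model class is importable alongside SQLORM are illustrative rather than confirmed API details.

# Hypothetical usage sketch; assumes brickschema is installed, that a Brick
# model without inference applied is stored in example.ttl, and that SQLORM
# and the Equipment model class are importable from the same module.
import brickschema

g = brickschema.Graph(load_brick=True)
g.load_file("example.ttl")

orm = SQLORM(g, connection_string="sqlite:///brick_orm.db")

# Standard SQLAlchemy session queries against the populated tables.
for equip in orm.session.query(Equipment).all():
    print(equip.name, equip.type)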
#!/usr/bin/env python
# Copyright (c) 2011 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""These functions are executed via gyp-sun-tool when using the Makefile
generator."""

import os
import fcntl
import plistlib
import shutil
import string
import subprocess
import sys


def main(args):
  executor = SunTool()
  executor.Dispatch(args)


class SunTool(object):
  """This class performs all the SunOS tooling steps. The methods can either
  be executed directly, or dispatched from an argument list."""

  def Dispatch(self, args):
    """Dispatches a string command to a method."""
    if len(args) < 1:
      raise Exception("Not enough arguments")
    method = "Exec%s" % self._CommandifyName(args[0])
    getattr(self, method)(*args[1:])

  def _CommandifyName(self, name_string):
    """Transforms a tool name like copy-info-plist to CopyInfoPlist"""
    return name_string.title().replace('-', '')

  def ExecFlock(self, lockfile, *cmd_list):
    """Emulates the most basic behavior of Linux's flock(1)."""
    # Rely on exception handling to report errors.
    fd = os.open(lockfile, os.O_RDONLY | os.O_NOCTTY | os.O_CREAT, 0o666)
    fcntl.flock(fd, fcntl.LOCK_EX)
    return subprocess.call(cmd_list)
package gseproject.core.interaction;

public interface IActuator {
    void init();
}
The present application claims priority to Japanese Application No. P2000-117326 filed Apr. 19, 2000, which application is incorporated herein by reference to the extent permitted by law. The present invention relates to a method of manufacturing a solid-state image pickup device, and particularly to a method of manufacturing a sensor portion in a solid-state image pickup device having a virtual gate structure in which the substrate surface side of a sensor area is pinned. In a solid-state image pickup device, for example a CCD (Charge Coupled Device) type image pickup device, a sensor photodetecting portion for performing photoelectric conversion (hereinafter referred to as the "sensor portion") comprises an n-type layer for photoelectrically converting incident light to charges and accumulating the charges thus obtained, a p-type layer for forming an overflow barrier, and a p-type high-concentration (p+) layer for pinning the surface of the Si substrate so as to suppress emission of charges (dark current) occurring due to interface levels. Here, if the pinning effect of the p+ layer on the surface of the Si substrate is insufficient, the dark current component is increased and the image quality under dark conditions is adversely affected. When the profile of a sensor portion is formed, it is common practice that a transfer electrode of a vertical transfer portion is formed of, for example, polysilicon, and ion implantation of impurities is then carried out in self-alignment with the transfer electrode as a mask, excluding the overflow barrier. Further, the ion implantation used to form the profile must avoid channeling into the Si crystal. In order to intentionally offset a p+ region and an n+ region with respect to each other, the ion implantation is generally carried out at an inclined (tilt) angle of several degrees with respect to the surface of the Si substrate, from a predetermined direction. This offset can reduce the read-out voltage when charges accumulated in the sensor portion are read out to a vertical transfer channel, and can also adjust suppression of the blooming phenomenon, in which charges overflow into the vertical transfer channel when a large amount of light is incident. However, the optimum offset combination (p+ and n+ implantation directions) is determined three-dimensionally by the unit cell size and the potentials of the sensor portion and the vertical transfer channel, and thus the optimum combination of ion implantation directions varies in accordance with the profile design. Besides, the shape of the transfer electrode of the vertical transfer portion simultaneously determines the shape of the sensor area. In practice, rather than the ideal shape, constriction occurs in the opening shape of the sensor portion due to working problems such as photolithography resolution and matching precision, as shown in FIG. 7. Accordingly, if impurities are doped from a single direction by ion implantation, the impurities are not doped into the constriction site, and thus an impurities-unformed area occurs. Further, since the transfer electrode of the vertical transfer portion has a thickness of about 300 nm to 700 nm, shadowing occurs because of the film thickness in combination with the tilt angle when the ion implantation is carried out. Therefore, an area into which no impurities are implanted necessarily occurs at the edge portion of the vertical transfer electrode, albeit a small one, as shown in FIG. 8.
Particularly, in combination with the low implantation energy of the boron ion doping used to form the p+ layer on the surface of the substrate, a p+ unformed area is liable to occur at the edge of the vertical transfer electrode. Some lateral diffusion of impurities can be expected from the thermal treatment that follows the ion implantation. However, since the impurities-unformed area exists at the edge of the transfer electrode of the vertical transfer portion, it is liable to be depleted when a positive voltage is applied to the transfer electrode concerned. Therefore, the pinning effect would be insufficient if no countermeasure were taken. As a result, dark current is liable to occur, and the dark current characteristic also becomes unstable due to dispersion in the worked shape of the vertical transfer electrode, so that the image quality is adversely affected. The present invention has been implemented in view of the above circumstances, and has an object to provide a method of manufacturing a solid-state image pickup device which can stably suppress dark current occurring in a sensor portion. In order to achieve the above object, a method of manufacturing a solid-state image pickup device having a virtual gate structure in which the substrate surface side of a sensor area is pinned is characterized in that, when ion implantation of impurities to form a profile for pinning the substrate surface side of the sensor area is carried out at a predetermined implantation angle with respect to the surface of the substrate, the ion implantation is divided into plural stages and is carried out from multiple ion implantation directions. In the solid-state image pickup device having the virtual gate structure, when impurities to form the profile for pinning the substrate surface side of the sensor area are doped by ion implantation in the process of forming the sensor portion, channeling can be prevented by inclining the ion implantation direction by several degrees with respect to the surface of the substrate. The ion implantation is divided into plural sub ion implantation operations, and the respective sub operations are carried out from different directions (i.e., the ion implantation is divisively carried out from multiple ion implantation directions), whereby no impurities-unformed area occurs anywhere in the sensor area.
// Copyright 2019 <NAME>. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package zlib implements the encoded.Codec interface to apply zlib
// compression to blobs.
package zlib

import (
	"bytes"
	"compress/flate"
	"compress/zlib"
	"io"
)

// A Codec implements the encoded.Codec interface to provide zlib compression
// of blob data. A zero value is ready for use, but performs no compression.
// For most uses prefer NewCodec.
type Codec struct{ level Level }

// Level determines the compression level to use.
type Level int

// Compression level constants forwarded from compress/flate.
const (
	LevelNone     Level = flate.NoCompression
	LevelFastest  Level = flate.BestSpeed
	LevelSmallest Level = flate.BestCompression
	LevelDefault  Level = flate.DefaultCompression
)

// NewCodec returns a Codec using the specified compression level.
// Note that a zero level means no compression, not default compression.
func NewCodec(level Level) Codec { return Codec{level} }

// Encode compresses src via zlib and writes it to w.
func (c Codec) Encode(w io.Writer, src []byte) error {
	z, err := zlib.NewWriterLevel(w, int(c.level))
	if err != nil {
		return err
	} else if _, err := z.Write(src); err != nil {
		return err
	}
	return z.Close()
}

// Decode decompresses src via zlib and writes it to w.
func (c Codec) Decode(w io.Writer, src []byte) error {
	z, err := zlib.NewReader(bytes.NewReader(src))
	if err != nil {
		return err
	}
	defer z.Close()
	_, err = io.Copy(w, z)
	return err
}

// DecodedLen reports the decoded length of src.
func (c Codec) DecodedLen(src []byte) (int, error) {
	var n int
	err := c.Decode(lengthWriter{&n}, src)
	return n, err
}

type lengthWriter struct{ z *int }

func (w lengthWriter) Write(data []byte) (int, error) {
	*w.z += len(data)
	return len(data), nil
}
Polymicrobial Interactions Induce Multidrug Tolerance in Staphylococcus aureus Through Energy Depletion Staphylococcus aureus is responsible for a high number of relapsing infections, which are often mediated by the protective nature of biofilms. Polymicrobial biofilms appear to be more tolerant to antibiotic treatment, however, the underlying mechanisms for this remain unclear. Polymicrobial biofilm and planktonic cultures formed by S. aureus and Candida albicans are 10- to 100-fold more tolerant to oxacillin, vancomycin, ciprofloxacin, delafloxacin, and rifampicin compared to monocultures of S. aureus. The possibility of C. albicans matrix components physically blocking antibiotic molecules from reaching S. aureus was ruled out as oxacillin, ciprofloxacin, delafloxacin, and rifampicin were able to diffuse through polymicrobial biofilms. Based on previous findings that S. aureus forms drug tolerant persister cells through ATP depletion, we examined nutrient deprivation by determining glucose availability, which indirectly correlates to ATP production via the tricarboxylic acid (TCA) cycle. Using an extracellular glucose assay, we confirmed that S. aureus and C. albicans polymicrobial cultures depleted available glucose faster than the respective monocultures. Supporting this finding, S. aureus exhibited decreased TCA cycle activity, specifically fumarase expression, when grown in the presence of C. albicans. In addition, S. aureus grown in polymicrobial cultures displayed 2.2-fold more cells with low membrane potential and a 13% reduction in intracellular ATP concentrations than in monocultures. Collectively, these data demonstrate that decreased metabolic activity through nutrient deprivation is a mechanism for increased antibiotic tolerance within polymicrobial cultures. INTRODUCTION Globally, 1 in 20 patients are currently suffering from a nosocomial infection (;a,b), with Staphylococcus aureus being a prevalent organism associated with these infections (). S. aureus is a leading cause of infective endocarditis, osteomyelitis, skin and soft tissue infections, and prosthetic device-related infections (). A number of S. aureus mediated infections can be attributed to the contamination of the device surface with a biofilm (). Interestingly, biofilm mediated S. aureus infections are difficult to eradicate, yet are caused primarily by drug-susceptible strains (Conlon, 2014;). Moreover, in polymicrobial biofilms, S. aureus is interacting with other pathogens, including the fungus Candida albicans. Polymicrobial infections are of concern as they result in a higher mortality rate than monomicrobial infections (;). However, underlying mechanisms for these observations remain inconclusive (McKenzie, 2006;;;). Polymicrobial biofilms have been reported to increase pathogen virulence, antibiotic resistance, and biofilm robustness (Harriott and Noverr, 2009, 2010). More specifically, tolerance to vancomycin in polymicrobial biofilms with C. albicans through an increase in biofilm robustness due to the extracellular matrix products secreted by the C. albicans, which restricted vancomycin penetration into the biofilm (;). However, similar results were found in S. aureus monomicrobial biofilms treated with vancomycin (); therefore, it is difficult to make any direct inferences about the underlying causes of tolerance to antibiotics. Until recently, literature on the mechanisms of persister cell formation was limited to two themes, toxin-antitoxin (TA) modules and stringent response (Lewis, 2010;). 
However, it was recently demonstrated that TA modules did not have a role in S. aureus persister cell formation (), and the stringent response, when disrupted in S. aureus, had no effect on persister formation. Instead, it was observed that S. aureus cells exhibiting lower intracellular ATP had increased persister formation and tolerance to antibiotics (). Additional work confirmed an association of decreased metabolic activity in the TCA cycle and membrane potential with S. aureus persister formation (). The metabolic status of S. aureus and nutrient acquisition have become of interest for explaining bacterial survival during chronic infection and, more recently, have been associated with antibiotic tolerance in S. aureus. Nutrients such as amino acids, iron, nitrogen, and carbon metabolism have been a focal point of recent in vivo investigations (Haley and Skaar, 2012;;). While glucose is required for initial infection, in mature abscesses non-preferred carbon sources are often a limiting factor (Kelly and O'Neill, 2015;;). Similarly, bacteria appear to form more robust biofilms when grown in the presence of abundant glucose. As the biofilm matures, glucose is exhausted, leading to the formation of persisters (Amato and Brynildsen, 2014). These environments provide examples where glucose is required for the initial establishment of infection, but as the infection progresses glucose availability becomes less important. Furthermore, nutrient-sparse environments are frequently associated with relapsing chronic infection following antibiotic therapy. This points to a need for further exploration of the role of nutrient depletion in relapsing infections. In this study, glucose exhaustion and the subsequent decrease in energy availability were explored as a mechanism for multidrug tolerance within S. aureus and C. albicans polymicrobial cultures. It was found that polymicrobial cultures depleted glucose more rapidly compared to monomicrobial cultures. Additionally, S. aureus grown in polymicrobial cultures demonstrated decreased intracellular ATP concentrations as well as lower membrane potential when compared to cultures lacking C. albicans. Evidence for increased antibiotic tolerance within polymicrobial cultures due to matrix composition or biomolecules secreted by C. albicans was not found. Overall, these studies highlight the importance of metabolism in bacterial persistence, and demonstrate a potential mechanism for relapse in polymicrobial infection following antibiotic treatment. Strains and Growth Conditions The methicillin-susceptible S. aureus strain HG003 was used in all assays (). The community-acquired C. albicans strain SC5314 was used for all experiments (;). For experiments demonstrating that this phenotype occurs across staphylococcal species, Staphylococcus epidermidis 1457, S. aureus UAMS-1, and S. aureus JE2 were used. S. epidermidis was grown to late log (∼1 × 10^9 CFU/mL) as this species is more sensitive to antibiotics and eradication occurs at early log phase. S. aureus JE2 is highly tolerant to antibiotics in later phases of growth, and therefore assays were performed in early log (3 × 10^7 CFU/mL). S. aureus UAMS-1 and HG003 are similar in persister formation, and assays were performed in mid-log (2-5 × 10^8 CFU/mL). SC5314 was grown to ∼3 × 10^6 CFU/mL for each biofilm and time-dependent kill assay where polymicrobial cultures were utilized. The Pspa:gfp plasmid was provided by Kim Lewis ().
For construction of the PfumC:gfp reporter, the promoter of fumC was amplified (5 -gggcccgaattcttgatgatgttaatgcgcaaa-3 and 5gggccctctagatcaatttctccccttatcac-3 ) and cloned upstream of gfp into the EcoRI and XbaI sites in pALC1434 (). Once cloned, PfumC:gfp was electroporated into S. aureus RN4220 and subsequently transduced into HG003 using 11 phage. Unless otherwise stated, all growth steps and timedependent kill assays were grown in 3 mL Tryptic Soy Broth (TSB) at 37 C at 225 rpm in 14 mL snap cap tubes. Planktonic Time-Dependent Kill Assays Planktonic cultures were grown to mid-exponential phase in 3 mL TSB and challenged with antibiotics (10-100 MIC) as described previously (;). Cultures were placed in a shaking incubator at 225 rpm at 37 C. 100 L aliquots were removed from samples, washed to remove antibiotic, and surviving bacteria were enumerated at 18, 24, 48, and 72 h by serial dilution and plating on TSA containing amphotericin B (25 g/mL). Antibiotic Diffusion Through Mono-and Polymicrobial Biofilms Polycarbonate filters (13 mm) were sterilized by UV light for 30 min per side and placed on a TSA plate. Overnight S. aureus cultures were diluted 1:1000, C. albicans overnight cultures were diluted 1:100 in TSB. 100 L of this solution was placed onto the filter and grown statically for 24 h. Biofilms were placed on fresh TSA plates seeded with 1 10 6 CFU S. aureus. A 13 mm polycarbonate disk was placed on the biofilm, followed by a diffusion disk. Each respective antibiotic (1 mg/mL ciprofloxacin, 10 mg/mL oxacillin, 1 mg/mL rifampicin, 10 mg/mL vancomycin) was added (10 L) to the disk and plates were incubated for 24 h. The diameter of the zone of inhibition was then measured in millimeters. The average and standard deviation was obtained from biological triplicates. Significance was determined using a t-test, P ≤ 0.05. Visualization of Antibiotic Diffusion Throughout a Biofilm Using Confocal Scanning Laser Microscopy To visualize antibiotic diffusion through single and polymicrobial biofilms, fluorescently labeled vancomycin and delafloxacin were used as described previously with modification (). Biofilms were grown on 8-chambered glass coverslips (cat. 154941, MatTek Co.) for 24 h at 37 C statically in TSB containing 1% glucose. Following incubation, non-adherent cells were washed gently with 1% NaCl, and stained for 1 h. In order to visualize vancomycin, the fluorescent vancomycin BODIPY FL conjugate (ex488/em511) was added (5 g/mL). Delafloxacin was visualized using the intrinsic fluorescence of the molecule (ex405/em450) at a concentration of (10 g/mL). Concanavalin A (ex488/em545) was added (50 g/mL) to visualize the biofilm matrix. The coverslip was mounted on the slides using Prolong Diamond Antifade (ThermoFisher) according to the manufactures recommendation. Biofilms were observed using a 60 oil immersion objective and an Olympus FV3000 laser scanning confocal microscope (Olympus, Tokyo, Japan). Images were acquired at a resolution of 512 by 512 pixels. To analyze the biofilms, a series of images at ≤ 1 m intervals in the z axis were acquired through the depth of the biofilm. For each condition, at least three fields of view were imaged and processed equally using cellSens Dimension Desktop V1.18 (Olympus). Representative images are displayed. Analysis of Matrix Coating and Antibiotic Accessibility Using Flow Cytometry To determine whether coating of S. aureus by C. 
albicans matrix components blocked antibiotic access to the cell we utilized vancomycin BODIPY FL and the intrinsic fluorescence of delafloxacin. Cultures were grown to mid-exponential phase, fluorescent compounds were added 10 6 CFU/mL in 1% NaCl at the same concentration that was used in the confocal experiments for 1 h at room temperature. Samples were analyzed using a Sony SH800 cell sorter. Concentrated Supernatant Time-Dependent Kill Assay Cultures (25 mL) of each strain (HG003 and SC5314) were grown in a shaking incubator overnight. These cultures were then pelleted and the supernatant removed. Supernatants were passed through a 0.45 micron filter then spun through a 3000 MW filter and concentrated to approximately 1500 L. Concentrated supernatant (300 L) was then added to planktonic HG003 cultures and incubated for 4 h. These cultures were challenged with rifampicin (0.8 g/mL). The bacteriostatic antibiotic, chloramphenicol (4 g/mL), was added to prevent rifampicin resistant cells from regrowing. Aliquots (100 L) were removed from samples, and surviving bacteria were enumerated at 18, 24, 48, and 72 h by serial dilution and plating on TSA. Spent Media Time-Dependent Kill Assay Overnight cultures were diluted 1:1000 and were grown to mid-exponential phase in 3 mL of either HG003 or SC5314 spent media collected from overnight cultures via centrifugation and challenged with rifampicin (0.8 g/mL) and chloramphenicol (4 g/mL). Bacteria were cultured and enumerated as described above. Determination of Intracellular ATP Concentration Intracellular ATP concentration was measured using the Promega BacTiter-Glo Microbial Cell Viability Assay according to manufacturer's instructions. Late exponential phase cultures were filtered through a 5 M filter to remove C. albicans. The remaining S. aureus cells were pelleted and washed with 1% NaCl prior to measuring luminescence. A sample was also taken for serial dilution and enumeration of bacteria. Luminescence was divided by surviving cells to account for any growth differences. Six replicates were used for obtaining averages and standard deviation. Significance was determined using a student's t-test, P ≤ 0.05. Measurement of Membrane Potential in Individual Cells Membrane potential was measured using BacLight Bacterial Membrane Potential Kit according to manufacturer's instructions. Briefly, samples were taken from mid-exponential phase (t = 5 h) in S. aureus either grown alone or in the presence of C. albicans. Samples were diluted to 1 10 6 cells in PBS and were stained with DiOC 2 for 30 min and analyzed by flow cytometry. Carbonyl cyanide m-chlorophenylhydrazone (CCCP) was used to dissipate membrane potential and was used to gate low membrane potential cells. Bacterial cells were separated from fungal cells and debris using back scatter (BSC) and forward scatter (FSC) parameters with 50,000 events collected for each sample. DiOC 2 was excited at 488 nm and emissions of the green and red fluorescence were detected with bandpass filters of 525/50-and 600/60-nm, respectively. Samples were analyzed using FlowJo software. The average and standard deviation was obtained from six biological replicates. Significance was determined using a student's t-test, P ≤ 0.05. Quantifying Extracellular Glucose Availability Overnight cultures of S. aureus (1:1000) and C. albicans (1:100) were diluted in TSB and placed in a shaking incubator. Every hour, 500 uL media was removed and pelleted. 
Supernatant was then used to measure glucose concentration using an Invitrogen glucose detection colorimetric assay kit according to manufacturer's instructions. Averages and standard deviation were calculated using six biological replicates. Measuring Fumarase C Expression Overnight cultures of S. aureus (1:100) containing PfumC:gfp or Pspa:gfp plasmids and C. albicans (1:50) were diluted in Mueller Hinton Broth (MHB) in a microtiter plate. Growth and fluorescence were (485ex/528em) were monitored over 22 h in a Biotek microplate reader at 37 C with continuous shaking. Averages and standard deviation were calculated from biological triplicates. Polymicrobial Cultures Demonstrate Increased Tolerance to Antibiotics in Both Biofilm and Planktonic Environments Polymicrobial infections are more tolerant to antibiotic therapy than single organism infections, though the underlying mechanisms remain unclear. Recent work has demonstrated that the presence of C. albicans increases S. aureus tolerance to vancomycin within a biofilm (). We sought to determine if interactions between C. albicans and S. aureus lead to multidrug tolerance. Since mature biofilms often do not respond to antibiotics, it is often impossible to observe decreased antibiotic effectiveness between various cultures. In order to overcome this, immature biofilms were used. It is important to note that two distinct phenotypes of the wild type HG003 strain were observed. Following antibiotic treatment, cultures showed up to 3 logs of killing, or little to no effect. Polymicrobial biofilms led to significantly more survival in six of eight antibiotic treatments (ciprofloxacin p = 0.006, oxacillin p = 0.019, rifampicin p = 0.006, rifampicin/ciprofloxacin p = 0.001, rifampicin/gentamicin p = 0.003, and vancomycin/ciprofloxacin p = 0.008) compared to S. aureus monomicrobial biofilms (Figure 1). Interestingly, no increase in tolerance was observed when biofilms were challenged with vancomycin. To determine if the increase in tolerance was specific to biofilms, planktonic cultures were challenged with antibiotics during the mid-exponential growth phase. Following antibiotic challenge with rifampicin, ciprofloxacin, oxacillin, delafloxacin, and vancomycin, S. aureus exhibited 10 to 100-fold more persisters when grown in the presence of C. albicans (Figure 2). To further determine these effects were not strain or species specific, another MSSA strain, a MRSA strain, and a S. epidermidis strain were tested for antibiotic tolerance in the presence and absence of C. albicans ( Supplementary Figures S1A-C). With all three strains, there was increased antibiotic tolerance when the staphylococcal species was grown in the presence of C. albicans. These experiments demonstrate the presence of C. albicans increases S. aureus persister cells FIGURE 1 | Polymicrobial biofilms show increased tolerance to a variety of antibiotic. Overnight cultures of S. aureus were diluted 1:1000 and C. albicans overnight cultures were diluted 1:100 in TSB using a microtiter plate. Plates were incubated for 8 h at 37 C statically. Non-adherent cells washed, fresh media was added, and biofilms were subsequently challenged with antibiotics (10-100 MIC) for 24 h. S. aureus growing in polymicrobial biofilms (red) had significantly higher survival compared to biofilms only containing S. aureus (blue). Experiment was performed in biological triplicate and error bars represent standard deviation. Significance (as indicated by * ) was determined using a t-test (p < 0.05). 
when challenged with most antibiotics, regardless of whether growing in planktonic or biofilm environments. With the Exception of Vancomycin, Antibiotics Diffuse Freely Through Polymicrobial Biofilms One mechanism that could explain the increased tolerance in polymicrobial biofilms is that antibiotics are not able to completely penetrate the polymicrobial biofilm matrix. To determine whether this was the case for other classes of antibiotics, antibiotic penetration assays were performed. The respective zone of inhibition for oxacillin, rifampicin, delafloxacin, and ciprofloxacin indicated that these antibiotics are not impeded by the biofilm matrix created by S. aureus, C. albicans, or the combination of both organisms ( Table 1). As previously demonstrated, vancomycin diffusion was inhibited by the polymicrobial biofilm (p = 0.041). A decrease in vancomycin diffusion was also seen with S. aureus monoculture biofilms, although this was found to not be significantly different (p = 0.052) from diffusion in the absence of biofilm. To confirm these findings, vancomycin and delafloxacin penetration throughout the biofilm were examined using confocal scanning laser microscopy (Figure 3). S. aureus, C. albicans, or polymicrobial biofilms were grown. Formed biofilms were visualized by staining the polysaccharide matrix with concanavalin A (ConA, red). To visualize vancomycin, a fluorescent BODIPY conjugate was used (green). For delafloxacin, its intrinsic fluorescence was used (ex405/em450, blue). Contrary to the biofilm penetration assay and previously published work, vancomycin diffusion did not appear to be inhibited by the polymicrobial biofilm. The only exception to this observation is a slight decrease in vancomycin fluorescence in basal layers of the biofilm that reached 30 M in height. However, a similar decrease in fluorescence was observed in biofilms formed by S. aureus alone. Despite this very modest phenotype, vancomycin was able to diffuse throughout the biofilm and reach all of the cells growing within the biofilm. Similarly, although the intrinsic fluorescence only produced a weak signal, delafloxacin was not inhibited by either single or polymicrobial biofilms. With the possible exception of vancomycin, the increased tolerance does not appear to be due to limited penetration of the antibiotic through the biofilm matrix. Vancomycin Binding in Planktonic Cultures Is Not Inhibited by Matrix Coating Confocal imaging revealed ConA binding S. aureus within polymicrobial cultures. To determine whether this coating was enough to inhibit antibiotics from accessing the cell, flow cytometry was used to measure the amount of antibiotics able to bind to the bacteria. Vancomycin was found to bind similarly to S. aureus cells regardless of whether they were grown in monomicrobial cultures or polymicrobial cultures (Figure 4). To confirm that the growth to mid-exponential phase was long enough for matrix coating to occur, polymicrobial cultures were stained with ConA. Matrix coating did occur during this time as indicated by the fluorescence associated with S. aureus cells in polymicrobial cultures. Unfortunately, the intrinsic fluorescence of delafloxacin was too weak for analysis and could not be properly assessed. Nevertheless, it is clear that despite coating of bacterial cells, vancomycin was still able to bind to S. aureus, and physical inhibition of the antibiotic is not the reason for increased tolerance. S. aureus and C. 
albicans Co-cultures Is Not Affected by Secreted Products Secreted C. albicans products larger than 3,000 MW were concentrated and added to cultures prior to antibiotic challenge to examine whether a specific virulence factor or biomolecule was influencing tolerance within S. aureus. After an incubation period, cultures were challenged with rifampicin. Cultures containing concentrated supernatant showed no difference in antibiotic tolerance compared to cultures incubated without the added supernatant ( Figure 5A). Previously, farnesol was shown to influence antibiotic tolerance (;). According to recent work, at high concentrations (100-150 M), farnesol appears to enhance antibiotic effectiveness. Conversely, lower concentrations (40 M) of farnesol appear to result in increased antibiotic tolerance. Therefore, we tested the possibility that increased tolerance is from farnesol secretion by C. albicans. Following the addition of farnesol (40 M), no effect on antibiotic tolerance was observed when cultures were challenged with rifampicin ( Figure 5B). We considered the possibility that secreted products smaller than 3,000 MW were being excluded from these kill assays. To confirm previous findings, cultures were grown in spent media prior to antibiotic challenge. Growth in spent C. albicans media increased tolerance within S. aureus, however, growth in spent S. aureus media also increased tolerance to a similar extent FIGURE 4 | Matrix coating does not inhibit vancomycin binding. (A) Vancomycin BODIPY FL conjugate was added to planktonic cells in either polymicrobial (red) or monomicrobial (blue) cultures. Vancomycin was able to bind S. aureus similarly in both conditions. Unstained cells were included as a control (gray). (B) Fluorescence from delafloxacin either in polymicrobial (red) or monomicrobial (blue) cultures was unable to be differentiated from unstained cells (gray) due to the weak signal produced. (C) To ensure matrix coating occurred, ConA was added to polymicrobial (red) cultures and compared to unstained polymicrobial cultures (gray). Data is representative of three independent replicates. ( Figure 5C). These results cast doubt on the ability of secreted C. albicans products to increase antibiotic tolerance. Instead, the increase in tolerance in both environments suggests that a common cause, such as nutrient depletion, is responsible for increased tolerance. Polymicrobial Cultures Consume Glucose at an Increased Rate, Leading to Lower Intracellular ATP Concentrations Glucose, a preferred source of carbon for S. aureus, serves as the major substrate for glycolysis. This leads to NADH generation and subsequent ATP synthesis. Glucose concentration was measured over time to determine if a polymicrobial culture could deplete available glucose at an increased rate. As one would expect, glucose was consumed faster in the polymicrobial culture than the S. aureus monoculture (Figure 6). To confirm that the lower concentrations of extracellular glucose affect the energy status of bacterial cells, the intracellular ATP in S. aureus from single and mixed cultures was measured. Previous work demonstrated that antibiotic tolerance is increased when intracellular ATP is depleted (). During late exponential phase, S. aureus cells from mixed cultures exhibited lower intracellular ATP concentrations compared to S. aureus from single cultures (Figure 7). 
Moreover, membrane potential is closely linked with the energy status of the cell, and therefore it is likely altered in polymicrobial cultures. S. aureus cells grown in the presence of C. albicans exhibited a reduced membrane potential compared to S. aureus monocultures (Figure 8). This indicates that polymicrobial cultures consume nutrients more rapidly than monomicrobial cultures, resulting in lower intracellular ATP and membrane potential. Cells in Polymicrobial Biofilms Show a Decrease in Metabolic Gene Activity Recent work has implicated an association between the TCA cycle and membrane potential and persister cell formation in FIGURE 6 | Extracellular glucose availability. Glucose concentrations were measured over time. Glucose was more rapidly consumed in polymicrobial cultures (red) compared to S. aureus monocultures (blue); both cultures completely exhausted the glucose in the media by 6 h. Experiments were performed in triplicate and error bars represent standard deviation. FIGURE 7 | Staphylococcus aureus grown in polymicrobial cultures has lower intracellular ATP. Planktonic cultures were grown to late exponential phase, pelleted and washed, and intracellular ATP was measured. Bacterial numbers were determined by standard serial dilution technique and ATP concentrations were normalized to CFU. Data is represented by the mean of six independent replicates and error bars represent standard deviation. Significance was determined using a t-test ( * p < 0.05). (;). To examine whether a similar mechanism was occurring in polymicrobial cultures, TCA cycle activity was measured using a promoter-gfp fusion construct, PfumC:gfp. In the presence of C. albicans, fluorescence was notably lower over a period of 22 h (Figure 9). In order to assess if this effect was from a generalized reduction in transcription or specific to genes in central metabolism, Pspa:gfp was used as a control reporter. The spa gene encodes the virulence factor, protein A. The spa reporter had no difference between S. aureus cells grown alone compared to those cells grown in a polymicrobial culture, thus indicating decreased transcription was specific to metabolic processes. DISCUSSION It is estimated that fifty percent of all infections involve biofilms (). Biofilm infections are notoriously difficult to eradicate completely, despite being caused primarily by drug-susceptible pathogens (;Lewis, 2010;Conlon, 2014). Further complications arise when biofilms involve more than one organism, resulting in increased mortality (Gabrilska and Rumbaugh, 2015). Reasons for this increased mortality remain unclear but a number of studies have focused on individual antibiotic treatment as well as specific reasons for therapy failure. Increased antimicrobial resistance has been observed for a limited number of antibiotics (Harriott and Noverr, 2009), but this fails to explain recurring infections caused by drug-susceptible organisms. Our data provide an explanation for multi-drug tolerance by a broad acting energy-dependent mechanism. This is in accordance with recent work published on the mechanism of persister formation in S. aureus (;;). Polymicrobial biofilms were consistently more tolerant to antibiotics with the exception of vancomycin and gentamicin. This contradicts other findings, where the presence of C. albicans increased tolerance to both of these antibiotics (;;;). 
This does not mean that there is no difference, and may be a result of little to no killing observed in either the monomicrobial or polymicrobial biofilms challenged with these antibiotics. Higher concentrations of antibiotics may show results similar to previously published work. However, a more interesting phenomena demonstrated here is that antibiotic diffusion through the biofilm did not appear to be a significant cause of increased antibiotic tolerance within the biofilm. In most cases, there was no significant difference between the zone of inhibition following diffusion through a polymicrobial or monomicrobial biofilm. Vancomycin diffusion was variable between replicates with one assay exhibiting no diffusion and the other replicates having impeded diffusion. The possibility exists that vancomycin is simply defusing through the biofilm at a slower rate than the other antibiotics. Confocal analysis also provided evidence that delafloxacin was not impeded by biofilm matrix. While the intrinsic fluorescent signal was faint, it is clearly present in the deeper biofilm layers. Further support that physical inhibition is not the primary mechanism for increased tolerance is provided by experiments performed in a planktonic setting. It could be assumed that if physical inhibition was the primary mechanism for increased tolerance, there would be little difference in tolerance to non-cell wall acting antibiotics in a planktonic environment. However, the large increases in tolerance were consistent across all classes of antibiotics used, indicating that physical inhibition is not a likely explanation for multidrug tolerance. Further evidence against physical inhibition was provided by flow cytometry analysis. While the fluorescence from delafloxacin was too weak to be FIGURE 8 | The presence of C. albicans decreases membrane potential in S. aureus cells. (A) Membrane potential was measured during mid-exponential phase in S. aureus either grown alone (blue) or in the presence of C. albicans (red). 1 10 6 cells in PBS were stained with DiOC 2 for 30 min and analyzed by flow cytometry. (B) Carbonyl cyanide m-chlorophenylhydrazone (CCCP) was used to dissipate membrane potential (gray) and was used to gate low membrane potential cells. The mean ± SD is shown, n = 6 for the graph on the left. The figure on the right is representative of six independent replicates. Significance was determined using a t-test ( * p < 0.05). FIGURE 9 | Fumarase C Expression. GFP expression of PfumC:gfp (A) and Pspa:gfp (B) was measured over time using a Biotek microplate reader. Overnight cultures of S. aureus (1:100) and C. albicans (1:50) were diluted in MHB in a microtiter plate. C. albicans decreased expression of the TCA cycle gene, fumarase (red) compared to expression observed when S. aureus was grown alone (blue). This effect was specific to fumarase and not the result of a generalized reduction in S. aureus transcription as indicated by the Pspa:gfp control. C. albicans did not affect fluorescence outside of gene expression (black). Experiments were performed in biological triplicate; error bars represent standard deviation. detected with our flow cytometer, vancomycin was clearly not inhibited by matrix coating from C. albicans. Previous work has found the C. albicans quorum sensing molecule, farnesol, may both increase and decrease antibiotic susceptibility depending on its concentration (;). The effects of secreted products, including farnesol, on antibiotic tolerance were tested. Neither concentrated C. 
albicans nor S. aureus supernatant affected tolerance, indicating that extracellular byproducts larger than 3000 MW are not influencing antibiotic tolerance in S. aureus. However, this still leaves the possibility of smaller molecules influencing the bacteria. Small products other than farnesol were further investigated by growing cultures in the presence of spent media. Growth in C. albicans conditioned media did increase antibiotic tolerance, however, the same phenotype was observed when grown in spent S. aureus media. Unexpectedly, the increase on antibiotic tolerance does not appear to be specific to a product secreted by C. albicans, rather, nutrient exhaustion was a more likely explanation for the observed increase in antibiotic tolerance. Recent work on the mechanism of persister formation has implicated decreased intracellular ATP and membrane potential with an increase in antibiotic tolerance (;;;). Results from the spent media assay suggest that the increased tolerance in polymicrobial cultures can be explained by a similar mechanism. It follows that if C. albicans is decreasing available nutrients within the biofilm, S. aureus cells will have to compete for the same nutrients. Those cells, which are unable to find adequate nutrients will create a population of S. aureus cells in a low energy state, leading to an increase in tolerance to antibiotics with active targets. Available glucose was depleted faster in polymicrobial cultures and, fittingly, both ATP and membrane potential were lower in S. aureus cells grown in a mixed culture compared to monocultures. A specific mechanism with the TCA cycle was recently suggested (;), and results with the TCA cycle reporter, PfumC:gfp, support those observations. Together these results demonstrate a decrease in S. aureus metabolism as a direct result of nutrient depletion by C. albicans. Metabolism is becoming a focal point in the investigation of chronic S. aureus infections. While glycolysis is required for initial abscess formation in mice, upon maturation of the abscess glucose concentrations become a limiting factor ;). Similarly, during initial stages of biofilm formation, glucose is likely readily available and preferentially consumed. Later on in the process, the biofilm becomes a glucose-limited environment before subsequent dispersal of the biofilm (Boles and Horswill, 2008;). These examples are niches where antibiotic treatment of S. aureus is likely to fail. Furthermore, these niches often lead to chronic infections that the immune system is unable to manage (;;;). Nutrient depletion leading to an antibiotic tolerant state may hold broader implications with parallels in chronic infections with other microorganisms. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/Supplementary Material. AUTHOR CONTRIBUTIONS DN, BL, and AN contributed to the conception and design of the study. DN and AN performed the statistical analysis. DN wrote the first draft of the manuscript. DN, SS, BL, and AN wrote sections of the manuscript. All authors performed the experiments, generated data appearing in the manuscript, contributed to the revision of the manuscript, and read and approved the submitted version. FUNDING This work was funded by National Center for Research Resources (5P20RR016469) and the National Institute for General Medical Science (8P20GM103427). 
Funding for this work was also provided by the Nebraska EPSCoR Undergraduate Research Experience at Small Colleges and Universities program and the Nebraska Research Initiative for equipment used in this project. Funding for the open access publication fees were provided by UNK Biology Department. ACKNOWLEDGMENTS We would like to thank Kim Lewis, Northeastern University, for the PfumC:gfp and Pspa:gfp plasmids. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb. 2019.02803/full#supplementary-material FIGURE S1 | Increased antibiotic tolerance associated with polymicrobial cultures are not strain or species specific. Planktonic cultures were grown to early to mid-exponential phase in TSB and challenged with vancomycin (100 MIC), the surviving bacteria were enumerated over 72 h by plating on TSA containing amphotericin B (25 g/mL). The presence of C. albicans increases S. aureus UAMS-1 (A), S. aureus JE2 (B), and S. epidermidis 1457 (C) (red) antibiotic tolerance compared to S. aureus monocultures (blue). Experiment was performed in biological triplicate and error bars represent standard deviation.
I am what some people might call a bit of a curmudgeon. If I am in a mood, it doesn’t take much to frustrate me and set me on edge. Thankfully, my son is the perfect antidote for that tendency, with his unique ability to refocus my perspective and make me happy. He’s not a panacea, but my experience this past weekend visiting Sesame Place, the Sesame Street themed amusement park outside of Philadelphia, showed me just how much good he does for my daily mood and overall outlook. Because holy shit that place is a nightmare. Just kidding. Partially. The kid loved the place…except when he didn’t. Which was basically any time an 8-foot-tall version of Ernie and his unibrow came within spitting distance. (Watching my son react to the proximity of the larger-than-life Sesame Street characters reminded me of my first encounters with the opposite sex: from a safe distance the excitement and anticipation was off the charts, but as soon as things looked like they might get physical, I needed a diaper and my mommy. Especially if she had a beak like Big Bird.) Being an adult, it wasn’t the muppets that freaked me out. It was the humans. My nightmares won’t feature the Cookie Monster’s googly eyes. The visions that scarred my psyche involved hordes of shirtless, soaking men parading the park’s grounds, assaulting me with their hairy backs and misshapen tattoos. Because Sesame Place isn’t merely a replica of the Sesame Street we see on TV, or a collection of recycled carnival rides emblazoned with the faces of those familiar muppets. Half of the grounds are dominated by an enormous waterpark. There is no demilitarized zone. There is something deeply unsettling about seeing half-naked adults sharing the same pavement with infinite numbers of screaming children. And trust me, the children were ALL screaming, either at that one naked guy’s resemblance to Jabba the Hutt or the fact that someone vomited on the tea cup ride or a glimpse into the Cookie Monster’s lifeless eyes. Black eyes. Like a doll’s eyes. The juxtaposition of the water rides with the dry, less-urine soaked rides; and the cafeteria; and the various theaters that allow kids to sit in the stands and recoil in fear as Abby Cadabby and Elmo march into the crowd like unrepentant Frankensteins, made for a bizarre, often disgusting experience. Do you want to sit on aluminum bleachers, with a toddler on your lap, when some guy, fresh off the water slide, plops his drenched body down next to yours, then stands, dances and shakes off droplets of water, sweat and other when the Cookie Monster breaks into his rap routine (not a joke.)? Because I didn’t. On top of all the general, obvious annoyances about amusement parks – the crowds, the prices, the parking lots, the children, the lines – there were bizarre little touches that made Sesame Place a walking fever dream. It didn’t help that I was hungover and running on little sleep. But I’m a parent. And a curmudgeon. And I’ve seen worse at college tailgates. Besides, this wasn’t about me; it was about the kids, and they had a great time. We went with some friends who have a son just a few months younger than ours, and both boys were amazingly well-behaved. My kid and his friend were too young for many of the rides (the ones we did brave were a big hit), they loved seeing their favorite characters in the flesh fur, so long as it was from a safe distance. 
One nice touch is that there are a few “rides” that are really just jungle gyms on steroids, and even a little toddler-based play area that was the perfect place for parents to collapse on the sidelines and let their kids burn off some energy. For little kids that can’t go on the roller coasters and other fast rides, that romper room, and the constant character-based shows, are a great way to get your money’s worth – and once you pay, you’ll want to make sure you get your money’s worth. Because it ain’t cheap: $60 bucks per parent ($50 online)! Kids under two are free (thank God). It was expensive, and crowded (we geniuses went on Labor Day weekend), and a little bit gross, but it was definitely worth it. Because any nightmares my son might have won’t last, and the negative memories I have of the attendant frustrations will eventually fade in favor of happier ones, of my son excitedly pointing out Elmo and Bert and Cookie and Grover, and dancing beside his friend during the “Elmo Rocks” concert, and falling asleep from exhaustion on the way home. And also because we got plenty of pictures of him smiling and not a single shot of a naked adult. Well, none that are safe for work.
# Note: relies on the gdata/atom test utilities; `options` is assumed to be a
# test-configuration object provided by the surrounding test harness.
import atom.mock_http_core


def configure_service(service, case_name, service_name):
    """Point the service at a mock HTTP client that records/replays sessions,
    reusing a cached ClientLogin token where possible."""
    service.http_client.v2_http_client = atom.mock_http_core.MockHttpClient()
    service.http_client.v2_http_client.cache_case_name = case_name
    # Cache the ClientLogin auth token so it only needs to be requested once
    # per service during a live test run.
    auth_token_key = 'service_%s_auth_token' % service_name
    if (auth_token_key not in options.values
            and options.get_value('runlive') == 'true'):
        service.http_client.v2_http_client.cache_test_name = 'client_login'
        cache_name = service.http_client.v2_http_client.get_cache_file_name()
        if options.get_value('clearcache') == 'true':
            service.http_client.v2_http_client.delete_session(cache_name)
        service.http_client.v2_http_client.use_cached_session(cache_name)
        service.ClientLogin(options.get_value('username'),
                            options.get_value('password'),
                            service=service_name, source=case_name)
        options.values[auth_token_key] = service.GetClientLoginToken()
        service.http_client.v2_http_client.close_session()
    if auth_token_key in options.values:
        service.SetClientLoginToken(options.values[auth_token_key])
Bayesian inference as a tool for analysis of first-principles calculations of complex materials: an application to the melting point of Ti2GaN We present a systematic implementation of the recently developed Z-method for computing melting points of solids, augmented by a Bayesian analysis of the data obtained from molecular dynamics simulations. The use of Bayesian inference allows us to extract valuable information from limited data, reducing the computational cost of drawing the isochoric curve. From this Bayesian Z-method we obtain posterior distributions for the melting temperature Tm, the critical superheating temperature TLS and the slopes dT/dE of the liquid and solid phases. The method therefore gives full quantification of the errors in the prediction of the melting point. This procedure is applied to the estimation of the melting point of Ti2GaN (one of the so-called MAX phases), a complex, laminar material, by density functional theory molecular dynamics, finding an estimate Tm of 2591.61 ± 89.61 K, which is in good agreement with melting points of similar ceramics.
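For readers who want a concrete feel for the Bayesian step described in the abstract, the following is a minimal, self-contained sketch rather than the authors' actual code: given (energy, temperature) points from the liquid branch of a Z-method isochore, it samples the posterior of a straight-line model with a simple Metropolis walker and reads off a melting-point estimate with an error bar, taking Tm as the liquid-branch temperature at the energy where the superheated solid collapsed. The synthetic data, priors, step sizes, and the value of E_LS are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): Bayesian straight-line fit of the
# liquid branch of a Z-method isochore, propagated to a melting-point estimate.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic liquid-branch points (energy in eV/atom, temperature in K).
E = np.linspace(-8.0, -7.6, 8)
T_true = 2600.0 + 3500.0 * (E - E.min())
T_obs = T_true + rng.normal(0.0, 60.0, size=E.size)   # MD "noise"
sigma = 60.0                                          # assumed known error bar
E_LS = E.min()                                        # energy where the solid branch collapsed

def log_posterior(theta):
    # Flat priors; Gaussian likelihood for T = a + b * (E - E_LS).
    a, b = theta
    model = a + b * (E - E_LS)
    return -0.5 * np.sum(((T_obs - model) / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([2600.0, 3000.0])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [20.0, 200.0])
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])   # drop burn-in
Tm_posterior = samples[:, 0]         # temperature of the liquid branch at E_LS
print(f"Tm = {Tm_posterior.mean():.0f} +/- {Tm_posterior.std():.0f} K")

In an actual application the (E, T) pairs would come from the DFT molecular dynamics runs, and the same machinery would be applied to the solid branch to obtain posteriors for TLS and the dT/dE slopes, as the abstract describes.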
/* Author: <NAME>
 * created: 2005.04...
 */
#include <simplefield.h>
#include <grid.h>
#include <iostream>
#include <vector>

#define ENERGY 1
#define POWER 2
#define FLUXDENSITY 1
#define FIELDINTENSITY 2

using namespace std;

class CFieldEnergy {
protected:
    double max_value;
    vector<SimpleField*> fields;
    Grid *grid;

    double equation(double mat_val, double v);

public:
    int calcType;
    int fieldType;
    bool DEBUG;
    double fNormVal;
    int iNormSub;
    bool bNorm;
    double dOver;
    bool bEX;
    vector<int> sub_idx;
    vector<double> mat;
    bool bOver;

    CFieldEnergy();
    ~CFieldEnergy();

    double calcMax(int isub);
    double val(int ie);
    void calculateScalar(Grid *g, SimpleField *f);
    void calculate3D(Grid *g, SimpleField *f1, SimpleField *f2, SimpleField *f3);
};
This article was updated on Sunday, Feb. 5, 2017, following the return of all affected members of the MIT community to the United States. Two MIT undergraduates who were denied re-entry to the United States last weekend landed at Logan Airport on Friday afternoon. The undergraduates are now back on campus. Both were prevented from boarding flights back to Boston last weekend after spending the winter break with their families. Two MIT researchers and a visiting student have also returned to the U.S. since Friday. With their arrival, no members of the MIT community are known to remain stuck abroad following President Trump's Jan. 27 executive order restricting citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen from entering the U.S. “We can all be glad that our affected undergraduates have overcome their immediate immigration difficulties and are back with us,” MIT President L. Rafael Reif wrote Friday in an email to the MIT community. “I am proud that the community has stood behind them and reached out to help. If they and others confront such challenges in the future, you can be sure MIT will be by their side.” Beyond the five members of the MIT community who have now returned, MIT is aware of eight other researchers from the affected nations who had received offers to come to MIT, but who now cannot enter the U.S. The MIT affiliates’ return to the U.S. was made possible by a temporary order issued by the Massachusetts federal district court last weekend, restraining the government from enforcing the executive order to detain holders of valid visas or green cards who travel from the seven affected countries to the U.S. through Logan Airport. On Thursday, an airline announced that it would begin allowing passengers affected by the executive order to board international flights bound for Logan. MIT officials moved quickly to get the affected members of the MIT community — with whom they had been in regular touch — onto this airline’s flights back to Boston. “After a week of round-the-clock work pursuing every avenue to bring our students home, we are very pleased to be able to welcome them back to MIT,” Chancellor Cynthia Barnhart said on Friday. “This is a direct result of their positivity, tenacity, and resilience, as well as the tireless efforts of scores of my colleagues. While we celebrate this moment, we are also fully aware that more must be done to understand the executive order’s full impact on our community. That work will continue in earnest, as we seek to ensure that MIT remains open and accessible to talent from anywhere in the world.” More than 100 at MIT directly impacted While no members of the MIT community are believed to remain stuck abroad, more than 100 students and researchers from the seven affected nations are now on campus; their immigration status is unclear in the wake of the executive order. Over the past week, officials have pursued various approaches on behalf of all of MIT’s affected students and researchers. The Institute’s governmental affairs teams in Cambridge and Washington have advocated with relevant federal agencies on their behalf; enlisted help from the Massachusetts congressional delegation; and pursued exemptions to the executive order for the stranded students and scholars. MIT connected those stuck abroad with legal and travel resources and engaged its global alumni network to amplify calls for their return. 
MIT joins in filing amicus brief On Friday, MIT joined seven other Massachusetts universities to submit an amicus brief to the federal court in Boston. The brief was filed in support of a lawsuit asking the court to order the U.S. government to stop enforcement of the executive order. In the submission, MIT and the other universities sought to educate the court about the vital role that international faculty, scholars, and students play in our communities, as well as the importance of their contributions to the nation and the world. The universities — MIT, Boston College, Boston University, Brandeis University, Harvard University, Northeastern University, Tufts University, and Worcester Polytechnic Institute — also described the serious consequences that the executive order has had and will continue to have not only on members of their communities, but on the nation as a whole. MIT’s international students, postdocs, and researchers can contact the International Students Office and the International Scholars Office for immediate assistance. These offices stand ready to provide direction and assistance to members of the MIT community who need it.
Activists and rights experts have long argued that such state activities and threats can have a significant chilling effect on our rights and freedoms. Though skepticism persists about the existence of such chilling effects—they are often subtle, difficult to measure, and people are often unaware of how they are affected—several recent studies have documented the phenomenon. My own research, which received media coverage last year, examined how Edward Snowden's revelations about NSA surveillance chilled people's Wikipedia use. Yet significant gaps remain in our understanding, including how certain people, groups, or specific online activities may be chilled more than others, or the comparative impact of different state activities or regulatory threats. As it turns out, these threats likely do have a chilling effect on things we do online every day—from online speech and discussion, to internet search, to sharing content. And certain people or groups—like women or young people—may be affected more than others. These are among the key findings I discuss in my new chilling effects research paper, published in the peer-reviewed Internet Policy Review, based on an empirical case study from my doctorate at the University of Oxford. The study involves a first-of-its-kind survey, administered to more than 1,200 US-based adult internet users. It was designed to explore multiple dimensions of chilling effects online by comparing and analyzing responses to hypothetical scenarios that, in theory, may cause chilling effects or self-censorship. The internet users who participated in the survey were relatively representative of US internet users more generally, with a few biases—the sample was gender balanced, but respondents were somewhat younger, less wealthy, and more educated than the average user. Responses were compiled, compared, and statistically analyzed. My findings suggested that once people were made aware of different online threats, they were less willing to engage in a range of activities online. For example, when made aware of online surveillance by the government, noteworthy percentages of respondents were less likely to speak or write about certain things online, less likely to share personally created content, less likely to engage with social media, and more cautious in their internet speech or search. In other words, there was a clear chilling effect. There was a comparable impact for other hypothetical online threats, like a law that criminalized certain kinds of online speech, or a scenario where an internet user personally receives a legal threat for content they had posted online. For example, in terms of online speech, 62 percent of respondents indicated they would be "much less likely" (22 percent) or "somewhat less likely" (40 percent) to "speak or write about certain topics online" due to such online surveillance by the government. And 78 percent of respondents "strongly agreed" (38 percent) or "somewhat agreed" (40 percent) that they would be more cautious about what they say online due to the surveillance. Similarly, 75 percent of respondents indicated they would be "much less likely" (40 percent) or "somewhat less likely" (35 percent) to "speak or write about certain topics online" after receiving a personal legal threat about something they had previously posted online. Eighty-one percent of respondents indicated they "strongly agreed" (50 percent) or "somewhat agreed" (31 percent) that they would be more cautious or careful about their online speech.
I even found evidence of indirect chilling effects—that is, internet users were less likely to speak or share when a friend in their online social network was targeted by a legal threat.
[Photo: Protesters from the Anonymous India hacker group wear Guy Fawkes masks in Mumbai as they protest against laws they say give the government control over censorship of internet usage. Thomson Reuters]
I also found that participants with greater awareness of NSA news stories were more likely to be chilled by government surveillance. Though consistent with other studies, including my own, suggesting chilling effects associated with NSA surveillance, no previous study has documented this statistically significant relationship. But that is not all. My statistical findings also suggest a greater chilling effect on women and younger internet users. In every scenario examined, I found a statistically significant age effect: the younger the participant, the greater the chilling effect. This association was strongest in the scenario involving government surveillance. This is noteworthy given the common perception that young people care little about privacy or surveillance. My findings suggest otherwise—if younger internet users cared little for privacy, why would they be more likely to be chilled? Rather, as social media researchers like Danah Boyd have argued, young people do care about privacy. They just navigate those concerns differently than adults do. Indeed, in the post-Snowden era, it may be—as Boyd and colleagues Alice Marwick and Claire Fontaine recently suggested—that the language and theory of surveillance, rather than privacy, best explains the behavior of youth in response to the complex ecosystem of surveillance and similar data threats they encounter online. I also found that female internet users in the study were more likely to be chilled in scenarios involving surveillance and personal legal threats for content posted online, with the statistical association strongest in the latter scenario. Besides being more often the victims of online harassment, women, my findings suggest, may also be more negatively affected when targeted with legal and regulatory threats.
On Jan. 5, a skier participating in an avalanche course in Senator Beck Basin on Red Mountain Pass was buried under three meters of avalanche debris and killed. Two in-bounds skiers were caught in a slide on Kachina Peak, at Taos Ski Valley, on Jan. 17. Both were trapped under the snow for around 20 minutes and eventually died. On Jan. 21, a skier in the Ashcroft area, near Aspen, was buried on a slope near where he had been skiing for the previous two days. The Colorado Avalanche Information Center (CAIC) listed 193 avalanches in 10 days last month, between Jan. 19-29 — and those were just the slides that were reported. By Feb. 1, the numbers went up again, and the CAIC documented three more people caught in avalanches. These are “impressive and scary” statistics, the forecasters wrote. Nine out of every 10 avalanches is caused by a backcountry traveler, and this year’s cycle seems far fiercer than usual. Is it because there are more people out there than usual? Is this going to be the year that snow riders recall as extremely volatile and/or especially deadly? Such characteristics make for what he calls a “conditionally unstable” snowpack. “You can’t ski the San Juans aggressively. You might get away with it for awhile, based on dumb luck, but eventually you’ll get caught,” Roberts said. “Ask many of the locals who’ve been buried numerous times. Pick your posse carefully — not one filled with Type A personalities, but a group with a variety of personalities. Beware of the ‘Expert Halo’ trail boss. No one wants to be the timid one, but you should always speak up and question a decision in a group if you are uncomfortable.” Snowpacks have regional climate characteristics, and Colorado’s is known as a “Continental” snowpack. This type of snowpack tends to be shallow, with temperature gradients that create faceted, depth-hoar snow crystals — and can result in long-term, unstable layers often buried near the ground. A Continental snowpack is considered extremely capricious and unpredictable when it comes to skiing or boarding safely, and snow in the San Juans, in particular, is generally the sketchiest of the lot. Peter Lev is a contemporary of Roberts, having worked in Little Cottonwood Canyon, Utah, for the highway department, the Alta ski area, and as a forecaster and lead guide for Mike Wiegele’s helicopter operation in Canada. Red Mountain Pass seems to have been discovered this year: the number of skiers appears to have increased, along with new huts and yurts that accommodate overnight skiers and those looking for guides and avalanche courses. Roberts recalled when three cars parked on the pass was a big day; now Red is going the way of Teton Pass in Wyoming and Berthoud Pass on the Front Range, with their backcountry crowds and all the complications that go with finding parking, to having groups skiing above you in hazardous places, to turning a potentially untracked paradise into a tracked up, ski-area-like wasteland. The question is whether avalanche training will be enough, given the increasing crowds. We’ll see if that holds true. Lev believes that the moon’s gravitational pull influences not only tides but also snowpacks — and that as a result, there is more tension in the layers of a snowpack at certain times than there is at others. “Sometimes the snowpack’s more relaxed,” he said — and people can get by with skiing things that might not otherwise be safe. 
By contrast, in conditions like we’ve had the past few weeks, he advises pulling back to “pet runs,” places that you know well and that are safe to ski. The slide on Jan. 21 that killed a skier south of Ashcroft happened on the third day of the trip. The group had been doing laps on a nearby slope with an angle of less than 30 degrees, according to a CAIC report. On Jan. 21, the skiers moved to a slightly steeper slope that tilted about 35 degrees. In a season with this much instability, those extra four or five degrees made all the difference, and a skier triggered a release 400-foot wide and 2-to-4-feet deep that ran 600 linear feet. The angle of the slope that failed in Senator Beck Basin was between 32 and 34 degrees, according to the post accident assessment by the CAIC. The probability of a slope avalanching over 30 degrees goes up significantly, especially in seasons where there is buried weakness in a snowpack (and there is almost always some weakness). The Utah Avalanche Center offers an online training program called Know Before You Go. In the section entitled “Get Out of Harm’s Way,” there is crucial information about angles and the importance of slope steepness. Statistically, most avalanches are triggered on slopes between 30 and 45 degrees. When the Ashcroft skier decided to transition from a northeast-facing slope below 30 degrees to a slope of 35 degrees, on a day where there had been an avalanche warning specific to northeast slopes, he was moving into a perfect storm. For 25 years, Roberts taught snow science/avalanche courses to students from Prescott College. “We have to remember, there are hundreds of slopes out there that can behave differently,” he pointed out. As accurate as forecasts can be, there are still endless variations not only in big ski lines, but also on small slopes and terrain traps (features that can bury you just as easily). Roberts advises stepping aside from the group and making your own “environmental observations.” He calls the personal sphere that your inner snow safety specialist engages in “Nowcasting.” Use your “patroller’s legs” to feel whether the snow density is collapsing or fracturing, or for anything in your immediate surroundings that hints at instability. It’s all part of the package that makes for a safe day out in winter. “Every year is a new experiment,” Roberts summed up. The snow depth is different, the wind is from an unexpected direction and the water content (CWE) varies from storm to storm. It is hard to standardize and make generalizations. Hawse tells her winter clients, “This isn’t the place to ski the gnar.” They are encouraged to come back in spring if they want to ski bigger lines. She says her “Spidey sense gets up” when she is on slopes of about 33 degrees. When Hawse worked for Helitrax, and the slope had been mitigated with explosives, 35 degrees might have been OK to ski. But if the slope hasn’t had the shock of explosives, she said, she is going to ski lower angled terrain. She also reads the full day’s forecast, and any hazards that may exist, on avalanche sites. “People don’t do a very good job of alerting properly to the yellow colored ‘Moderate’ rating on avalanche sites. They don’t even take the rating ‘Considerable’ seriously enough,” she said. Reading all the analysis is critical, she said — not just skimming the colors on the hazard tabs. You start to understand forecasting when you can put all the components together: weather, snowpack history and weaknesses, wind, slope angles and surrounding terrain. 
Hawse also agrees with her mentor, Jerry Roberts, that “hasty pits” — quickly dug snowpits — are an effective way to assess conditions as you travel. They’re efficient, and can tell you a lot about how the snowpack is holding together. She keeps a probe clipped to her pack so it is readily at hand, and she can poke around in snow that she is unsure of. Although there are apps you can download on your phone that show a slope’s angle, Hawse uses a compass (an old LifeLink Slope Meter accomplishes the same thing). All three skiers have spent decades managing avalanche risks for the safety of untold numbers of people, whether they were skiing and riding in hazardous mountain terrain or merely driving up Red Mountain Pass or Little Cottonwood Canyon. And they all have their own version of pulling back to Lev’s “pet slopes” — places with angles of 30 degrees and lower when things are unstable and risky. “We are not done with this season — not even a chance,” Lev emphasized. Roberts reiterated the words of the Buddha: “Kill the ego, kill the desire” when it comes to skiing or riding the big line that might kill you. Before you leave the house, check out the Friends of the CAIC’s Instagram page, which has photos of recent slides (an education in and of itself). State weather stations can be found here: avalanche.state.co.us/observations/weather-stations/. The Utah Avalanche Center’s excellent online education program, ‘Know Before You Go,’ is free of charge. The videos they’ve produced offer a good sense of getting caught in a slide. Find both at utahavalanchecenter.org.
def fullname(self):
    # Lazily build and cache the dotted "name.attr" string on first access.
    if self._fullname is None:
        self._fullname = '{0}.{1}'.format(self.name(), self.attr())
    return self._fullname
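For context, a minimal sketch of how a method like this might sit inside a class. The class name, the name()/attr() accessors, and the _fullname initialization are assumptions for illustration, not taken from the original source:

class Attribute:
    """Hypothetical host class for the cached fullname() helper above."""

    def __init__(self, name, attr):
        self._name = name
        self._attr = attr
        self._fullname = None   # populated lazily by fullname()

    def name(self):
        return self._name

    def attr(self):
        return self._attr

    def fullname(self):
        # Lazily build and cache the dotted "name.attr" string on first access.
        if self._fullname is None:
            self._fullname = '{0}.{1}'.format(self.name(), self.attr())
        return self._fullname


# Example: the formatted string is computed once and reused afterwards.
a = Attribute('user', 'email')
assert a.fullname() == 'user.email'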
This invention relates to a showerhead adaptor means for releasably holding a hand-held showerhead so that the showerhead may be manually manipulated if desired, the showerhead may be used in a fixed position if desired and in such position stored when not in use. Shower stalls or compartments are frequently made of ceramic tile, metal tile, and often of integral molded plastic shower walls and shower pans. A water outlet pipe protrudes through the shower wall at a selected height from five to six feet, and projects outwardly and downwardly to provide a free externally threaded outlet end to which is usually mounted a showerhead. Such fixed showerheads were provided with universal mountings to direct the stream of water at limited angles from the free end of the outlet pipe. Since a fixed showerhead has limited adaptability as to the direction of the shower spray, it has been found desirable to attach a showerhead to an elongated handle and connect the end of the handle through a flexible tube of selected length to the water outlet pipe by a suitable coupling. The shower spray of such an elongated showerhead handle may be readily manually directed against the body at different heights and in virtually any selected direction. Since the showerhead is at the end of a handle connected to a flexible tube, various prior proposed devices have been used to store the showerhead when not in use and in some instances temporarily fix the location of the showerhead to direct the shower spray in a desired fixed direction. One of such prior proposed constructions have included the provision of a vertically extending bar secured to the shower wall in spaced relation thereto and with a device for holding the handle of the showerhead in selected position along the bar. The disadvantages of such a mounting means for a hand-held showerhead are that the mounting of the bar on the sidewall of the shower requires piercing the shower wall with bar securing means In tile walls, care must be used in making a hole to prevent cracking adjacent tiles, and a suitably sized escutcheon plate must be used. In plastic walls, a problem exists in providing a suitably sealed and strong mounting for the bar. In another prior proposed mounting for a hand-held showerhead, the end of the water pipe is provided with a universal coupling member providing a connection to one end of the flexible tube and also providing a part cylindrical socket for holding the handle of the hand-held showerhead. Such handles are often oval in cross-section and the receiving socket on the universal mounting comprises a longitudinal slot adapted to receive the handle, which is then secured in the slot by slightly turning the handle about its axis. When such a prior construction is used as a fixed showerhead, the adjustment of the direction of the shower stream is relatively limited.
An important field of molecular biology relates to the revealing of sequence variations in mixtures of largely homologous nucleic acids. The sequence comparison between DNA molecules for identifying variations not only adds to what is known about the molecular bases of phenotypic differences, for example hereditary diseases, but also permits continuous monitoring of NA populations, for example virus populations, during an infection. NA population is to be understood as meaning a plurality of NA molecules with an identical, the same or a different sequence. Furthermore, the sequence comparison also serves as a quality assurance characteristic when producing genetically engineered, bacterial or viral products or for detecting the occurrence of minute quantities of differing sequences in a population of homologous sequences. The prior art knows several methods for tracking down sequence variations. Arguably the most laborious method is direct sequencing (Sanger F., Nicklen S., Coulson A. R., 1977, Proc. Natl. Acad. Sci. USA 74, 5463 et seq.; Maxam A. M., Gilbert W., 1977, Proc. Natl. Acad. Sci. USA 74, 560-564). This method does not allow a statistically significant number of individuals of an NA population to be tested for the occurrence of mutations. The application of indirect hybridization methods such as Southern (Southern E. M., 1975, J. Mol. Biol. 98, 503-517) or Northern (Alwine J. C., Kemp D. J., Stark G. R., 1977, Proc. Natl. Acad. Sci. USA 74, 5350-5354) only allow massive quantitative variations to be detected. Methods such as the ribonuclease protection assays (D. J. Freeman, A. S. Juan, 1981, J. Gen. Virol. 57, 103-117; E. Winter et al., 1985, PNAS 82, 7575-7579) for ribonucleic acid (RNA)-RNA heteroduplexes or for RNA-deoxyribonucleic acid (DNA) heteroduplexes (R. M. Myers et al., 1985, Science 230 , 1242-1246) are slightly more sensitive. Denaturing gradient gel electrophoresis (DGGE) has been made markedly more sensitive in recent years by employing "polymerase chain reaction"(PCR) technology and by using specific primers which facilitate separation in the gradient gel (V. C. Sheffield et al., 1992, Biofedback 12, 386-387). To separate the reaction products, even the differing sequences must be present in reasonable quantities. A further disadvantage of this method is the fact that, after separation and detection of a mutant, the site of the mutation cannot be specified, so that further identification reactions, for example sequencing, are subsequently required. "Chemical cleavage reactions" using hydroxylamine and osmium tetroxide have the disadvantage that a large number of experimental manipulations with toxic chemicals and complex procedures are required (R. G. H. Cotton, 1989, Biochemistry 263, 1-10). In addition, they only work if substantial amounts of mutants are present. Finally, only certain mutations can be identified using this method. A further prior-art method is based on the "single strand conformation polymorphism" (SSCP) reaction. Both this method and DGGE are carried out ((M Urita et al., 1989, PNAS 86, 2766-2770). A disadvantage of the SSCP reaction is that it leads to the identification of wrongly-positive samples. Moreover, this method fails in at least 10% of all cases if large amounts of mutant molecules are present. Other prior-art methods only allow testing for the presence of a specific mutation, i.e. the verification of the presence, or absence, of an individual nucleotide (MAPREC: Chumakov K. M., Powers L. B., Noonan K. F., Roninson L. 
B., Levenbook I. S., 1991, Proc. Natl. Acad. Sci. USA 88, 199-203). Methods in which carbodiimide is used have hitherto not proved popular in practice because this substance is difficult to handle and the methods lack sensitivity (D. F. Novack, 1986, Proc. Natl. Acad. Sci. USA, 83, 586 590; A. Ganguly, 1991, J. Biol. Chem. 266, 1235-1240; Offenlegungsschrift [Published Specification] DE 36 29 190 A1, A. Ganguly and D. J. Prockop, 1993, Nucl. Acids Res. 18 No. 13, 3933-3939). A further method known from the prior art (WO 93/02216) is the method for detecting "mismatch" in heteroduplexes. A "mismatch-binding protein" is used, which is bound by first antibodies. These first antibodies, in turn, are recognized by second antibodies. Again, the method is complicated to carry out and can only be employed within limits. In total, the lack of sensitivity relative to the minimum amount of mutants present within an NA population is the main disadvantage of the methods known from the prior art. Also, quantification of the NA molecules revealed is not possible. A further problem of the known methods is their unduly high failure rate. It is an object of the present invention to provide a method, a device and a composition of means which overcome the disadvantages of the prior art.
Technology, methodology & information systems: a tripartite view A brief examination is made of the relationship of information system, design methodologies, and associated information processing technology. Historical perspectives are highlighted. The data processing systems is first examined from the standpoint of the application of technology. Areas of data management, centralization and distribution, data integrity and controls, cost-capacity progress, and applications software development are reviewed. The user system is next considered as a locus of a large complement of current research activities. The perspectives related to the user role in system development and the evaluation criteria in the design and development process are addressed. Finally, development and operating methodologies are considered as an important dimension of the information systems impact in organizations. The methodological views and development procedure approaches are presented as critical factors impacting the evolution of information systems in organizations.
import sys import datetime import json from pathlib import Path import dateutil.relativedelta import fire import pygal import pandas as pd from terminaltables import SingleTable class ESIDataWrapper: """Convenience class for the background work related to the ESI xlsx.""" ENTITY_CODES = [ 'eu', 'ea', 'at', 'be', 'dk', 'de', 'el', 'es', 'fr', 'it', 'nl', 'pl', 'pt', 'fi', 'se', 'uk' ] # Two digit country/entity codes (as used in the ESI) mapped to # countries/entities. ESI_ENTITIES = { 'eu': 'Europe', 'ea': 'Euro Area', 'at': 'Austria', 'be': 'Belgium', 'dk': 'Denmark', 'de': 'Germany', 'el': 'Greece', 'es': 'Spain', 'fr': 'France', 'it': 'Italy', 'nl': 'Netherlands', 'pl': 'Poland', 'pt': 'Portugal', 'fi': 'Finland', 'se': 'Sweden', 'uk': 'United Kingdom' } # The columns corresponding to each entity in the ESI xlsx file. ENTITY_COLS = dict( eu='A,C:H', ea='A,K:P', at='A,FO:FT', be='A,S:X', dk='A,AQ:AV', de='A,AY:BD', el='A,BW:CB', es='A,CE:CJ', fr='A,CM:CR', it='A,DC:DH', nl='A,FG:FL', pl='A,FW:GB', pt='A,GE:GJ', fi='A,HK:HP', se='A,HS:HX', uk='A,IA:IF' ) # These are used in the ESI xlsx as column headers. ESI_COMPONENTS = [ '.INDU', # <Entity Code>.INDU '.SERV', # <Entity Code>.SERV '.CONS', # <Entity Code>.CONS '.RETA', # <Entity Code>.RETA '.BUIL', # <Entity Code>.BUIL '.ESI' # <Entity Code>.ESI ] def __init__(self, data_dir=Path('.'), esi_filename='main_indicators_nace2.xlsx', esi_sheet_name='MONTHLY'): self.data_dir = data_dir self.esi_filename = esi_filename self.esi_sheet_name = esi_sheet_name def _entity_csv_filename(self, entity_code): """Constructs a path to an entity's CSV file.""" return Path(self.data_dir) / '{}_esi.csv'.format(entity_code) def _create_esi_csv_tables(self, esi_tables): """Creates a CSV file for each country/entity.""" for code in self.ENTITY_CODES: # Each value of the esi_tables dict is a Pandas DataFrame. esi_tables[code].to_csv( self._entity_csv_filename(code), encoding='utf-8' ) def _import_esi_tables_from_xlsx(self): """Imports the ESI numbers for each country/entity we're interested in into DataFrames. """ # ESI xlsx file and relevant sheet. esi_file_path = Path(self.data_dir) / self.esi_filename # This will hold a DataFrame for each country's/entity's ESI numbers. esi_tables = {} for entity, cols in self.ENTITY_COLS.items(): esi_tables[entity] = pd.read_excel( esi_file_path, sheet_name=self.esi_sheet_name, header=0, index_col=0, usecols=cols ) return esi_tables def _load_esi_tables_from_csv(self): esi_tables = {} for ec in self.ENTITY_CODES: esi_tables[ec] = pd.read_csv( self._entity_csv_filename(ec), index_col=0, parse_dates=[0] ) return esi_tables def _fetch_esi_tables(self): """Returns a dict where each key is an entity code and its corresponding value is a pandas DataFrame with the ESI measurements for this entity. """ # Check if we have CSV files for the ESI tables. we_have_csvs = True for ec in self.ENTITY_CODES: if not Path(self._entity_csv_filename(ec)).is_file(): we_have_csvs = False break if we_have_csvs: esi_tables = self._load_esi_tables_from_csv() else: esi_tables = self._import_esi_tables_from_xlsx() self._create_esi_csv_tables(esi_tables) # Convert date indices to monthly frequency. for ec in self.ENTITY_CODES: esi_tables[ec].index = esi_tables[ec].index.to_period(freq='M') return esi_tables def get_latest_rankings(self, date=None): """Returns a dict where keys are the ESI components and their values are lists with country measurements. 
For example: {'construction_confidence': [('Netherlands', 4.2), ('Germany', 1.8), ('Austria', 0.6), # ... ('Sweden', -28.7), ('Greece', -52.1), ('United Kingdom', nan)], 'consumer_confidence': [('Sweden', 2.7), ('Denmark', 0.5), # ... ('Finland', -5.9), ('Portugal', -28.6), ('Greece', -41.0)], 'esi': [('France', 96.6), ('Germany', 95.5), # ... ('Denmark', 80.6), ('Poland', 77.9)], 'industrial_confidence': [('Sweden', -1.4), ('Netherlands', -9.8), ('Greece', -18.1), # ... ('Poland', -20.3), ('United Kingdom', -21.5), ('Finland', -21.7)], 'retail_confidence': [('Sweden', 11.8), ('Denmark', 11.2), # ... ('Netherlands', 6.2), ('Spain', -24.6)], 'services_confidence': [('Germany', 0.5), ('Austria', -5.2), # ... ('Spain', -35.8), ('United Kingdom', -36.7)]} """ INDU_idx = 0 # INDU - Industrial confidence indicator (40%) SERV_idx = 1 # SERV - Services confidence indicator (30%) CONS_idx = 2 # CONS - Consumer confidence indicator (20%) RETA_idx = 3 # RETA - Retail trade confidence indicator (5%) BUIL_idx = 4 # BUIL - Construction confidence indicator (5%) ESI_idx = 5 # ESI - Economic sentiment indicator, composite. if date is None: now = datetime.datetime.now() month_ago = now + dateutil.relativedelta.relativedelta(months=-1) start_date = '{}-{}'.format(month_ago.year, month_ago.month) end_date = '{}-{}'.format(now.year, now.month) else: start_date = date end_date = date esi_tables = self._fetch_esi_tables() latest_values = {} for ec in self.ESI_ENTITIES.keys(): try: latest_values[ec] = ( esi_tables[ec][start_date:end_date] .tail(1) .values.tolist()[0] ) except IndexError: sys.exit('Date given is out of range') industrial_ranking = {} services_ranking = {} consumer_ranking = {} retail_ranking = {} construction_ranking = {} esi_ranking = {} for ec, label in self.ESI_ENTITIES.items(): industrial_ranking[label] = latest_values[ec][INDU_idx] services_ranking[label] = latest_values[ec][SERV_idx] consumer_ranking[label] = latest_values[ec][CONS_idx] retail_ranking[label] = latest_values[ec][RETA_idx] construction_ranking[label] = latest_values[ec][BUIL_idx] esi_ranking[label] = latest_values[ec][ESI_idx] rankings = { 'industrial_confidence': industrial_ranking, 'services_confidence': services_ranking, 'consumer_confidence': consumer_ranking, 'retail_confidence': retail_ranking, 'construction_confidence': construction_ranking, 'esi': esi_ranking } for ranking, values in rankings.items(): rankings[ranking] = sorted( values.items(), key=lambda x: x[1], reverse=True ) return rankings def get_historical_values(self, esi_component, months=12): """Returns a data structure with historical values for a given ESI component. For example: {'countries': {'at': [100.7, 98.8, # ... 87.0, 89.4], 'be': [94.3, 93.9, # ... 83.5, 88.8], 'de': [98.2, 98.6, 99.1, # ... 94.3, 95.5], 'dk': [96.3, 100.9, # ... 76.2, 77.7, 80.6], 'ea': [100.2, 100.7, # ... 87.5, 91.1], 'el': [107.8, 108.1, # ... 90.7, 89.5], 'pt': [107.1, 108.2, # ... 85.9, 87.1], 'se': [95.8, 95.0, # ... 88.9, 94.3], 'uk': [88.9, 89.7, # ... 75.1, 83.0]}, 'dates': [Period('2019-10', 'M'), Period('2019-11', 'M'), # ... 
Period('2020-08', 'M'), Period('2020-09', 'M')]} """ if esi_component not in self.ESI_COMPONENTS: esi_component = '.ESI' esi_tables = self._fetch_esi_tables() values = {'countries': {}, 'dates': []} for ec in self.ENTITY_CODES: col = '{}{}'.format(ec.upper(), esi_component) values['countries'][ec] = esi_tables[ec][col].tail(months).tolist() values['dates'] = ( esi_tables[self.ENTITY_CODES[0]].tail(months).index.tolist() ) return values def display_latest_rankings(date=None, json_output=False, data_dir=None, esi_filename=None, esi_sheet_name=None): """Display ESI rankings in the console or output as JSON.""" BOLD = '\033[1m' ENDC = '\033[0m' GREEN = '\033[92m' RED = '\033[91m' indicators = [ 'esi', 'industrial_confidence', 'services_confidence', 'consumer_confidence', 'retail_confidence', 'construction_confidence' ] esi = ESIDataWrapper() if data_dir: esi.data_dir = data_dir if esi_filename: esi.esi_filename = esi_filename if esi_sheet_name: esi.esi_sheet_name = esi_sheet_name num_entries = len(esi.ESI_ENTITIES) rankings = esi.get_latest_rankings(date=date) if json_output: print(json.dumps(rankings)) else: table_data = [ # Headers. [ BOLD + 'ESI' + ENDC, 'Industrial Confidence (40%)', 'Services Confidence (30%)', 'Consumer Confidence (20%)', 'Retail Trade Confidence (5%)', 'Construction Confidence (5%)' ] ] for i in range(num_entries): if i == 0: tmpl = GREEN + '{} ({})' + ENDC elif i == 15: tmpl = RED + '{} ({})' + ENDC else: tmpl = '{} ({})' row = [ tmpl.format( rankings[indicator][i][0], rankings[indicator][i][1] ) for indicator in indicators ] table_data.append(row) rankings_table = SingleTable(table_data) rankings_table.inner_heading_row_border = False rankings_table.inner_heading_row_border = True if date: rankings_table.title = 'Rankings for {}'.format(date) print(rankings_table.table) def historical_esi_values_chart(esi_component, title, filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Generates an SVG chart with historical values for an ESI component.""" disable_xml_declaration = True if filename is not None: disable_xml_declaration = False title = 'ESI - {} (past {} months)'.format(title, months) esi = ESIDataWrapper() if data_dir: esi.data_dir = data_dir if esi_filename: esi.esi_filename = esi_filename if esi_sheet_name: esi.esi_sheet_name = esi_sheet_name values = esi.get_historical_values(esi_component, months) chart = pygal.Line( dots_size=1, show_y_guides=False, x_label_rotation=90, disable_xml_declaration=disable_xml_declaration ) chart.title = title chart.x_labels = map(lambda d: d.strftime('%Y-%m'), values['dates']) for country, val in values['countries'].items(): chart.add(country, val) if filename: chart.render_to_file(filename) else: return chart.render() # Convenience functions. 
def industrial_esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI Industrial Confidence data.""" return historical_esi_values_chart( '.INDU', 'Industrial Confidence', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) def services_esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI Services Confidence data.""" return historical_esi_values_chart( '.SERV', 'Services Confidence', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) def consumer_esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI Consumer Confidence data.""" return historical_esi_values_chart( '.CONS', 'Consumer Confidence', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) def retail_trade_esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI Retail Trade Confidence data.""" return historical_esi_values_chart( '.RETA', 'Retail Trade Confidence', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) def construction_esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI Construction Confidence data.""" return historical_esi_values_chart( '.BUIL', 'Construction Confidence', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) def esi_chart(filename=None, months=12, data_dir=None, esi_filename=None, esi_sheet_name=None): """Render an SVG chart with ESI data.""" return historical_esi_values_chart( '.ESI', 'ESI', filename=filename, months=months, data_dir=data_dir, esi_filename=esi_filename, esi_sheet_name=esi_sheet_name ) if __name__ == '__main__': fire.Fire( { 'latest_rankings': display_latest_rankings, 'industrial_chart': industrial_esi_chart, 'services_chart': services_esi_chart, 'consumer_chart': consumer_esi_chart, 'retail_trade_chart': retail_trade_esi_chart, 'construction_chart': construction_esi_chart, 'esi_chart': esi_chart } )
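As a quick orientation for anyone reading the module above, the snippet below shows one way the wrapper could be driven from Python rather than through the fire CLI. The module name esi_rankings and the data directory are assumptions; the class, method names, and keyword arguments are taken from the code itself, which expects the 'main_indicators_nace2.xlsx' ESI workbook to be present in the data directory.

from pathlib import Path

# Hypothetical module name for the script above.
from esi_rankings import ESIDataWrapper, esi_chart

wrapper = ESIDataWrapper(data_dir=Path('data'))   # expects main_indicators_nace2.xlsx here

# Latest composite-ESI ranking, highest first.
rankings = wrapper.get_latest_rankings()
for country, value in rankings['esi']:
    print(f'{country}: {value}')

# Historical values for the consumer-confidence component over the past 24 months.
history = wrapper.get_historical_values('.CONS', months=24)

# Or render the composite-ESI chart straight to an SVG file.
esi_chart(filename='esi.svg', months=24, data_dir='data')

From a shell, the same functionality is exposed through fire as subcommands such as latest_rankings, consumer_chart, and esi_chart, as listed in the fire.Fire dictionary at the bottom of the module.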
Art Fowler Career Fowler was born in Converse, South Carolina. His brother Jesse pitched for the 1924 St. Louis Cardinals. Jesse was nearly 24 years older than Art, and the Fowlers hold the record for the largest age difference between brothers who played Major League baseball. Art Fowler pitched 10 years in the minor leagues with a record of 140–94. He led Southern Association pitchers in games pitched (54), innings pitched (261), hits allowed (273), and ERA (3.03) while playing for the Atlanta Crackers in 1953, and led Carolina League pitchers with 23 wins while playing for the Danville Leafs in 1945. Finally reaching the major leagues at the age of 31, Fowler made his major league debut in relief on April 17, 1954 against the Milwaukee Braves at Milwaukee County Stadium. His first big league win came in his first start, a 3–2 victory over the Chicago Cubs on April 25 at Crosley Field. He had a good rookie season, finishing 12–10 with a 3.83 earned run average. He ranked ninth in the National League with 227​²⁄₃ innings pitched. In 1955 and 1956, his last years as a regular starter, he combined for a 22–21 record with an ERA of 3.97. He started seven games for Cincinnati in 1957, and then appeared almost exclusively in relief thereafter. After a poor year with the Dodgers in 1959, Fowler resurfaced in the major leagues in 1961 at age 38 with the expansion Los Angeles Angels. He, along with Tom Morgan, and later Jack Spring and Julio Navarro, were the Angels' most reliable pitchers out of the bullpen during their first three seasons. Fowler's combined record from 1961 to 1963 was 14–14 with 26 saves and a 2.96 ERA in 158 games. He was released by the Angels on May 15, 1964 at age 41, the oldest player to appear in an American League game that season. His major league career totals include a 54–51 record in 362 games pitched, 90 games started, 25 complete games, 4 shutouts, 134 games finished, 32 saves, and an ERA of 4.03. He spent the rest of 1964 as a batting practice pitcher for the Angels, but his active playing career was not over. In 1965, he signed with the Triple-A Denver Bears as a pitcher-coach, and between 1965–68 and in 1970 he worked in a total of 211 games pitched and compiled a 27–15 won-lost record. On May 27, 1968, Billy Martin became manager of the Bears, and he and Fowler began a long friendship and professional association. Fowler served as Martin's pitching coach with the Minnesota Twins (1969), Detroit Tigers (1971–73), Texas Rangers (1974–75), Yankees (1977–79, 1983, 1988), and Oakland Athletics (1980–82). Under his tutelage, Ron Guidry won the Cy Young Award in 1978. Fowler died on January 30, 2007 at age 84 in Spartanburg, South Carolina. He is buried in Greenlawn Memorial Gardens, Spartanburg, Spartanburg County, South Carolina. In the 2007 ESPN miniseries The Bronx is Burning, Fowler was portrayed by actor Bill Buell.
import pytest
import bedtool_helper as bh
import numpy as np
import pandas as pd
import tempfile
from pybedtools import BedTool

# test data
gtf_data = {'chrom': ['1', '1', '1', '2'],
            'source': ['test', 'test', 'test', 'test'],
            'feature': ['gene', 'exon', 'exon', 'gene'],
            'start': [100, 100, 200, 100],
            'end': [250, 150, 250, 400],
            'score': ['.', '.', '.', '.'],
            'strand': ['+', '+', '+', '+'],
            'frame': ['.', '.', '.', '.'],
            'attribute': ['', '', '', '']}


def test_subset_featuretypes():
    gtf = pd.DataFrame.from_dict(gtf_data)
    g = BedTool.from_dataframe(gtf)
    exons = BedTool(bh.subset_featuretypes(g, 'exon'))
    assert [e[2] for e in exons] == ['exon', 'exon']


def test_add_strand():
    bed = {'chrom': ['1', '1'],
           'start': [100, 200],
           'end': [150, 250]}
    bed = pd.DataFrame.from_dict(bed)
    b = BedTool.from_dataframe(bed)
    bex = b.each(bh.add_strand, '+')
    assert [x.strand for x in bex] == ['+', '+']


def test_get_block_seqs():
    with tempfile.NamedTemporaryFile() as fa_tmp:
        seq = np.random.choice(list('AGTC'), 400)
        seq = ''.join(seq)
        fa_tmp.write(bytes('>1\n', 'utf-8'))
        fa_tmp.write(bytes(seq, 'utf-8'))
        fa_tmp.flush()

        gtf = pd.DataFrame.from_dict(gtf_data)
        g = BedTool.from_dataframe(gtf)
        g = g.sequence(fi=fa_tmp.name, s=True)

        block_seqs = bh.get_block_seqs(g)
        assert len(block_seqs) == 3


def test_get_merged_exons():
    gtf = gtf_data.copy()
    gtf['gene'] = ['A', 'A', 'A', 'B']
    gtf = pd.DataFrame.from_dict(gtf)

    with tempfile.NamedTemporaryFile() as fa_tmp:
        seq = np.random.choice(list('AGTC'), 400)
        seq = ''.join(seq)
        fa_tmp.write(bytes('>1\n', 'utf-8'))
        fa_tmp.write(bytes(seq, 'utf-8'))
        fa_tmp.flush()

        blocks, block_seqs = bh.get_merged_exons(['A'], gtf, fa_tmp.name, '+')
        assert len(blocks) == 2
        assert len(block_seqs) == 2
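For orientation, here is a minimal sketch of what the two simplest helpers exercised above might look like. These bodies are reconstructed only from how the tests call them and are not the actual bedtool_helper implementation:

def subset_featuretypes(bedtool, featuretype):
    # Keep only GTF records whose third field (feature) matches, e.g. 'exon'.
    return bedtool.filter(lambda interval: interval[2] == featuretype).saveas()

def add_strand(interval, strand):
    # Used with BedTool.each(): set the strand on every interval and return it.
    interval.strand = strand
    return interval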
The project will create a new school on the site of Archbishop McHale College within the historic Irish town. The winning team will deliver a masterplan for the entire project and complete detailed designs for the scheme’s first phase. The 1,032m² first phase will include five classrooms, a science lab, a home economics area, toilets and a special educational needs suite. Participating teams must include an architect, quantity surveyor, civil engineer, structural engineer and building services engineer. The deadline for applications is 14 November.
// Title checks the match against the page's title
func (s *Sequence) Title() *TitleMatch {
	return &TitleMatch{
		s: s,
	}
}
The Profile of Questioning and Reinforcement Skills of Pre-Service Teachers in Biology Learning of the Tenth Grade Article Info Received: August 18, 2021 Revised: January 15, 2022 Accepted: January 22, 2022 Published: January 31, 2022 Abstract: This study aims to find out how pre-service biology teachers apply the components of questioning and reinforcement skills. The study was conducted at UPT SMA Negeri 8 Ogan Ilir. The design used in this study is descriptive research. The instruments used were observation sheets and documentation in the form of video recordings. The application of basic questioning skills was assessed on 7 aspects, the application of advanced questioning skills on 5 aspects, and the application of reinforcement skills on 4 aspects. The results showed differences in how often questioning and reinforcement skills were applied during the learning process. In general, the pre-service teachers implemented all components of basic and advanced questioning skills. However, the basic questioning component of giving turns and distributing questions, and the advanced questioning component of increasing interaction among students, still rarely appeared. Meanwhile, the pre-service teachers did apply the reinforcement skill components during the learning process, but their use still tended to be monotonous. Introduction According to the Government Regulation of the Republic of Indonesia Number 57 of 2021 concerning National Education Standards, the minimum criteria for teacher competence include pedagogic, personality, social, and professional competence. To meet these demands, a teacher must be able to manage a fun, active, and creative learning process, supported by broad knowledge and mastery of teaching skills (Barus et al., 2016). Teachers have a significant role in the learning process. To carry out this role properly, various basic teaching skills are needed; these skills largely determine the quality of the learning process, and through them teachers are expected to carry out their duties well (Agustina & Saputra, 2017). Basic teaching skills to be mastered by teachers include questioning, reinforcement, variation, explaining, opening and closing learning, classroom management, small group discussion leadership, and small group and individual teaching skills. The most important thing for a teacher is that these skills are applied properly so that the learning process can run well. Questioning is one of the basic teaching skills most often used during classroom learning. During the learning process, most of the interactions between the teacher and students are carried out through question-and-answer activities. In addition, the use of these questions can support other basic skills (Ermasari & Sudria, 2014; Hussin, 2006; Ralph, 1999). Through the application of good questioning skills, the teacher can make students actively participate in learning, direct them to understand the lesson, increase their curiosity, stimulate their imagination, motivate them, focus their attention, and keep them engaged during the learning process so that they play an active role in learning.
Therefore, preservice teachers must understand good questioning skills (Ermasari & Sudria, 2014) According to the results of previous research by (Ermasari & Sudria, 2014) on the application of teacher questioning skills in junior high school science learning in Singaraja, it was stated that teacher skills were still low. This is caused by the teacher's questioning technique that is not optimal and low-cognitive level questions. A study by Agustina & Saputra shows that the questioning skills of pre-service teachers are not good yet. This is due to the less evenly distribution of questions that tend to be directed to certain students. Based on the study by Rasidi, et al., on questioning skills in the micro-teaching practice of elementary school teacher education students at Muhammadiyah University of Magelang, the score of teaching practice in questioning skills during microteaching was 69.41%. Basic questioning skills can be classified as good enough and basic advanced questioning skills are still relatively low in the component of providing a sequence of questions. Based on the results of previous studies, the questioning skills of teachers outside Sumatra were still poor. In addition, previous studies did not include reinforcement skills because questioning skills cannot be separated from reinforcement skills. In addition to questioning skills, the skills to provide reinforcement are also essential for teachers. Reinforcement is any form of response, both verbally and non-verbally, which is a modification of the behavior of students to provide information or feedback to students for their actions as an encouragement or correction. Reinforcement can affect the psychological behavior of students who receive it. Reinforcement is given as a positive response so that the good behavior will be repeated or improved. In the process of educative interaction, giving such a response is called reinforcement. Meanwhile, research on teacher reinforcement in science learning at SMA Bukit Barisan Padang states that the application of reinforcement still seems monotonous (not varied) and not optimal yet. This is due to the lack of skills in managing the class, causing students to be bored and not understand what the teacher is saying so that classroom learning becomes less active. The fact that only a few students are active makes unfair reinforcement so that some students feel neglected to result in a lack of enthusiasm and motivation to learn. Hisni, et al., stated that teachers faced some obstacles when giving reinforcement, namely, the lack of student response resulting in other students responding to the reinforcement with something else or joking, the teacher being confused about the suitable reinforcement for his students, and the use of monotonous reinforcement techniques causing noise during the learning process. It can be concluded from previous research that the teacher reinforcement skills are still not good yet. This can also happen in the pre-service teachers at UPT SMA Negeri 8 Ogan Ilir. Based on the explanation above, it can be concluded that the mastery of questioning and reinforcement skills is necessary to prepare pre-service teachers. To meet the demands of the minimum criteria for teacher competence, these basic skills should be mastered by memorizing theoretically and applying them continuously. No studies explain such results. 
In addition, there is no data on the profile of questioning and reinforcement skills for pre-service biology teachers at Sriwijaya University implementing the teaching practice program in Ogan Ilir Regency, South Sumatra. Therefore, researchers are interested in researching the profile of questioning and reinforcement skills of senior high school pre-service teachers in biology learning to improve their skills. The problems of this research are formulated as follows: First, how to apply the components of questioning skills proposed by pre-service biology teachers? Second, how to apply reinforcement by preservice biology teachers? Based on the formulation of the problem presented, this study aims to find out how to apply the components of questioning skills proposed by pre-service biology teachers and their skills to provide reinforcement. Method The design of this study is descriptive research. Descriptive research is a method that describes what is happening and a situation, condition, activities, and so on. Flow of the research is in the Figure 1. This study was conducted at UPT SMA Negeri 8 Ogan Ilir. This school was chosen because the researchers had teaching practice activities there and there were 6 participating pre-service teachers. This study was conducted from November 6 to 28, 2019. The population in this study were 6 students of the Biology Education Study Program at Sriwijaya University who were carrying out teaching practice activities at UPT SMA Negeri 8 Ogan Ilir. The sample in this study was 2 pre-service teachers teaching the tenthgrade students of science. The form of data in this study is the overall results of observations, both speech and actions, related to the skills of questioning and reinforcement for pre-service biology teachers. The data can be in the form of the number of questions asked by pre-service teachers, the number of questioning skill components that appear, the amount of reinforcement, and the type of reinforcement provided by pre-service teachers. Data collection was carried out in the field 3 times using an observation sheet that had been prepared previously. The data were collected by using observation and documentation techniques. The observation technique was non-participatory. The researcher only acted as an observer during the learning activities. The type of observation carried out in this study was systematic observation using guidelines as an observation instrument. The documentation used in this study was in the form of photos and video recordings of teaching activities for pre-service teachers during the learning process. The data analysis technique used was descriptive analysis which is explorative. The video recording obtained was then transcribed into a conversation. Furthermore, the transcript of the conversation was analyzed including questioning and reinforcement skills. Then, the results inputted to the observation sheet were analyzed to find out whether each component had been applied to each skill. The questions were identified based on the use of 5W+1H and voice intonation when the teacher asked questions and showed gestures to stimulate their students to answer. Result and Discussion In this section, the results are divided into 2 about the basic skills of questioning and reinforcement carried out by pre-service teachers. This study was conducted from November 6 to 28, 2019 at UPT SMA Negeri 8 Ogan Ilir. The study was carried out three times for each preservice teacher in the two learning hours. 
Questioning Skills during Learning Activities. Good questions were noted during the opening, core, and closing activities. The questions were identified based on the use of 5W+1H and voice intonation when the teacher asked questions or showed gestures, such as raising a hand, to stimulate the students to answer. The number of questions asked by the pre-service teachers is shown in Table 2. Table 2 shows that there are differences in the number of questions in each activity in each lesson. In the opening and core activities, both teachers asked varying numbers of questions. However, in the closing activity they asked very few questions; in several meetings, they seldom asked questions at all in the closing activities. Based on the observations of the two pre-service teachers, there were differences in the number of questions asked during the learning process. The difference in the number of questions asked can be caused by several factors, including differences in the level of mastery of the learning materials, the characteristics of the students, and time constraints (Martino & Maher, 1994). A low level of mastery of the material can result in differences in the number of questions asked. The lack of mastery of the material is indicated by the lack of preparation of teaching materials, so that the pre-service teachers could not prepare good questions to be asked during learning. This is supported by interviews with the pre-service teachers, who said that the source of the teaching materials used still depended on the student textbooks. Based on the observations during learning, differences in the characteristics of the students also resulted in a different number of questions. For example, when the teacher asked a question, some students immediately sought the answers independently while others were preoccupied with themselves, so that the questions given could not be developed. This made the pre-service teachers spend a lot of time telling or guiding the students to focus on the ongoing learning. In addition, time constraints resulted in the teachers being unable to ask questions in the closing activity in some meetings. The pre-service teachers also often asked questions in the form of sentence completion. They gave such complementary questions during the observation; teacher B gave 20 incomplete-sentence questions. This was done because such questions were considered to guide the students and provoke them to answer more easily. The Application of Basic Questioning Skill Components. The results of the analysis of the application of the basic questioning components were converted into percentages, giving the following results (Table 3). Table 3 shows that, overall, the basic skill components have been applied by teacher A. The results of the application of basic questioning skills by teacher A are categorized as less skilled. Meanwhile, the application of teacher B's basic questioning skills is shown in Table 4. Based on Table 4, overall, the basic questioning skill components have been implemented by teacher B. Both teachers have implemented the component of expressing questions clearly and briefly. Based on the observations made, the questions posed could be answered directly by the students. This shows that the teacher's questions could be understood by the students. This is in line with Nurlaili, who states that the questions posed by pre-service teachers are simple and understandable for students, so that students are stimulated to think.
The application of the component of providing references by teacher A is categorized as skilled enough, with a percentage of 55.14%. Based on the observation, this was caused by teacher A's lack of mastery of the material taught; teacher A only used the prepared teaching materials and tended to ask questions directly without conveying information first. Meanwhile, teacher B was categorized as skilled, with a percentage of 73.71%. The third component is focusing. Teacher A's application of focusing questions is categorized as skilled enough, with a percentage of 64.76%, while teacher B applied this component skillfully, with a percentage of 78.79%. This can be seen when the two teachers asked questions that led to certain answers. This is as stated by Nurlaili: pre-service teachers also focus questions on one answer. The questions asked by the teachers have a narrow scope, so they require students to pay more attention to specific things. The fourth basic questioning component is giving turns and distributing questions. In the application of this component, the two teachers were the least skilled. During the class observations, the teachers directed each question to only one student; if that student's answer was inaccurate, they added to and completed the answer themselves instead of asking other students to respond to the first student's answer. This finding is in line with Luzyawati, who states that the questions asked by pre-service teachers are not evenly distributed. They tend to ask a student who is considered able to answer because, through that student's answer, the other students will hear an answer that is easier to understand. The fifth component is giving time to think (wait time). It was done very well by the two pre-service teachers: they waited for about 5 seconds, or until the students raised their hands, before asking one of them to answer. The next component is guiding. Both teachers applied it very poorly. If the students found it difficult to answer a question, both teachers often repeated the same question and did not explain the material related to it; it was not uncommon for the teachers to answer the questions themselves. This is in line with the finding of Agustina & Saputra that when students tend to be silent after being asked a question, the teacher immediately answers it. The teachers should instead encourage the students to be more active, creative, and independent in thinking to find the answer. Meanwhile, there are positive effects when the wait time after teacher questions (and student responses) is extended (Heinze & Erhard, 2006). For the implementation of avoiding bad habits, both teachers were still very lacking. They often repeated the answers given by the students, many of their questions triggered the students to answer together, and, on several occasions, the pre-service teachers appointed certain students to answer new questions. The Application of Advanced Questioning Skill Components. Basic questioning skills are followed by advanced questioning skills. The application of the advanced questioning skill components by teacher A can be seen in Table 5. Based on the data in Table 5, teacher A has implemented all components of advanced questioning skills; overall, teacher A is skilled enough. The application by teacher B is shown in Table 6. Based on the data in Table 6, teacher B has also implemented all components of advanced questioning skills and, like teacher A, is quite skilled.
The components of changing cognitive guidance and cognitive sequencing have been applied quite skillfully by the two teachers. Based on the observations, they asked more difficult questions; however, the level of the questions asked only reached the applying level. This finding is in line with Luzyawati, who states that the questions asked by pre-service teachers have a low level of difficulty. Fewer higher-order cognitive questions were asked because the teachers did not plan them. Even when novice teachers have good basic questioning skills, their advanced questioning skills still need to be improved because these require in-depth knowledge (Martino & Maher, 1994; Sari & Hasibuan). Darmadi et al. stated that pre-service teachers tend to ask low-level questions; they were not accustomed to asking questions at the evaluating and creating levels. The application of the tracking question component by teacher A can be categorized as skilled enough, while that of teacher B is the least skilled. During the observation, the questions given led to a single correct answer. When a student's answer was inaccurate, or the student had difficulty answering, the pre-service teacher did not direct the student to find the right answer but immediately corrected it. For the component of increasing interaction among students, the two teachers were less skilled. When the students asked questions, the teachers answered immediately instead of passing the question to other students. In addition, the students tended to be passive in asking questions during the learning process. This is in line with Asmira et al., who state that the increase in interaction is not optimal because, during learning, students are still less active in asking questions and tend to be silent and not to participate when asked questions. Reinforcement Skills during Learning Activities. The reinforcement given during the learning process was observed from the video recordings. The reinforcement given tended to vary across the opening, core, and closing activities (Table 7). Based on Table 7, there are differences in the giving of reinforcement in each learning activity. In the opening and core activities, both teacher A and teacher B provided reinforcement in varying amounts during learning. However, in the closing activity, teacher A rarely gave reinforcement while teacher B gave some. These differences occur due to differences in the characteristics of the students in responding to something. The Application of Reinforcement Skill Components. Reinforcement is the giving of a positive response in learning to the positive behavior of students. The analysis of the application of the reinforcement components was formulated as percentages, giving the following results (Table 8). Slightly different from teacher A, teacher B performed verbal and non-verbal reinforcement in a balanced way; the application of reinforcement by teacher B can be seen in Table 9. Table 9 shows that the reinforcement applied by teacher B is not only verbal but also non-verbal. Non-verbal reinforcement is mostly done through body movements in the form of nodding and clapping. Overall, the skill of providing reinforcement has been carried out skillfully. Both pre-service teachers have implemented verbal reinforcement; however, the use of verbal reinforcement tends to be monotonous. The words used are 'correct', 'right', and 'yes'. This is in line with the finding of Hisni et al. that the diction used by teachers in verbal reinforcement is limited.
No other verbal reinforcement diction was found apart from 'good', 'correct', 'right', and 'pretty good'. Monotonous diction tends to be used because it is easy and very common; in addition, the use of such diction alone can already make students feel happy. The application of verbal reinforcement was often followed by non-verbal reinforcement, as shown in Figures 5 and 6. The most frequently applied non-verbal reinforcements are body movements and approaching students. However, there are slight differences between the two pre-service teachers: teacher A often applies body-movement reinforcement in the form of a thumbs-up to students, while teacher B applies body movements in the form of clapping to appreciate students. Reinforcement by approaching is often used by both teachers when students answer questions, hold discussions, or have difficulties in answering. This is in line with Hisni et al., who state that teachers give non-verbal reinforcement in the form of approaching when appointing students to answer the questions asked. Approaching reinforcement is used to motivate the students to think about finding answers and to be active in learning. The finding of Aida & Antoni shows that pre-service teachers used words and statements of praise in class. Fitrianti et al. state that the teachers in their study employed three types of reinforcement: verbal reinforcement, token rewards, and tangible rewards. Symbolic rewards and activity rewards were not used in the teaching and learning process; these strategies were not used because they appear to cost much money and take time. Conclusion. Based on the findings, it can be concluded that there are differences between the two pre-service teachers in the application of questioning and reinforcement skills during learning. In general, the two pre-service teachers have applied the questioning components quite skillfully. In the application of basic questioning skills, pre-service teacher A obtained a percentage of 53.94% (less skilled) and pre-service teacher B obtained 60.25% (skilled enough). In advanced questioning skills, pre-service teacher A obtained 57.78% (skilled enough) and pre-service teacher B obtained 56.62% (skilled enough). Both teachers have applied the reinforcement skill components in the learning process, but they are not yet skilled. The pre-service teachers applied more verbal than non-verbal reinforcement. The reinforcement fulfilled the principles of warmth, enthusiasm, and meaningfulness. It is recommended that pre-service biology teachers gain more experience in applying questioning and reinforcement skills, avoid bad habits during questioning, prepare better for learning, and increase their application of non-verbal reinforcement.
Devann Yao. Club career. Born in New York to an Ivorian father and an Italian mother, Yao moved to Europe at a young age. He first joined FC Metz in France at the age of 13. He spent four years in their youth system before joining AS Livorno in Italy, where he played for their youth and reserve teams. Yao then signed with St. Mirren of the Scottish Championship but left due to a lack of playing time with the first team. Yao briefly returned to the United States to trial with the New York Red Bulls but failed to secure a contract, returning to the United Kingdom to sign with Ipswich Town. However, he was released by Ipswich Town in 2010 without playing any match for the first team. In 2011, Yao went to Borinage in the Belgian Second Division. In 2013, he signed with UR La Louvière Centre. In 2014, after three years in Belgium, Yao headed to Germany, where he joined TSG Neustrelitz in the German fourth tier. After scoring eight goals in 23 matches for Berliner AK 07 in the 2016–2017 season, he signed with SV Meppen in the German 3. Liga. Yao joined FC Victoria Rosport on 1 January 2019, after signing for the club the previous December. He left the club on 10 April 2019. On September 20, 2019, Yao returned to the United States, joining USL Championship side Fresno FC for the remainder of their 2019 season.
Percolation transition of the vortex lattice and c-axis resistivity in high-temperature superconductors. We use the three-dimensional Josephson junction array system as a model for studying the temperature dependence of the c-axis resistivity of high-temperature superconductors, in the presence of an external magnetic field H applied in the c-direction. We show that the temperature at which the dissipation becomes different from zero corresponds to a percolation transition of the vortex lattice. In addition, the qualitative features of the resistivity vs. temperature curves close to the transition are obtained starting from the geometrical configurations of the vortices. The results apply to the cases H > 0 and H = 0. Strong thermal fluctuations and anisotropy make the physics of the vortex lattice in high-Tc materials much richer and more complicated than mean-field theories predict. This shows up, in particular, in the complicated structure of the field-temperature (H-T) phase diagram of the high-Tc's. It seems clear that there is a line in the H-T phase diagram that separates a low-temperature phase (known as the vortex glass phase), where vortex lines are frozen in space, from a high-temperature phase in which the vortex lines move through the material due to thermal activation. The passage to the normal state when increasing temperature is likely to be a crossover instead of a well-defined transition. The curve that separates the low- and high-temperature phases is called the irreversibility line (IL). The V-I characteristics when an external current is applied perpendicularly to the magnetic field are different above and below the IL. Below the IL the V-I curves are well fitted by V ~ exp[-(I_c/I)^mu], with mu and I_c being two parameters, and in particular the resistivity of the system, defined as rho = lim_{I->0} (V/I), is strictly zero. Above the IL the behavior is ohmic, i.e., V ~ I. When the current is applied parallel to the field, the mean force exerted on the vortices is zero. However, there are local forces, due to misalignment of the local magnetic field, that may give rise to dissipation. The most important mechanism for dissipation in this configuration at intermediate temperatures is the thermal activation of vortex loops, which gives an exponentially small voltage V_c, implying zero resistivity. In this work we show that when the temperature is increased there is a phase transition at a temperature T_p that reflects a thermodynamic property of the vortex system and is signaled by the occurrence of a non-zero resistivity. In fact, the I-V characteristics for YBaCuO, when the current and magnetic field are parallel to the c-axis, show the following behavior: for small currents and high temperature the response is ohmic; the range of currents that gives a linear response is reduced as the temperature decreases; and at a well-defined temperature T_p the linear behavior disappears. Moreover, the I-V curves can be scaled onto two universal curves, corresponding to T > T_p and T < T_p respectively. This behavior, which is similar to what occurs when the current is applied in the ab-plane, supports the idea of a thermodynamic transition that we identify with a percolation transition of the vortex system. Experimentally, it is observed that the dissipation in the c-axis appears at different temperatures than in the ab-plane. This implies that the 'irreversibility line' for a current parallel to the field is different from the one corresponding to the ab-plane. Here we explore the following idea.
At zero temperature, the vortices are straight lines, and the net force on each of them when a small current along the c-axis is present is zero. At low temperature, vortex lines start to wander and vortex loops are created by thermal activation. However, if the temperature is not too high, vortex loops and vortex lines are still isolated from each other and the dissipation in the linear regime is zero, except for surface effects (see below). When the temperature is increased, vortex lines and thermally generated vortex loops start to touch each other, and for temperatures greater than a critical value T_p there will be a vortex path crossing the sample along the ab-plane. The net force exerted by the current on this path is different from zero, and a finite dissipation will be observed. In this way we see qualitatively that the existence of paths perpendicular to the current in the sample (i.e., the transversal percolation of the vortex lattice) is crucial for the dissipation in the c-axis. The model used to test this idea is the three-dimensional Josephson junction array on a discrete lattice, which has been described in detail elsewhere. The dynamics of the model is contained in the evolution of the phases theta_i(t), which are defined on the nodes of a cubic lattice and represent the phase of the order parameter. Between nearest-neighbor nodes there are Josephson junctions characterized by a critical current I_0 and a normal resistance R_0. The model is described by two equations: Eq. 1 gives the current j_{ii'} between nearest-neighbor nodes i and i' with phases theta_i and theta_{i'}. Here A_{ii'} is the vector potential of the external magnetic field, and eta_{ii'}(t) is an uncorrelated Gaussian noise which incorporates the effect of temperature. Eq. 2 ensures current conservation at each node, and j_i^ext is the external current applied at node i. The model allows for the existence of vortices, which consist of singularities of the phases theta(t) around a given closed path. Self-inductance and disorder effects are not considered, and the system is taken to be isotropic for simplicity, i.e., I_0 and R_0 are taken constant throughout the lattice. We numerically integrate Eqs. 1 and 2 in time. Voltages at different points of the sample are calculated as the temporal mean value of the time derivative of the phases. The resistivity of the sample in a given direction is calculated by injecting a small external current (typically around 1/20 of the critical current of the junctions) through one of the faces of the sample and withdrawing it from the opposite face. The small value of the external current is chosen in order to stay in the linear regime, in which the voltage drop is proportional to the applied current. The boundary conditions (BC) are taken open in the ab-plane. However, if open BC along the c-axis are used, there will be a finite force on an isolated vortex at finite temperature if the top and bottom ends of the vortex are not aligned. The dissipation caused by this net force, which is non-zero even in the linear regime, turns out to be independent of the thickness of the sample and, in this sense, is only a surface effect. In order to eliminate this spurious surface effect it is crucial to use BC for the c-direction that ensure that each vortex line leaving the sample at a given point of the bottom plane re-enters at the same point of the top plane. Strict periodic BC on the phases have this property; however, with them the voltage difference between the top and bottom planes would be identically zero.
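To make the simulation scheme concrete, the following is a minimal sketch, in Python/NumPy, of the kind of overdamped Langevin dynamics described above. It assumes the standard resistively-shunted-junction (RSJ) form of Eqs. 1 and 2 in reduced units (I_0 = R_0 = 1), uses a simple Landau gauge for the bond vector potentials, and applies plain open boundaries in all directions rather than the special c-axis boundary conditions discussed in the text; the lattice size, field, temperature, drive, and noise discretization are illustrative choices, not the values or code used by the authors.

# Minimal illustrative sketch (not the authors' code) of overdamped RSJ Langevin dynamics
# for a small 3D Josephson-junction array with plain open boundaries in all directions.
# Reduced units: I0 = R0 = 1, temperature T in units of the Josephson energy,
# time step dt in units of the characteristic RSJ time.
import numpy as np

L = 4                    # linear size of the cubic array (kept tiny for illustration)
f = 0.2                  # flux quanta per plaquette (field along the c-axis)
T = 0.5                  # temperature
dt = 0.05
I_drive = 0.05           # small external current per node on the driven faces
rng = np.random.default_rng(0)

N = L ** 3
idx = lambda x, y, z: (x * L + y) * L + z     # node index on the L x L x L lattice

# Bonds between nearest-neighbour nodes; A holds the bond vector potential (Landau gauge).
bi, bj, A = [], [], []
for x in range(L):
    for y in range(L):
        for z in range(L):
            i = idx(x, y, z)
            if x + 1 < L:
                bi.append(i); bj.append(idx(x + 1, y, z)); A.append(0.0)
            if y + 1 < L:
                bi.append(i); bj.append(idx(x, y + 1, z)); A.append(2 * np.pi * f * x)
            if z + 1 < L:
                bi.append(i); bj.append(idx(x, y, z + 1)); A.append(0.0)
bi, bj, A = np.array(bi), np.array(bj), np.array(A)

# Graph Laplacian G of the array: current conservation (Eq. 2) reads G V = rhs.
G = np.zeros((N, N))
for i, j in zip(bi, bj):
    G[i, i] += 1.0; G[j, j] += 1.0; G[i, j] -= 1.0; G[j, i] -= 1.0
G_pinv = np.linalg.pinv(G)     # pseudo-inverse fixes the arbitrary overall voltage offset

# Inject current on the bottom (c = 0) face and withdraw it from the top face.
I_ext = np.zeros(N)
for x in range(L):
    for y in range(L):
        I_ext[idx(x, y, 0)] += I_drive
        I_ext[idx(x, y, L - 1)] -= I_drive

theta = 2 * np.pi * rng.random(N)          # random initial phases

def step(theta):
    gamma = theta[bi] - theta[bj] - A
    I_s = np.sin(gamma)                                         # supercurrents (I0 = 1)
    eta = rng.normal(0.0, np.sqrt(2.0 * T / dt), size=len(bi))  # Langevin noise currents
    rhs = I_ext.copy()
    np.add.at(rhs, bi, -(I_s + eta))       # net current fed into each node by the bonds
    np.add.at(rhs, bj, +(I_s + eta))
    V = G_pinv @ rhs                       # node voltages from current conservation
    return theta + V * dt, V               # dtheta_i/dt = V_i in reduced units

top = [idx(x, y, L - 1) for x in range(L) for y in range(L)]
bot = [idx(x, y, 0) for x in range(L) for y in range(L)]
v_c, n_steps = 0.0, 2000
for _ in range(n_steps):
    theta, V = step(theta)
    v_c += (V[top].mean() - V[bot].mean()) / n_steps   # time-averaged c-axis voltage drop

In a production run the noise strength, boundary conditions, and voltage measurement would of course follow the prescriptions of the original work; the point of the sketch is only to show how Eq. 2 reduces, at each time step, to a linear (Laplacian) problem for the node voltages.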
We use, instead, open BC for the mean value of the phases in the top (theta_T) and bottom (theta_B) planes, and periodic BC for all the phase differences theta_i^T - theta_j^T and theta_i^B - theta_j^B. This guarantees the periodicity of the vortex configurations and permits the calculation of the c-axis resistivity. We have to define a criterion for percolation: in our model there is a typical length, the lattice parameter a, and distances smaller than a cannot be resolved. Flux conservation implies that every flux line going into a unit cell of our lattice also goes out of the cell. When two vortices go into the same elemental cell we cannot tell which of the two outgoing vortices corresponds to each of the ingoing vortices. We interpret this situation as the meeting of two vortex lines; in a real material this corresponds to two vortex lines being at a distance smaller than the vortex core size. At high enough temperatures the vortex structure may percolate perpendicularly to the applied field: starting from one side of the sample we can follow a vortex line and arrive at the opposite side of the sample. Due to the finite size of the systems used, and to the dynamical evolution, percolation is not expected to occur at all times, but only during a given fraction of the total time, which depends on temperature. We evaluate the probability that there exists a vortex line crossing the system from one side to the opposite one as a function of temperature. Because a sharp percolation transition can only be seen in the thermodynamic limit, we perform a scaling analysis with the size of the system. In Fig. 1(a) we show the resistivity of a cubic (L_ab x L_ab x L_c, with L_ab = L_c = L) sample for an external field of 0.2 (in units of flux quanta per plaquette) as a function of temperature (measured in units of the Josephson energy of the junctions) for three different system sizes: L = 8, 16, and 24. For comparison, the resistivity when the current is applied perpendicularly to the field is also shown for the case L = 8. It is clearly seen that the onset temperature T_p for the dissipation in the c-axis is higher than the one corresponding to the ab-plane. Fig. 1(b) shows the probability that the vortex lines have percolated through the sample along the ab-plane. We see a percolation transition around T_p that becomes narrower as the size of the system increases. This indicates that there exists a sharp percolation transition in the thermodynamic limit. As an additional check, in the inset of Fig. 1(b) the data of Fig. 1(b) are plotted vs. a rescaled variable x~ = L_ab^(1/2-eps) (1 - exp(-Delta/T))^(L_c), where eps = 0.7 and Delta = 3.75 are numerically found parameters. This scaling comes from a simple model of the percolation. It strongly suggests that a (percolative) thermodynamic phase transition is occurring in the system. By comparing Figs. 1(a) and (b) it can be seen that the temperature T_p at which the c-axis resistivity starts to be different from zero is the same temperature at which the percolation probability becomes finite. This indicates, as qualitatively discussed above, that the percolation transition is a necessary condition for the existence of dissipation in the c-direction. In addition, we would like to have a more quantitative estimation of the resistivity, based on the geometrical configurations of the vortex system.
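The percolation criterion described above can be illustrated with a short sketch. It assumes that the vortex content of each unit cell has already been extracted from the phase configuration (by summing gauge-invariant phase differences around the plaquettes), so that a cell is marked as occupied whenever a vortex segment passes through it; transverse percolation is then detected when a cluster of occupied cells, connected through cell faces, joins two opposite faces of the sample perpendicular to the field. The helper below is a hypothetical illustration, not the authors' code.

# Hypothetical helper: detect transverse (ab-plane) percolation of vortex-occupied cells.
# 'occupied' is a boolean array of shape (Lab, Lab, Lc); a cell is True if a vortex
# segment passes through it. Percolation along x means a connected cluster of occupied
# cells joins the x = 0 and x = Lab - 1 faces (face-adjacent connectivity).
import numpy as np
from collections import deque

def percolates_along_x(occupied):
    Lx, Ly, Lz = occupied.shape
    seen = np.zeros_like(occupied, dtype=bool)
    queue = deque((0, y, z) for y in range(Ly) for z in range(Lz) if occupied[0, y, z])
    for c in queue:
        seen[c] = True
    while queue:
        x, y, z = queue.popleft()
        if x == Lx - 1:
            return True
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < Lx and 0 <= ny < Ly and 0 <= nz < Lz \
                    and occupied[nx, ny, nz] and not seen[nx, ny, nz]:
                seen[nx, ny, nz] = True
                queue.append((nx, ny, nz))
    return False

# Percolation probability at a given temperature: fraction of sampled vortex
# configurations (e.g. snapshots of the dynamical run) that percolate transversally.
def percolation_probability(snapshots):
    return np.mean([percolates_along_x(s) for s in snapshots])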
This can be accomplished in the following way. Let us consider a sample of size L_c (L_ab) in the c- (ab-) direction. The resistivity of the sample in the c-direction is proportional to the number of paths n per unit area that cross the sample in the ab-plane, times the velocity v these paths acquire under the external force, divided by the external current density j: rho ~ nv/j. The velocity v is given, using a viscous-fluid argument, by the external force F divided by a total viscosity, which is equal to a specific viscosity coefficient eta_0 times the total length of the vortex path, which we call l, i.e., v = F/(eta_0 l). The force F is given in terms of the external current and the size of the system: F ~ j L_ab. We obtain rho ~ n L_ab/(eta_0 l). The coefficient eta_0 depends on temperature; however, over small ranges near the percolation threshold we take it as a constant. The determination of n and l is a difficult task, because the percolation paths across the sample are not uniquely defined due to the crossing of vortex lines (see Fig. 2(a)). We use the following estimation: we assume that n l L_ab L_c is the volume S of the percolation cluster in a sample of volume L_ab x L_ab x L_c. The value of S can be easily evaluated from the numerical simulation. We obtain rho ~ S/(eta_0 l^2 L_c). It remains to estimate the value of l. This length depends both on temperature and on the size of the system. As we said, a direct numerical determination of l is difficult due to indeterminacies at the crossing points of the vortex lines. We use the crudest estimation (see Fig. 2): when the magnetic field H is close to zero, i.e., H < H_cross, where H_cross is a crossover field defined below, we take l ~ L_ab. However, for H > H_cross, percolation proceeds via the vortices generated by the external field and the length of a percolation path is much larger; it can be estimated as l ~ L_c L_ab/H^(-1/2). In this way we obtain the scaling of the resistivity near the percolation threshold given in Eqs. 3 and 4. This scaling is expected to be valid only close to the percolation threshold. The crossover field H_cross is estimated as H_cross ~ 1/L_c^2, and corresponds to the zero-temperature lattice parameter of the vortex structure being equal to the thickness of the sample. (The value of this field is about 20 G for a 1 micron thick sample.) For the value H = 0.2 used in Fig. 1 we are in the case H >> H_cross for all the values of L_c considered. In order to check the previous estimations, in Fig. 3(a) we compare the values of rho L_ab^2 L_c^3 and S vs. temperature when L_ab is varied between 16 and 30, for H = 0.2. In Fig. 3(b), rho L_ab^2 L_c^3 and S vs. temperature are compared when L_c is varied between 12 and 24, for the same field H = 0.2. The only free parameter of the fit is a global factor, which is the same in Figs. 3(a) and 3(b). The agreement between the numerically calculated values and the estimated ones close to the threshold is fairly good if we take into account all the approximations made in order to obtain Eqs. 3 and 4. A more precise estimation of the resistivity using only the geometrical configurations of vortex lines seems to be difficult because of the following facts: the percolation paths across the sample are not uniquely defined (see Fig. 2(a)), and the real movement of vortex lines under the external force will depend on the cutting energy. The viscosity eta_0 is not a constant, but a function of temperature.
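Under the assumptions above, the geometrical estimate of the resistivity reduces to a few lines of code. Here S stands for the measured volume of the percolating vortex cluster (in units of a^3), the two branches of l correspond to the low- and high-field regimes of Eqs. 3 and 4 as reconstructed above, and the unknown viscosity eta_0 and all prefactors are absorbed into an overall constant, so only relative values and their scaling with size and field are meaningful; this is a sketch of the estimate, not the authors' implementation.

# Sketch of the geometrical estimate rho ~ S / (eta0 * l**2 * Lc), with the crude
# choice of the path length l discussed in the text. All prefactors (including the
# unknown viscosity eta0) are absorbed into an overall constant, so only relative
# values are meaningful.
def resistivity_estimate(S, Lab, Lc, H):
    H_cross = 1.0 / Lc ** 2            # crossover field, Hcross ~ 1/Lc^2
    if H < H_cross:
        l = Lab                        # low field: path length ~ sample width
    else:
        l = Lc * Lab * H ** 0.5        # high field: l ~ Lc * Lab / H^(-1/2)
    return S / (l ** 2 * Lc)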
In addition, the supposition of a phenomenological viscous motion of vortex lines may not be accurate at low temperatures, when vortices creep. The existence of two resistive transitions (in the c-axis and the ab-plane) has been experimentally observed in YBaCuO. The values of the two characteristic temperatures depend on the pinning, the vortex elasticity, and the magnetic field. In YBaCuO, as the thickness of the sample increases the two temperatures become closer to each other. In our simulations we find that the temperature at which the percolation transition occurs decreases as ~ 1/ln(L_c), as can be deduced from the scaling in the inset of Fig. 1(b). The thermal excitations in the form of vortex lines crossing the sample along the ab-plane destroy the phase coherence along the c-axis. For T > T_p the coherence length xi_c is of the order of the mean distance between percolation paths, i.e., xi_c ~ L_c/n^(1/2). We conclude that the mechanism that leads to the 2D-3D transition in high-T_c materials with moderate anisotropy is the percolation of vortex lines perpendicular to the external field. In summary, for a model high-temperature superconductor we have shown, by using qualitative arguments and numerical simulations, that the onset of the resistivity in the c-direction is related to a percolation transition of vortex lines in the ab-plane. The results hold for H different from 0 and H = 0. A qualitative estimation of the resistivity near the threshold, and of its finite-size scaling, has been given. For the sizes of the isotropic systems used, the percolation transition occurs at a higher temperature than the resistive transition in the ab-plane, and corresponds to a new thermodynamic transition that should be characterized by new critical exponents, different from those obtained for the vortex glass transition when the current is applied parallel to the ab-plane. We expect these results to be valid also for anisotropic systems, at least in the case of moderate anisotropy, as in YBaCuO. We acknowledge D. López and F. de la Cruz for helpful discussions and critical reading of the manuscript. E. A. J. is supported by CONICET. C. A. B. is partially supported by CONICET.
Turbulence models' impact on the flow and thermal analyses of jet impingement. Accurate numerical reconstruction of heat and mass transfer processes in particular applications, such as jet impingement, is difficult to obtain even with modern computational methods. In the present paper, the flow and thermal phenomena occurring during single minijet impingement on flat, concave, and convex heated surfaces are considered. The problem of impingement on non-flat surfaces, still uncommon and poorly described in the literature, can be of great importance in engineering applications such as heat exchangers. Numerical analyses, based on the mass, momentum, and energy conservation laws, were conducted with the OpenFOAM software. Focus was placed on proper model construction, in which turbulence and boundary layer modelling were crucial due to their significance in the heat transfer processes. The analysis of the results obtained with RANS models focused mostly on the comparison of turbulence and hydrodynamic parameters. Introduction. The modern world and its technical development are strictly connected with attempts to improve the utilization of energy resources. One of the trends is to apply methods known from one branch of engineering in another. An example is jet impingement. It has been used in the cooling systems of electronics and in some metallurgical processes. Recently, however, it was proposed in a novel construction of a cylindrical heat exchanger, which resulted in very promising values of transferred heat rates and lower hydraulic resistance than in other similar devices. In the abovementioned device, about 1000 orifices generated minijets with a core diameter of ~1 mm. Due to the complexity of the system, its experimental investigation can be very difficult to perform, especially when detailed small-scale flow phenomena are the main goal of the research. Until now, only general data has been available, concerning macroscale parameters such as the already mentioned pressure losses or the overall heat transfer efficiency. Understanding the phenomena occurring in the device requires a more detailed analysis, also at the microscale. For such cases numerical methods are very helpful. However, even now they demonstrate a lack of accuracy in some scientific problems; jet impingement is unfortunately one of them. According to Zuckerman, proper prediction of flow and thermal parameters in numerical modelling of jet impingement depends significantly on the type of turbulence model chosen for the simulation. In his paper it can be found that the best results for RANS simulations are obtained with the v2-f model, which is a four-equation, enhanced k-epsilon type model. While the hydrodynamics of the phenomenon is predicted with a decent level of accuracy by RANS methods, heat transfer causes problems. Hadiabdi confirmed this statement in his doctoral thesis; moreover, he concluded that even LES methods exhibited a lack of accuracy when used to predict heat transfer in jet impingement. Before analyzing complex systems, it is essential to identify the advantages and disadvantages of the numerical methods that could be applied. The following paper regards the analysis of a single minijet that impinges on various surfaces: flat, convex, and concave. Their geometry was chosen on the basis of previous research, as well as literature data. While the available information for the flat case is generally broad, non-flat impingement is still not a popular and widely described scientific topic.
Correlations that could be used for the problems considered here do not exist. The goal of the paper was to identify the impact of the turbulence model on the results and to describe the differences between impingement on the various surfaces, since such knowledge is essential to correctly analyze arrays of minijets, existing for example in heat exchangers. Mathematical model and geometry. Steady-state, single-phase, two-dimensional axisymmetric analyses were performed. High-resolution, second-order discretization schemes were applied to provide sufficiently accurate results. The conservation laws of mass (Equation 1), momentum (Equation 2), and energy (Equation 3) were applied, using the Reynolds averaging approach. All variables marked with an overline represent time-averaged values. The conservation laws were coupled with various turbulence models, of both high- (k-epsilon and v2-f) and low-Reynolds (SST k-omega) types, where u is the velocity, m/s; rho is the density, kg/m^3; p is the pressure, Pa; mu is the dynamic viscosity, Pa*s; S_ij is the strain rate tensor, 1/s; u_i''u_j'' is the Reynolds stress term, m^2/s^2; E is the total energy, J; T is the temperature, K; and lambda_ef is the effective thermal conductivity, W/(m*K). All cases were axisymmetric. Moreover, the values of H, the height between the orifice exit and the stagnation point, and R, the radius of curvature, were chosen to be multiples of the orifice diameter D. The multiplication factors used in the paper are listed in Table 1. Flat surface, validation case. Numerical analyses of jet impingement can be compared, for example, with the ERCOFTAC benchmark cases, describing the experimental work by Cooper et al. and Yan. They represent a situation in which air impinges on a flat surface heated with a constant heat flux. The orifice-to-surface distance H was equal to two orifice diameters, 2D. The heat flux at the surface was equal to 1000 W/m^2. The Reynolds number at the exit of the orifice, defined as Re = rho*u*D/mu (Equation 4), was equal to 23000, and, in addition, the flow there was fully developed, which was achieved with the mapped inlet boundary condition. The parameter D in Equation 4 is the jet impingement characteristic length, denoting the orifice diameter. Table 1 presents the boundary conditions for the validation case; they were also used for the non-flat cases in Section 4. Numerous publications were based on those results, among which the papers by Behnia et al. were chosen as the reference case. Both papers confirmed that the v2-f model presents the best performance when analyzing jet impingement. Moreover, the main drawback of the standard k-epsilon model, which is the overproduction of turbulence kinetic energy in the stagnation zone, was also confirmed. Results obtained with the v2-f model were compared with the standard k-epsilon and SST k-omega models. The first two were used in low-Reynolds mode, while SST k-omega was used in combination with wall functions, to check their performance. Initially, for the validation case, it was important to fulfil the mesh requirements for jet impingement simulations. The mesh construction process was time consuming and required not only knowledge of the phenomena but also a trial-and-error approach. Regular mesh independence checks were performed as well. The final spatial division was chosen on the basis of the comparison of the obtained results with the benchmark data. Table 2 presents the numbers of mesh elements chosen to analyze the process. The results were compared in terms of the Nusselt number, Nu = alpha*D/lambda, where alpha is the convective heat transfer coefficient, W/(K*m^2), and lambda is the thermal conductivity, W/(K*m).
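For reference, the non-dimensional groups used in the validation can be evaluated directly from their definitions. The sketch below assumes the standard forms Re = rho*u_b*D/mu and Nu = alpha*D/lambda with alpha = q/(T_wall - T_ref); the air properties are rough room-temperature values and the orifice diameter is back-calculated from Re = 23000 at the reported bulk velocity of 34.5 m/s, so the numbers are illustrative rather than taken from the paper.

# Illustrative evaluation of the non-dimensional groups used in the validation case.
# Air properties are assumed (approximate values at ~300 K), not taken from the paper.
rho = 1.18        # density, kg/m^3
mu = 1.85e-5      # dynamic viscosity, Pa*s
lam = 0.026       # thermal conductivity, W/(m*K)
D = 0.0105        # orifice diameter, m (back-calculated so that Re ~ 23000 at u_b = 34.5 m/s)

def reynolds(u_b, D, rho=rho, mu=mu):
    """Re = rho * u_b * D / mu (assumed form of Equation 4); D is the orifice diameter."""
    return rho * u_b * D / mu

def nusselt(q_wall, T_wall, T_ref, D, lam=lam):
    """alpha = q / (T_wall - T_ref); Nu = alpha * D / lam (assumed forms of Equations 5-6)."""
    alpha = q_wall / (T_wall - T_ref)
    return alpha * D / lam

print(reynolds(u_b=34.5, D=D))                                  # ~ 2.3e4
print(nusselt(q_wall=1000.0, T_wall=320.0, T_ref=300.0, D=D))   # illustrative local Nu ~ 20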
As can be seen, depending on the mesh size and settings, very different results were obtained. Figure 5 shows selected results from Figure 4, compared with the benchmark data: the experimental results by Yan and the numerical results by Behnia et al. Results obtained with SST k-omega are also presented there, to check the performance of the wall functions. Only the values calculated with Mesh 2 were included, as they exhibited the best agreement with the benchmark data. Moreover, the results presented in Section 4 were also obtained using the Mesh 2 construction process. As can be noticed, far from the stagnation region all presented results are almost the same. However, large discrepancies occur in the stagnation zone. The k-epsilon model overpredicted the heat transfer significantly, but this effect was expected. On the other hand, the results obtained with the OpenFOAM v2-f model also did not reflect those obtained by Behnia et al., especially in the region where the distance from the stagnation point is slightly larger than the orifice radius. The explanation for that discrepancy can be found in the paper by Billard and Lawrence. From a theoretical point of view, they analyzed the evolution of the v2-f models, because, depending on the particular implementation, various results can be obtained. They described that some models were adjusted to use a very robust Dirichlet boundary condition at the wall for the elliptic relaxation function f. This made it possible to use them in segregated solvers, commonly applied in many commercial codes and in OpenFOAM. However, this method led to omitting one term in the relaxation equation and, as a result, to another overprediction of the velocity scale v2. It explains the issue with the Nusselt number values presented in Figure 5. In the papers by Behnia et al., their own solver was used, so they were able to use the original v2-f model, which does not have such drawbacks. Still, a difference between their results and the experiment can be noticed, which possibly may never be fully avoided, as described in the literature. Another reason for the presented differences may lie in the characteristics of the fully developed flow. Behnia et al. included a figure (their Figure 14) in which the impact of different orifice-exit velocity and turbulence profiles on the Nusselt number distribution at the impinged surface was very significant. Another conclusion is related to the results obtained with the SST k-omega model. In that case the heat transfer was underpredicted in the stagnation region and correct far from it. However, because wall functions are only a simplification of the actual heat and mass transfer processes, this model is not analyzed and described in the next sections. Apart from the thermal parameters, the flow prediction by particular numerical models is also very important for the validation of results. Figure 6 presents numerically obtained velocity profiles compared with experimental ones at various distances from the stagnation point, x/D. At a bulk Reynolds number in the orifice equal to 23000, the bulk air velocity u_b was equal to 34.5 m/s. Analysis of Figure 6 leads to the conclusion that, contrary to the thermal parameters, the flow behavior was predicted well by the v2-f model implemented in OpenFOAM and used in the presented studies. Differences between it and the standard k-epsilon model are also clearly visible. The velocity profiles obtained in the reference studies were very similar, including the noticeable difference between the experimental and numerical results starting at a ratio H/D higher than 0.2, for x/D = 1 and 2.5.
In Figure 7, the turbulence kinetic energy distribution for the validation case is presented. Its budget can be written as in Equation 7, where mu_t is the eddy viscosity, Pa*s, and epsilon is the turbulence dissipation rate, m^2/s^3. Two terms, production and dissipation, are emphasized, as they are used in Section 4 for the data presentation. The k-epsilon model was characterized by an overproduction of turbulence kinetic energy in the stagnation region; its maximum was located there. On the other hand, usage of the v2-f model caused the maximum to move outside this region. This reflects the real-life situation that can be observed in [5]. As mentioned at the beginning of Section 3, the most common implementation of the v2-f model is not able to properly limit the excessive production of turbulence in the stagnation zone. That is the reason for the relatively high values of turbulence kinetic energy visible in Figure 7(b), which do not occur in [5]. Nevertheless, the v2-f model was chosen for the next analyses, presented in Section 4. Before analyzing arrays of jets that impinge on non-flat surfaces, it is important to define the influence of the surface shape on the flow behavior. In earlier work, the authors tried to define the critical curvature-radius-to-orifice-diameter ratio R/D for which the curvature effect plays a role. It was concluded that, depending on the type of surface and on this ratio, a difference in heat transfer occurs between impingement on the particular non-flat surface and on a flat surface. Moreover, with increasing ratio the values tended to vary less, since the curvature of the stagnation zone became almost negligible. However, that work did not contain hydrodynamic data, which are also very important. It was noticed that for a ratio of surface curvature radius to orifice diameter equal to 4 the curvature effect is noticeable; therefore, this ratio was selected for the following studies. In Figure 8, a comparison of normalized (in the same manner as in Figure 5) velocity profiles is presented, depending on the type of surface: flat, concave, or convex. For radial distances x/D higher than 1, only slight differences occur. However, when analyzing the results in Figures 8(a) and 8(b), it can be seen that, depending on the type of surface, the flow behaves in a different way. The stagnation zone is therefore the area where the most noticeable discrepancies take place. This is especially visible in Figure 8(b), for height values H/D >= 0.25. To be able to provide the data described above, it was important to propose a method of comparison between flat and non-flat surfaces. For that purpose, the distance x was measured as the straight chord connecting the stagnation point and the particular point on the curve; moreover, it was verified that the difference between its length and the length of the arc connecting both points was negligible. In Figure 9, the distribution of the turbulence kinetic energy during impingement on concave (Figure 9(a)) and convex (Figure 9(b)) surfaces is shown. Its maxima are located outside the stagnation zone, as in the flat case. In Table 3, the locations of the turbulence kinetic energy maxima are listed, in relation to the orifice diameter D. They were established for the flat and non-flat cases in the same manner as the data from Figure 8. Differences between the cases exist; this is an important aspect to mention, because it may influence the results when jet array impingement occurs, as in the heat exchanger discussed in the introduction. Multiple jets might strongly interfere with each other.
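To make the two emphasized budget terms concrete, the short sketch below evaluates the usual eddy-viscosity (Boussinesq) form of the production term, P_k = 2*nu_t*S_ij*S_ij, from a pointwise velocity-gradient tensor, with the dissipation rate epsilon taken directly from the turbulence model; this is a generic illustration of the quantities compared in Figure 10, not the exact discretized forms used in OpenFOAM.

# Generic illustration of the two emphasized budget terms: production P_k = 2*nu_t*Sij*Sij
# (Boussinesq/eddy-viscosity form) and dissipation epsilon (supplied by the turbulence model).
import numpy as np

def production(grad_u, nu_t):
    """grad_u: 3x3 velocity-gradient tensor du_i/dx_j at a point; nu_t: eddy viscosity, m^2/s."""
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor S_ij, 1/s
    return 2.0 * nu_t * np.sum(S * S)      # P_k, m^2/s^3

# Example: simple shear du/dy = 100 1/s with nu_t = 1e-3 m^2/s.
grad_u = np.array([[0.0, 100.0, 0.0],
                   [0.0,   0.0, 0.0],
                   [0.0,   0.0, 0.0]])
print(production(grad_u, nu_t=1e-3))       # = 2*nu_t*(2*50^2) = 10.0 m^2/s^3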
In Figure 10, two terms of the turbulence kinetic energy budget are presented along the curvature (or segment) distance from the stagnation point, at the height normal to the impinged surface where the maxima from Table 3 occurred. In the stagnation zone, the highest production exists in the convex case, followed by the flat and concave ones. For the dissipation, the situation is opposite. However, in the regions close to the highest values of turbulence kinetic energy, located at the distances x/D listed in Table 3, the highest values of both production and dissipation were obtained for the convex surface. In general, the behavior presented in the plots is quite similar for each situation. Summary. In this paper, thermal and hydrodynamic analyses of jet impingement on flat and non-flat surfaces were presented. Both the boundary conditions and the scope of interest were based on previous work. Mesh and software configurations were determined using the ERCOFTAC data. The comparison of velocity profiles and turbulence kinetic energy budgets revealed important differences. In the authors' opinion, those variations would matter when a whole jet array impinging on flat and non-flat surfaces is analyzed. Different turbulence models were considered; however, as shown in the literature, v2-f gave the best results. While the RANS models presented in this paper can reveal important information, they should be verified and extended by more comprehensive methods, such as the LES approach.
Fostering Interdependence to Minimise Political Risks in a European-North African Renewable Electricity Supergrid Abstract The option of decarbonisation of the European power sector with the help of significant imports of renewable electricity from North Africa via a trans-continental electricity Supergrid is increasingly gaining attention. In this paper, we investigate the geopolitical risks to European energy security in such a future, and discuss cornerstones for possible policy strategies to reduce these risks. The strategies are rooted in the interdependence between exporter and importer. We come to the conclusion that fostering and deepening, as opposed to reducing, the dependence of both sides on each other may be a valuable and powerful way to reduce the geopolitical risks of renewable electricity trade between Europe and North Africa.
def l2_pixel_loss(self, matches_b, non_matches_b, M_pixel=None):
    """Clamped L2 pixel distance between each sampled non-match and the ground-truth
    match location it was sampled against, normalised by M_pixel. Assumes torch is imported
    and that this is a method of a class providing _config and flattened_pixel_locations_to_u_v."""
    if M_pixel is None:
        M_pixel = self._config['M_pixel']

    # Integer division: each match has the same number of sampled non-matches.
    num_non_matches_per_match = len(non_matches_b) // len(matches_b)

    # Repeat each match index once per non-match, so the two tensors line up element-wise.
    ground_truth_pixels_for_non_matches_b = torch.t(
        matches_b.repeat(num_non_matches_per_match, 1)).contiguous().view(-1, 1)

    # Convert flattened pixel indices to (u, v) image coordinates.
    ground_truth_u_v_b = self.flattened_pixel_locations_to_u_v(ground_truth_pixels_for_non_matches_b)
    sampled_u_v_b = self.flattened_pixel_locations_to_u_v(non_matches_b.unsqueeze(1))

    # L2 distance in pixel space, clamped at M_pixel and normalised to [0, 1].
    norm_degree = 2
    squared_l2_pixel_loss = 1.0 / M_pixel * torch.clamp(
        (ground_truth_u_v_b - sampled_u_v_b).float().norm(norm_degree, 1), max=M_pixel)

    return squared_l2_pixel_loss, ground_truth_u_v_b, sampled_u_v_b
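A self-contained, hypothetical usage sketch is given below; the surrounding class, its _config dictionary, and the flattened_pixel_locations_to_u_v helper (here assuming a flat index equal to v*width + u) are illustrative assumptions, not part of the original snippet.

# Hypothetical usage sketch for l2_pixel_loss. The class, its _config, and the
# flat-index-to-(u, v) convention are assumptions made for illustration only.
import torch

class PixelwiseLoss:
    def __init__(self, image_width, image_height, M_pixel):
        self._config = {'M_pixel': M_pixel}
        self._w, self._h = image_width, image_height

    def flattened_pixel_locations_to_u_v(self, flat_idx):
        # Assumed convention: flat index = v * width + u.
        u = flat_idx % self._w
        v = torch.div(flat_idx, self._w, rounding_mode='floor')
        return torch.cat((u, v), dim=1)

    l2_pixel_loss = l2_pixel_loss   # reuse the method defined above

loss_fn = PixelwiseLoss(image_width=640, image_height=480, M_pixel=100)
matches_b = torch.randint(0, 640 * 480, (8,))        # 8 matched pixels (flat indices)
non_matches_b = torch.randint(0, 640 * 480, (24,))   # 3 sampled non-matches per match
loss, gt_uv, sampled_uv = loss_fn.l2_pixel_loss(matches_b, non_matches_b)
print(loss.shape)   # torch.Size([24]): clamped, normalised pixel distance per non-match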
// packages/logger/src/options.ts
export interface LoggerOptions {
  /**
   * Prefix prepended to log output
   */
  prefix: string;
  /**
   * Whether to display the log type
   */
  type: boolean;
}

export const defaultOptions: LoggerOptions = {
  prefix: "",
  type: true,
};
package de.fuberlin.wiwiss.d2rq.values; /** * Custom translator between database values and RDF values. * Implementations of this interface can be used within d2rq:TranslationTables. * <p> * A Translator defines a 1:1 mapping between database and RDF values. * Mappings that are not 1:1 in both directions are not supported. * <p> * The type of the RDF node (URI, blank node, literal) is not specified by the translator, * but by the d2rq:ClassMap or d2rq:PropertyBridge that uses the d2rq:TranslationTable. * <p> * Translator implementations can have two kinds of constructors: * <ul> * <li>A constructor that takes a single argument, a Jena {@link org.apache.jena.rdf.model.Resource Resource}. * A resource representing the d2rq:TranslationTable will be passed to the * constructor and can be used to retrieve further setup arguments from the mapping file.</li> * <li>A constructor that takes no arguments.</li> * </ul> * Translators are instantiated at startup time, not at query time. * Performance is not critical. * * @author <NAME> (<EMAIL>) */ public interface Translator { Translator IDENTITY = new Translator() { @Override public String toRDFValue(String dbValue) { return dbValue; } @Override public String toDBValue(String rdfValue) { return rdfValue; } @Override public String toString() { return "identity"; } }; /** * Translates a value that comes from the database to an RDF value (URI, literal label, or blank node ID). * The mapping must be unique. * * @param dbValue a value coming from the database * @return the corresponding RDF value, or <tt>null</tt> if no RDF statements should be created from the database value */ String toRDFValue(String dbValue); /** * Translates a value that comes from an RDF source (for example a query) to a database value. The mapping must be unique. * * @param rdfValue a value coming from an RDF source * @return the corresponding database value, or <tt>null</tt> if the RDF value cannot be mapped to a database value */ String toDBValue(String rdfValue); }
Differential Vascularity in Genetic and Nonhereditary Heterotopic Ossification Introduction. Nonhereditary heterotopic ossification (NHO) is a common complication of trauma. Progressive osseous heteroplasia (POH) and fibrodysplasia ossificans progressiva (FOP) are rare genetic causes of heterotopic bone. In this article, we detail the vascular patterning associated with genetic versus NHO. Methods. Vascular histomorphometric analysis was performed on patient samples from POH, FOP, and NHO. Endpoints for analysis included blood vessel (BV) number, area, density, size, and wall thickness. Results. Results demonstrated conserved temporal dynamic changes in vascularity across all heterotopic ossification lesions. Immature areas had the highest BV number, while the more mature foci had the highest BV area. Most vascular parameters were significantly increased in genetic as compared with NHO. Discussion. In sum, both genetic and NHO show temporospatial variation in vascularity. These findings suggest that angiogenic pathways are potential therapeutic targets in both genetic and nonhereditary forms of heterotopic ossification.
Alfred Vierkandt's notion of the social group. German sociologist Alfred Vierkandt is hardly remembered today. This may seem surprising. Several prominent sociologists from the German-speaking countries contributed to the Handwörterbuch der Soziologie, which Vierkandt edited and published. However, Vierkandt did not interact with any of them significantly, and this publication brought no recognition of the importance of his sociological oeuvre in Germany, the United States, or elsewhere. His key notion of the social group found no acknowledgment among other contemporary or later sociologists, even though several of them used this notion and discussed social groups in their own writings. Moreover, those who paid close attention to his writings, like Abel and Hochstim, evaluated them quite critically. Both before and after World War II, Vierkandt remained a solitary and relatively unknown author.