Scalable QoS Degradation Locating from End-to-End Quality of Flows on Various Routes. Methods have been proposed to infer the locations of QoS degradation from the end-to-end quality of flows. These methods find the minimum set of links that covers all bad-quality flows and infer those links as the locations of QoS degradation. Since the computational complexity of finding a minimum set cover is high, these methods do not scale well to large networks. In this paper, we propose a scalable locating method in which a network is (logically) divided into sub-networks. Bad-quality flows crossing sub-network boundaries create dependencies among the sub-networks during inference. By resolving these dependencies, our proposed method enables each sub-network to run its inference algorithm independently, in parallel. Simulation results show that the proposed method reduces the inference time significantly without degrading inference accuracy.
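The minimum-set-cover formulation above is commonly approximated with a greedy heuristic: repeatedly pick the link that lies on the most not-yet-covered bad-quality flows. The sketch below is illustrative only (class and method names are hypothetical, flows are assumed to be given as sets of link ids, and each bad flow is assumed to traverse at least one link); the paper's sub-network partitioning is not shown.

```java
import java.util.*;

// Greedy set-cover heuristic for fault localization: at each step, pick the
// link appearing in the most uncovered bad-quality flows, until every bad
// flow contains at least one suspect link.
public class GreedyLocator {
    // badFlows: each entry is the set of link ids that one bad-quality flow traverses
    public static Set<Integer> locate(List<Set<Integer>> badFlows) {
        Set<Integer> suspects = new HashSet<>();
        List<Set<Integer>> uncovered = new ArrayList<>();
        for (Set<Integer> f : badFlows) uncovered.add(new HashSet<>(f));
        while (!uncovered.isEmpty()) {
            // count how many uncovered flows each link appears in
            Map<Integer, Integer> count = new HashMap<>();
            for (Set<Integer> f : uncovered)
                for (int link : f) count.merge(link, 1, Integer::sum);
            int best = Collections.max(count.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            suspects.add(best);
            // drop every flow now covered by the chosen link
            uncovered.removeIf(f -> f.contains(best));
        }
        return suspects;
    }
}
```

The greedy choice gives the classic ln(n)-factor approximation of the optimum; it is this per-step global scan over all flows that the paper's parallel, per-sub-network decomposition is designed to avoid.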
Australia's unemployment rate edged higher in November to a seasonally adjusted 5.1 per cent, despite market expectations the rate would remain unchanged.
Part-time employment drove a 37,000 increase in the number of people with jobs during the month, better than a consensus expectation of 20,000 extra jobs, but Thursday's data from the Australian Bureau of Statistics showed overall unemployment increased by 12,500 people to 683,100.
This included a 6,400 drop in the number of people in full-time employment.
Overall male unemployment increased by 11,500 persons, and female unemployment increased by 1,000 persons.
Meanwhile, the monthly seasonally adjusted underemployment rate increased to 8.5 per cent, while the monthly underutilisation rate increased to 13.6 per cent.
The November participation rate increased slightly to 65.7 per cent.
The Australian dollar was largely unchanged on the release of the data, buying 71.22 US cents at 11:58am.
What do we know about Balkan endemic nephropathy and uroepithelial tumors? Balkan endemic nephropathy (BEN), a familial chronic tubulointerstitial disease with a slow progression to terminal renal failure, affects people living in the alluvial plains along the tributaries of the Danube River. One of its most peculiar characteristics is a strong association with upper urothelial cancer. An increased incidence of upper urinary tract (UUT) transitional cell cancer (TCC) was discovered among the inhabitants of endemic settlements and in families affected by BEN. In areas where BEN is endemic, the incidence of upper tract TCC is significantly higher, by as much as 100 times, than in non-endemic regions. Until now, several hypotheses have been introduced about the etiopathogenesis of BEN. Only the toxic effect of Aristolochia clematitis has been confirmed as a factor in the occurrence of the disease. Specific biomarkers for an early diagnosis of BEN and UUT-TCC are still lacking. The application of modern molecular and genetic methods to the investigation of the etiopathogenesis and diagnosis of BEN and UUT-TCC should bring improvement in the study of BEN.
package org.interledger.plugin.lpiv2.support;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
/**
* Helper functions for {@code CompletionStage}.
*
* @author <NAME>
* @see "https://stackoverflow.com/questions/49705335/throwing-checked-exceptions-with-completablefuture/"
*/
public final class Completions {
/**
* Prevent construction.
*/
private Completions() {
}
/**
* Returns a {@code CompletionStage} that is completed with the value returned by {@code callable}, running
* {@code callable} on the supplied {@code executor}. If {@code callable} throws an exception, the returned
* {@code CompletionStage} is completed exceptionally with it.
*
* @param <T>      the type of value returned by {@code callable}
* @param callable the operation that computes the value
* @param executor the executor that will run {@code callable}
*
* @return a {@code CompletionStage} holding the eventual result of {@code callable}
*/
public static <T> CompletionStage<T> supplyAsync(Callable<T> callable, Executor executor) {
return CompletableFuture.supplyAsync(() -> wrapExceptions(callable), executor);
}
/**
* Wraps or replaces exceptions thrown by an operation with {@code CompletionException}.
* <p>
* If the exception is designed to wrap other exceptions, such as {@code ExecutionException}, its underlying cause is
* wrapped; otherwise the top-level exception is wrapped.
*
* @param <T> the type of value returned by the callable
* @param callable an operation that returns a value
*
* @return the value returned by the callable
*
* @throws CompletionException if the callable throws any exceptions
*/
public static <T> T wrapExceptions(Callable<T> callable) {
try {
return callable.call();
} catch (CompletionException e) {
// Avoid wrapping
throw e;
} catch (ExecutionException e) {
throw new CompletionException(e.getCause());
} catch (Throwable e) {
throw new CompletionException(e);
}
}
/**
* Wraps exceptions thrown by an operation with {@code CompletionException}.
* <p>
* A {@code CompletionException} is rethrown as-is; any other exception is wrapped in a new
* {@code CompletionException}.
*
* @param runnable the operation to run
*
* @throws CompletionException if the runnable throws any exception
*/
public static void wrapExceptions(Runnable runnable) {
try {
runnable.run();
} catch (CompletionException e) {
// Avoid wrapping
throw e;
} catch (Throwable e) {
throw new CompletionException(e);
}
}
/**
* Returns a {@code CompletionStage} that is completed with the value returned by {@code callable}, running
* {@code callable} on the default executor. If {@code callable} throws an exception, the returned
* {@code CompletionStage} is completed exceptionally with it.
*
* @param <T>      the type of value returned by {@code callable}
* @param callable the operation that computes the value
*
* @return a {@code CompletionStage} holding the eventual result of {@code callable}
*/
public static <T> CompletionStage<T> supplyAsync(Callable<T> callable) {
return CompletableFuture.supplyAsync(() -> wrapExceptions(callable));
}
/**
* Returns a {@code CompletionStage} that is completed when {@code runnable} finishes, running it on the default
* executor. If {@code runnable} throws an exception, the returned {@code CompletionStage} is completed exceptionally
* with it.
*
* @param runnable the operation to run
*
* @return a {@code CompletionStage} that completes when {@code runnable} does
*/
public static CompletionStage<Void> supplyAsync(Runnable runnable) {
return CompletableFuture.supplyAsync(() -> {
wrapExceptions(runnable);
return null;
});
}
}
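A short usage sketch of the pattern above: a {@code Callable} that declares a checked exception can be passed straight in, whereas the plain {@code Supplier}-based {@code CompletableFuture.supplyAsync} would not compile. The demo class below is hypothetical and inlines a minimal copy of the wrapping logic so it compiles standalone.

```java
import java.util.concurrent.*;

public class CompletionsDemo {
    // Mirrors Completions.supplyAsync/wrapExceptions from the class above,
    // inlined here so the example is self-contained.
    static <T> CompletionStage<T> supplyAsync(Callable<T> callable) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return callable.call();
            } catch (CompletionException e) {
                throw e; // avoid double-wrapping
            } catch (Exception e) {
                throw new CompletionException(e);
            }
        });
    }

    public static String demo() {
        // The lambda is a Callable, so declaring a checked IOException is legal
        // here; a Supplier lambda could not throw it.
        CompletionStage<String> stage = supplyAsync(() -> {
            if (System.nanoTime() == -1) throw new java.io.IOException("never happens");
            return "ok";
        });
        return stage.toCompletableFuture().join();
    }
}
```

Callers that {@code join()} the stage see the original checked exception as the cause of the {@code CompletionException}, which is the same unwrapping behavior {@code CompletableFuture} uses internally.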
import java.util.*;
import java.io.*;
public class D {
public static void main(String[] args) throws IOException {
Output out = new Output();
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st;
// three cases:
// 1. (n-1)k < s
// 2. (n-1)(k-1) > s-(n-1)
// 3.
st = new StringTokenizer(br.readLine());
int n_houses = Integer.parseInt(st.nextToken()),
n_steps = Integer.parseInt(st.nextToken());
long total_dist = Long.parseLong(st.nextToken());
int max_dist = n_houses-1;
if ((long)max_dist*n_steps < total_dist || n_steps>total_dist) {
out.println("NO");
out.flush();
return;
}
out.println("YES");
long max_multiple = get_multiple(max_dist, total_dist, n_steps);
long steps1 = max_multiple / max_dist;
long remaining_steps = n_steps-steps1;
int last_house = 1;
for (int i=0 ; i<steps1 ; ++i) {
last_house = print_step(last_house, max_dist, n_houses, out);
}
if (remaining_steps == 0) out.flush();
else if (remaining_steps <= 1) {
print_step(last_house, total_dist-max_dist*steps1, n_houses, out);
out.flush();
} else {
long remaining_dist = total_dist - max_dist*steps1;
last_house = print_step(last_house, remaining_dist-(remaining_steps-1), n_houses, out);
--remaining_steps;
for (int i=0 ; i<remaining_steps ; ++i)
last_house = print_step(last_house, 1, n_houses, out);
out.flush();
}
}
public static long get_multiple(int distance, long total_dist, int n_steps) {
long steps = total_dist / distance;
while (true) {
long remaining_steps = n_steps - steps;
long remaining_dist = total_dist - distance*steps;
if (remaining_steps <= remaining_dist) {
return distance * steps;
}
--steps;
}
}
public static int print_step(int current_house, long dist, int n_houses, Output out) throws IOException {
int dist_to_first = current_house - 1;
int dist_to_last = n_houses - current_house;
if (dist_to_first == 0) {
current_house += dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
}
if (dist_to_last == 0 ) {
current_house -= dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
}
if (current_house - dist < 1) { // houses are 1-indexed, so house 0 is invalid
current_house += dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
}
if (current_house + dist > n_houses) {
current_house -= dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
}
// both directions are in range here; step away from the nearer end
if (dist_to_first < dist_to_last) {
current_house += dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
} else {
current_house -= dist;
out.print(Integer.toString(current_house) + ' ');
return current_house;
}
}
}
class Output {
private BufferedWriter out;
Output() {
this.out = new BufferedWriter(new OutputStreamWriter(System.out));
}
void print(String s) throws IOException {this.out.write(s);}
void println(String s) throws IOException {this.out.write(s + "\n");}
void println(long n) throws IOException {this.out.write(n + "\n");}
void println(int n) throws IOException {this.out.write(n + "\n");}
void println() throws IOException {this.out.write('\n');}
void flush () throws IOException {this.out.flush();}
}
A Revolution in Military Affairs (RMA) versus “Evolution”: When Machines are Smart Enough!
Tom Keeley
There is a general perception that the Operational Environment (OE) will “evolve” as technology evolves at an incremental rate: smaller, cheaper, faster… The “evolution” term is used throughout the Mad Scientist call to action. But there have been revolutions or significant paradigm shifts that have transformed the military in the past: The transition to bows and arrows from clubs allowed engagement away from the target. The transition to guns from bows allowed more accuracy and more power. The emergence of radar from visual sighting allows early detection. Mobile communication extended the reach of command. Airplanes allowed engagement from above. Ballistic missiles delivered more destructive power, allowed engagement from even further away and kept the missile user out of harm’s way. Remotely piloted drones allow the delivery of ordnance into rapidly changing target areas, while still keeping warfighters out of harm’s way. The Internet of Things (IoT) suggests the potential for more connected devices to be used to more rapidly share information. So one could hypothesize a war room where information was coming from everywhere; allowing battle space commanders to allocate resources to achieve their goals while sitting in the comfort of their bunkers (potentially far from the battle zone). A futurist of the past may have suggested pursuing better and better bows and arrows.
This paper suggests that this is an obsolete picture of the future. The previous scenario includes humans-in-the-loop. There is still a perception that humans need to be making all the critical decisions: When should force be applied? How much force should be applied? How much collateral damage is acceptable? … There is a perception that only humans can effectively handle this level of complexity. The fog of war is perceived to be too complex for any machine to handle. A human has approximately 100 billion neurons (brain cells) and 1,000 trillion synaptic connections. We are far from packaging that level of computing into a chip. Right?
There are two ways to approach the future:
1. Look at where you are today, and consider how you want to invest your money to create a solution for tomorrow (evolution).
2. Pick a point in the future and identify the hurdles that have to be overcome to get there. Jump over those hurdles and jump past the evolutionary models.
So, the future capability that will be explained later in this document is that machines will have the ability to remove humans from the real-time decision-making loop. This will greatly speed up the decisions (offensive and defensive). And these decisions are not just the point decisions: how, what, when, where, and why. They are the adaptive command and control decisions: how much, how much over time, how much and where over time. This is an adaptive, analog distribution of force over space and time.
Before we get to the “new” approach, let’s look at a picture of the battlespace as it could be delivered today.
Today’s Commercial-Off-The-Shelf Technology
The hobbyist community has effectively commoditized the drone. While there will be enhancements made to the power system that allows for longer flight, remotely piloted vehicles can be purchased by anyone. With a small upgrade to the drone controller, anyone can use open-source Mission Planner[i] software to plan and execute missions for individual drones based on Google Earth / GPS data. These are not completely adaptive goal seeking systems, but if all goes well, they can perform their human defined mission by moving from point to point and performing selective actions. Another example: The self-driving car developers have learned how to package more real-time observation skills into their ground systems. Even as presently developed, they could be used as weapons.
Another view of Human Intelligence
There may be a perception that since the human brain is so complex, it must be used for a machine to accomplish the same tasks. There are researchers that are pursuing goals to create a machine with the ability to fully emulate a human; however this is not necessarily required for machines of war. A human is an example of a greatly adaptive machine that can be challenged with an almost infinite variety of goals. It can use almost any tool. It can conceive of, and build, new tools. But if we look at the creation of war “systems”, they don’t have to be that capable. And even if you looked at individual humans and assigned them only a selective set of tasks, then what they do is greatly simplified. And if you give up pro-creation responsibilities things get even simpler. In fact, one might suggest that during working hours, a human is really limited in what he/she can do with the tools and information that are available to them. Humans are not responsible for all positions and all knowledge in order to make all tactical and strategic decisions. A pilot flying an aircraft has only a few controls at his/her disposal. The pilot has to decide where to go and what to do, but their options are really limited.
The Conventional Technology Options and Issues (or - Why isn’t everything automated today?)
Now that we have explained why the problem is not so big, it is important to understand why everything has not already been automated. Perhaps the problem lies in the technology that has been applied.
First there is conventional IF THEN ELSE logic. Whatever programming language you might choose, this stuff works. If you ask a programmer if he/she can write a program to solve a problem the likely answer is “yes”, if you can explain the problem and the solution. It is simply a matter of time and money. And, even for simple systems we are asking the machines to interpret complex information sets in order to pursue goals on their own. That brings us to the mathematicians that use predicate calculus. Here is what predicate calculus provides: a formal way of defining functional relationships between information items. But the domain expert is probably not a mathematician, or a programmer. So now we have a cost and schedule issue. Neither schedules, nor pocketbooks are infinite. In addition, the domain expert is not likely to be able to explain the problem (and the desired solution) in a manner that can be understood “easily” by the software engineer or the mathematician. Usually every time a concept is transferred between one individual and the next, something is lost in the translation. Again, this results in long development and debug cycles. This is likely why complex problems have not been addressed with conventional programming.
Then there are the neural net designers. Using the human brain model they expose the neural net to patterns and teach it value systems. The resulting neural net system interpolates between what it was taught, and what it sees. Unfortunately, teaching neural net systems takes a lot of time. And if the systems are not appropriately taught, then just like humans, bad judgmental decisions can be made. In addition, if you want to add new sensors (information sources) to the system, the neural net system may have to be completely retaught. There are also researchers that want systems to learn on their own (just like humans). There are people who are concerned that weapon systems might learn on their own how to switch sides (because those systems decided on their own that it was the right thing to do at the time). Another issue with neural nets is that they cannot easily explain why they did what they did. Perhaps we are not quite ready to turn human evolution over to a self-learning machine.
Threats and Opportunities and a Value System
Now back to our problem space. If we expect to automate the battlespace at all levels, what would this system look like? Whether it is offense or defense based, one is dealing with competing capabilities and competing goals. For any kind of automation, we are dealing with measurable entities (measurable information items). These items can be treated as either threats or opportunities. Sometimes they can be both threats and opportunities when applied to different parts of the problem space. All items are measurable. Example: When humans perform tasks during scheduled working hours, the human is constantly balancing opportunities and allocating resources at his/her disposal in order to accomplish multiple simultaneous tasks. The human is inhaling / exhaling, eating, moving toward/away from obstacles, and performing tasks as appropriate. The human is operating alone, or with others (as appropriate). The human is collaborating as needed, and when asked. The human’s value system (their needs) is used to prioritize tasks and allocate resources. The human’s value system changes and adapts to its situation. The human’s history, biases, knowledge, and risk tolerance applies a weight (a value) to the different factors. What we have just described is an analog system. Now, for our autonomous battlespace, we have goals and objectives (offensive and defensive). In the future machines may set their own goals and strategies, however, in the near term humans will likely retain control. So, in the interim it will be up to humans to create policies that control the behavior of machines.
The Paradigm Shift
The paradigm shift that we are describing in this paper is that a new information model will be needed to both define and execute information. This model will keep humans in control (humans-on-the-loop), while at the same time keeping them out-of-the-real-time-loop. Policies will be created by humans that understand the capabilities of their machines and how those capabilities should be deployed. A hierarchy of battlespace management will be deployed from individual units (devices) through a loosely coupled chain of command. Policies will be created that define how organizations of machines can come together and break apart to fulfill the broad objectives of the battlespace. Since the policies will be created by humans, the decisions and actions of units, teams, and battalions will be traceable to the policies, and then to the humans that created the policies. Organizations that control these systems (and systems of systems) will be able to monitor their competition and decide how to spend their money (offense and defense, small and large, short term and long term).
What if Accomplishing this is Easy?
If we stopped at this point, one could ask: “What is new? We have just continued our evolutionary work and automated more ‘stuff’”. However, the technology we are describing exists now, and it is simple to use. It is platform and architecture independent (so it is not tied to any specific hardware platform or software development environment). It occupies a very small memory footprint, which means it can be implemented in the small hobbyist drones available today.
When we say simple, we mean that you can be productive in a week and working on complex policies the next. Even if you are not a policy expert (defining who to shoot and when, or choosing between one tactic and another, or …), it is easy to create the policy and test as you go. You don't need a team to support the process. So anyone who wants to build and test a policy that describes how machines should pursue goals can start the process and see results in a very short time. This may not be impressive if you are competing with a brick, because the brick will not change its tactics or strategies. But in a conflict domain, the tactic that works one day will have to change the next day to keep up. So the primary paradigm shift comes with ease-of-use. The secondary driver is that complex behaviors can be deployed on very low-cost platforms.
Knowledge Enhanced Electronic Logic (KEEL) Technology[ii]
The technology that allows humans to package policies to control the behavior of our battlespace systems and devices is KEEL Technology. It was introduced to NATO in an offensive role in 2014[iii] and in a defensive role in 2015[iv]. KEEL Technology allows domain experts / Subject Matter Experts (SMEs) to create and test policies and auto-generate conventional code that can be handed off to software engineers for insertion into the target system or device. No "calculus" (in the conventional definition) is required. Several years ago, a 13-year-old learned and used KEEL in only a few hours. More recently, a 15-year-old created adaptive policies for an Arduino (hobbyist) drone. KEEL Technology is supported by the KEEL Dynamic Graphical Language, which makes it easy to create and test policies and to "see the information fusion process" in action. You can "see the system think" through a process called language animation.
It’s simple: If you know what a bar graph looks like, and if you understand that a taller bar is more important than a shorter bar, then you are on your way to being able to create an adaptive KEEL-based policy that can run in a device, in a computer, and distributed across the cloud.
Decisions and actions are of three types:
1. Go/No-go (do something, or refrain from doing something)
2. Select the best option
3. Allocate resources (do so much of some number of things)
More complex decisions are combinations of all of the above. These decisions can be distributed and shared across the hierarchy of systems and devices. The policies will define how and when to share information, and how to operate when information links are broken. (Just like policies for human organizations.)
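The three decision types can be illustrated with a toy weighted-scoring sketch. This is emphatically not KEEL itself (KEEL is proprietary and its internals are not described here); the class, methods, and weights below are invented purely to show how a numeric value system can drive each kind of decision.

```java
// Toy illustration of the three decision types (go/no-go, select-best,
// allocate) using simple weighted scores. Not KEEL; purely illustrative.
public class DecisionSketch {
    // go/no-go: act only if the weighted sum of input signals clears a threshold
    static boolean goNoGo(double[] weights, double[] signals, double threshold) {
        return score(weights, signals) >= threshold;
    }

    // select the best option: argmax over per-option scores
    static int selectBest(double[] optionScores) {
        int best = 0;
        for (int i = 1; i < optionScores.length; i++)
            if (optionScores[i] > optionScores[best]) best = i;
        return best;
    }

    // allocate resources: split a budget in proportion to each task's score
    static double[] allocate(double[] taskScores, double budget) {
        double total = 0;
        for (double s : taskScores) total += s;
        double[] out = new double[taskScores.length];
        for (int i = 0; i < taskScores.length; i++)
            out[i] = total == 0 ? 0 : budget * taskScores[i] / total;
        return out;
    }

    static double score(double[] w, double[] x) {
        double s = 0;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s;
    }
}
```

The "taller bar is more important" intuition from the text maps directly onto the weights: changing a weight reshapes every downstream decision without rewriting any control logic.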
Summary
Given that humans “in” the battlespace can be replaced by software applications and devices, how will the questions posed by the Megacities/Dense Urban Areas theme be addressed?
Situational understanding: Information will be collected and abstracted into measurable terms. Confidence will be determined and assigned by weighted factors in the system. The KEEL-based systems will be answering these questions throughout the hierarchy:
What does it all mean?
What should I {the element within the battlespace} do about it?
Human created policies will define how the entities should (and are allowed to) adapt. They will decide if they can operate on their own. They will decide if they can ask for help. They will switch objectives. All of this can be accomplished according to the human created policies. This doesn’t mean humans are without responsibility. Opposing forces will be continually updated with new tactics and strategies (value systems). New sensors will be introduced to provide better and better information. With KEEL it is easy to add new information items into the policy, but it is still work for humans. Plus there will be after-mission reviews. Is the value system correct? Did the system (system of systems) perform as desired? Could it have performed better if the policy was adjusted? Was the system tricked? How can this be avoided in the future? How has the human population in the battlespace responded? Are the political, social, economic impacts appropriately considered in the system policies? Humans are still in control.
Freedom of movement and protection: Mathematically explicit policies will be created and executed. The future will be similar to a chess match because adversaries will adjust their tactics and strategies (and acquire devices with different capabilities) to probe the weaknesses of the opposition. There will be a transition from training humans (starting over with every warfighter) to continual refining of the operational policies.
Expeditionary operations: Policies of every type will be developed. They will be constantly upgraded and changed. It will be a war of information and disinformation.
Future training challenges: Training individuals in the use of KEEL will be easy. Much more emphasis will be on tactics and strategies and information warfare (trickery and deceit) where the effort will be to try and convince the opposition to shoot itself in the foot. Some of this will be social warfare to teach humans how to interact with the machines.
The platforms of conflict: By creating KEEL-based policies for autonomous systems, the systems and devices can operate independently; they can automatically decide when a group or team would be more or less effective and self-organize, automatically create a command hierarchy, and pursue goals based on mathematically explicit policies. They can use a value system understood across the entire spectrum of devices, allowing them to respond almost immediately to changes anywhere in the battlespace. Unlike humans, these machines can have a self-value determined by their owners. Unlike some present organizations that have to recruit suicide bombers, devices can be used (by the "system") to probe defenses. Their destruction will be assumed and accepted as part of the overall human-controlled strategy. The result is that one ends up with platforms executing the "best" tactics and the "best" strategies as determined by the human chess players.
NOW (For Organizations That Can Accept New Ideas)
KEEL Technology is available now, not 20 years from now. It is possible to create these adaptive policies today. In the past, governments have had the luxury of fighting wars with individual humans. They trusted that those humans would behave in a desired manner. When some humans fail, failure is accepted, because they were human. When machines fight the wars, it will not be acceptable to mass-produce bad behaviors. KEEL allows policies to be created and executed with mathematical precision, and those policies are 100% auditable. Organizations that understand the potential of KEEL Technology first will have an advantage over those "late to the table", just as an experienced chess player has an advantage over a novice. Granted, it will take time to automate the behaviors of the entire battlespace hierarchy. However, it will take far longer using conventional approaches without KEEL Technology.
The challenge to the US military is this: does it want to be a leader or a follower in applying KEEL Technology to its autonomous systems? If a new technology is available that is so easy to use, and that technology can change how conflicts are fought and determine who wins and who loses, then the balance of power can shift almost overnight. New platforms can be created and deployed in months, rather than years. New platforms can cost hundreds of dollars, not billions. An organization with what appears to be superior capabilities one day can be following a small terrorist cell or individual anarchists the next. Government "experts" who are paid for their knowledge may reject the idea that anything new can be invented that they don't know about, or that they haven't invented themselves (it's called "cognitive dissonance").
KEEL is not “artificial intelligence”. It is an enabling technology that makes it easy to package human judgment and reasoning skills (expertise) into machines so the machines can behave as if the subject matter experts, operating with their rules of engagement, organizational structures, tactics and strategies were deployed in small inexpensive disposable devices.
Are you ready?
End Notes
/*
 * readMatrix
 * read a file into a matrix object
 * file must have the matrix dimensions as its first line
 * non-square matrices are padded to square with their largest element
 */
#include <algorithm>
#include <fstream>
#include <iostream>
#include <Eigen/Dense>

using Eigen::MatrixXd;
using namespace std;

void readMatrix(const char* filename, MatrixXd& m) {
ifstream fin(filename);
if (!fin) {
cout << "Cannot open file." << endl;
return;
}
int n_rows, n_cols;
fin >> n_rows >> n_cols;
MatrixXd temp(n_rows, n_cols);
for (int i=0; i<n_rows; i++) {
for (int j=0; j<n_cols; j++) {
fin >> temp(i,j);
}
}
fin.close();
if (n_rows == n_cols) {
m = temp;
}
else {
// pad the extra rows/columns with the largest element to make the matrix square
double max_elem = temp.maxCoeff();   // MatrixXd stores doubles, not floats
int dim = max(n_rows, n_cols);       // dimensions are integer indices
m.resize(dim,dim);
for (int i=0; i < dim; i++) {
for (int j=0; j < dim; j++) {
if (i >= n_rows || j >= n_cols) m(i,j) = max_elem;
else m(i,j) = temp(i,j);
}
}
}
}
N.C. Wesleyan’s Alec Titmus throws the ball to first base during a game in 2018. Titmus is one of 29 returning players this season for the Bishops.
With 11 seniors and 29 players returning overall to the N.C. Wesleyan men’s baseball team this season, the faces on the diamond will largely look the same.
The biggest difference for the Bishops will be in the dugout, where Greg Clifton, the former Faith Christian coach coming off two straight state titles, took over this summer for longtime coach Charlie Long.
From sifting through and collecting compliance forms, to sitting down with his inherited senior class this past fall, to looking after the ball field, Clifton’s been hard at work since taking the job over in June.
“Writing the lineup’s going to be the easy part,” he said on Thursday.
But at long last, he’ll be able to ink his first lineup card this Saturday, with the Bishops hosting Eastern Mennonite at Bauer Field in a doubleheader. The first pitch of the first game of the 2019 season will come at 11 a.m.
The Bishops last season finished at 22-23, with their season coming to an end with a 12-7 loss in the USA-South tournament against LaGrange.
As NCWC’s roster stands right now, there are two Nash Central players — Cameron Taylor and Noah Shrock — and one from Faith Christian, in Luke Mills.
The roster next season will have more than double the amount of local players.
So far, Clifton and the Bishops have received commitments for the 2020 season from three players from Nash Central (Trey Whitley and Drifton and Chandler Padgett), one from Rocky Mount High (Sam Mills), another from Faith Christian (Trysten Edwards) and, finally, one more from Ben Lewis, who played last year for Louisburg College and was a Patriot before that.
Mike Fox, N.C. Wesleyan’s coach from 1983 to 1998, who’s now in charge at UNC Chapel Hill, won the Bishops their first NCAA Division III title in 1989.
Charlie Long, who took over for Fox and coached NCWC for 21 years, added another national title in 1999.
With Long retiring this summer, Clifton became just the third coach for the Bishops in the last 36 years, with the previous two reaching the mountain top. No pressure.
Growing up in Roanoke Rapids, Clifton admired those successful Bishops teams of the late 1980s and ‘90s from afar.
In all, the Bishops will play Eastern Mennonite three times this weekend, with the third game scheduled for Sunday at 1 p.m.
From there, USA-South play won’t be far off. The first conference action of the season for NCWC comes Feb. 23, when the Bishops host LaGrange at home. |
/*
* Licensed to the OpenAirInterface (OAI) Software Alliance under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The OpenAirInterface Software Alliance licenses this file to You under
* the OAI Public License, Version 1.1 (the "License"); you may not use this file
* except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.openairinterface.org/?page_id=698
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*-------------------------------------------------------------------------------
* For more information about the OpenAirInterface (OAI) Software Alliance:
* <EMAIL>
*/
/*! \file PHY/NR_TRANSPORT/nr_mcs.c
* \brief Some support routines for NR MCS computations
* \author
* \date 2018
* \version 0.1
* \company Eurecom
* \email:
* \note
* \warning
*/
#ifndef __NR_TRANSPORT_COMMON_PROTO__H__
#define __NR_TRANSPORT_COMMON_PROTO__H__
// Functions below implement minor procedures from 38-214
/** \brief Computes the modulation order Qm for PDSCH from I_MCS when 'MCS-Table-PDSCH' is set to "256QAM". Implements Table 5.1.3.1-2 from 38.214.
@param I_MCS the PDSCH MCS index */
uint8_t get_nr_Qm(uint8_t I_MCS);
/** \brief Computes the modulation order Qm for PUSCH from I_MCS. Implements Table 6.1.4.1-1 from 38.214.
@param I_MCS the PUSCH MCS index */
uint8_t get_nr_Qm_ul(uint8_t I_MCS);
#endif
|
// Evaluate takes the given input, processes it and returns an evaluation result.
func (eval *Evaluater) Evaluate(input string) string {
var cmd commandFunction
var result string
// Try each registered matcher in order until one claims the input.
for i := 0; i < len(eval.commands) && cmd == nil; i++ {
cmd = eval.commands[i](input)
}
if cmd != nil {
result = cmd(eval.target)
} else {
result = eval.style.Error()("Unknown command: [", input, "]")
}
return result
} |
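The loop above implements first-match dispatch: each entry in `eval.commands` is asked, in order, whether it recognizes the input, and the first non-nil `commandFunction` wins. The surrounding types aren't shown in this excerpt, so here is a self-contained sketch of the same pattern with assumed signatures — `matcher`, `dispatch`, and the demo "greet" command are inventions for illustration:

```go
package main

import "fmt"

// commandFunction mirrors the assumed signature from the excerpt:
// it acts on the evaluator's target and returns a result string.
type commandFunction func(target string) string

// matcher inspects raw input and returns a commandFunction when it
// recognizes the input, or nil so the next matcher can try.
type matcher func(input string) commandFunction

// demoMatchers recognizes a single "greet" command; everything else
// falls through to the unknown-command branch.
var demoMatchers = []matcher{
	func(in string) commandFunction {
		if in == "greet" {
			return func(target string) string { return "hello, " + target }
		}
		return nil
	},
}

// dispatch is the same first-match loop as Evaluate, minus the styling.
func dispatch(matchers []matcher, input, target string) string {
	var cmd commandFunction
	for i := 0; i < len(matchers) && cmd == nil; i++ {
		cmd = matchers[i](input)
	}
	if cmd == nil {
		return fmt.Sprint("Unknown command: [", input, "]")
	}
	return cmd(target)
}

func main() {
	fmt.Println(dispatch(demoMatchers, "greet", "world"))
	fmt.Println(dispatch(demoMatchers, "quit", "world"))
}
```

Because matchers are tried in slice order, earlier registrations shadow later ones — worth keeping in mind when two matchers could claim the same input.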
Complications during transportation and 30-day mortality of patients with acute coronary syndrome Background Patients with acute coronary syndrome (ACS) who present to hospitals without interventional facilities frequently require transfer to another hospital equipped with a cardiac catheterization laboratory. This retrospective cohort study evaluates the association of the type of medical transport with patient outcomes. Methods A retrospective analysis of medical records of patients with ACS transported by basic transfer (BT) and specialist transfer (ST) emergency medical teams (EMTs). We analyzed age, gender, hemodynamic parameters, type of emergency medical team and complications during transport, as well as patient survival to hospital admission, survival time and the 30-day mortality rate. Results Of 500 patients who underwent transfer, ST transported 368 (73.6%) and BT 132 (26.4%) patients (p < 0.001). Complications during transportation occurred in 3 (1%) patients in the ST group and 2 (1.5%) in the BT group. Cardiac arrest during transfer occurred in no (0%) patients in the ST group and 2 (1.5%) in the BT group (p = 0.118). Survival to admission was recorded in all patients in the ST group and in 131/132 (99.2%) patients in the BT group (p = 0.592). 40 (12%) patients in the ST group and 13 (11%) patients in the BT group (p = 0.731) died within 30 days of transfer. Conclusions Complications during medical transport of ACS patients from hospitals without a cardiac catheter lab to hospitals equipped with such a lab were rare, and their incidence was not associated with the type of transporting EMT. The type of EMT was not associated with 30-day patient mortality. Background Acute coronary syndromes (ACS) are common and carry significant short- and longer-term risks to the patient. For patients with ST-segment elevation myocardial infarction (STEMI), guidelines recommend timely access to a hospital capable of performing percutaneous coronary intervention (PCI-capable hospital). 
For patients presenting with non-STEMI ACS, guidelines recommend a primary PCI strategy in cases of haemodynamic instability or shock, refractory ischaemic pain, mechanical complications, or recurrent dynamic ST-segment or T-wave changes. The onset of ACS symptoms prompts help-seeking behavior by patients. Depending on the severity and characteristics of the symptoms and the assessment of the threat to health or life made by the patient and/or their family, the patient has the following options: calling an ambulance, ordering a house call by a physician or nurse, seeing the primary care physician on their own, or going to a hospital admissions room or a hospital emergency department (ED). Given that in Poland the patient has several options for contacting health care facilities, it is not surprising that such contact is frequently made from outside a hospital equipped with an interventional cardiology (catheterization) unit. In Poland, just over half (56%) of STEMI patients are admitted directly to a PCI-capable hospital. Therefore, a substantial number of patients who present initially to a non-PCI-capable hospital require transportation by an emergency medical team (EMT) to the nearest PCI-capable hospital. The organization of health care for ACS patients is an important factor influencing chances of survival. Reducing the time between first medical contact and the performance of coronary angiography, and angioplasty if indicated, is a key recommendation of international guidelines. In Poland, the national emergency medical services (EMS) provide front-line emergency response but do not undertake interhospital transfers, which are the responsibility of private ambulance providers contracted by individual hospitals. 
These latter emergency medical teams (EMTs) are organized in two forms: (a) a basic team (BT) consisting of at least two persons authorized to carry out medical emergency procedures (paramedics, most of whom receive their professional qualifications following a higher education diploma, and emergency nurses with secondary medical or higher nursing education holding specialist qualifications in emergency nursing); (b) a specialist team (ST) comprising at least three persons, one of whom is always a physician. It is therefore practicable to compare the procedures and outcomes associated with the two types of team with regard to ACS patients. Considering the differences between paramedics, nurses and doctors in education, training, skill level and authority (e.g. in relation to advanced airway management and some medicines), we hypothesise that paramedics and emergency nurses in the Polish emergency medical system provide safe and effective care of patients, comparable to that provided by physician-led specialist teams, during interhospital transfer of ACS patients. Therefore, the objective of this study was to assess whether the type of team (basic or specialist) transporting the ACS patient was associated with patient survival to admission at the PCI-capable hospital and with 30-day mortality. Study design and setting This study comprised a retrospective analysis of medical records of 500 patients with ACS transported from 1 January 2010 to 31 August 2015 by specialist and basic EMTs belonging to the Polish Emergency Medical Services company in Wroclaw (Poland), from admission rooms, hospital emergency departments and other departments of 7 hospitals without PCI capability in the Lower Silesian region of Poland, to PCI-capable hospitals. No patient with ACS was excluded. 
Study population In all the studied cases it was possible to transport the patient either by a specialist team (with a physician) or by a basic team (manned by paramedics), and the decision regarding the type of transport was made by a physician employed in the hospital who issued the transport order. No formal written protocols for ordering transportation by an EMT were available at the time of the study, other than that the EMT should arrive within 30 min of the physician's request. Ethical considerations This study was approved by the independent Bioethics Committee of the Wroclaw Medical University (decision no. KB-513/2016). All participants were asked to give their informed consent to participate in this study. The study was carried out in accordance with the tenets of the Declaration of Helsinki and the recommendations of Good Clinical Practice. Statistical analyses Statistical analysis was performed using Statistica 12 (TIBCO Inc., USA) software under licence of the Wroclaw Medical University, Poland. Patients were divided into two groups, depending on the type of the transporting EMT: the specialist transport group (ST) and the basic transport group (BT). For continuous variables, arithmetic means and standard deviations were calculated and the Shapiro-Wilk test was used to determine the type of distribution. For qualitative variables, we calculated the frequency of occurrence. Continuous variables were compared using the parametric Student's t-test for independent samples or the nonparametric Mann-Whitney test, depending on the fulfillment of test assumptions. The chi-square test was used to compare qualitative variables. Logistic regression analysis and backward stepwise regression analysis of 30-day survival were performed with independent variables such as age, gender, hemodynamic parameters, applied treatment and the type of EMT that carried out the medical transport. 
Results were considered statistically significant at p < 0.05. Characteristics of study population The studied group comprised 500 patients: 292 (58.4%) men and 208 (41.6%) women (p < 0.001). Mean age in the study population was 68.7 ± 13.9 years. Female patients had a mean age of 72.6 ± 12.3 years, significantly higher than that of male patients, 66.0 ± 14.3 years (p = 0.018). Baseline clinical characteristics are shown in Table 1. We were unable to identify the ACS phenotype (STEMI, non-STEMI) from the documentation provided. Emergency medical teams A group of 132 persons (26.4%) was transported by a basic EMT (the BT group) and 368 (73.6%) by a specialist EMT (the ST group) (p < 0.001). Comparing the ST and BT groups, no significant differences were found with regard to patients' age and gender distribution. There were no statistically significant differences in baseline hemodynamic parameters between the groups. Baseline demographic and hemodynamic data are provided in Table 1. Complications during medical transport Complications occurred during transportation in 1% of patients in the ST group and 1.5% of patients in the BT group (p = 0.366). Two patients (0.4%) in the BT group suffered cardiac arrest during transport. One patient transported by ST required mechanical ventilation. Cardiac arrest occurred in no patients in the ST group (0%) and in two in the BT group (1.5%) (p = 0.118). In one case of cardiac arrest during transport the presenting rhythm was pulseless electrical activity (PEA), and in the other case asystole. In the first case the patient was successfully resuscitated; in the second case the patient died. Single cases of other complications were also reported. 
One patient transported by a specialist EMT had respiratory failure during transport (the mechanically ventilated patient), one had recurrent ventricular tachycardia treated with intravenous amiodarone, and one patient in cardiogenic shock was treated with catecholamines (administration started in the ambulance). All three patients survived to hospital admission and to 30 days. Survival to hospital admission Survival to hospital admission was recorded in 499 patients (99.8%): 368 patients in the ST group and 131 patients in the BT group. Based on the data obtained in the study, we created models for logistic regression analysis. The dependent variable was survival of the ACS patient beyond 30 days from the day of medical transport. The independent variables were: age, gender, the occurrence of SBP < 90 mmHg, SBP 140-179 mmHg, SBP ≥ 180 mmHg, HR < 55/min, HR > 90/min, administration of catecholamines, use of oxygen therapy, a recorded SatO2 ≥ 92% without administration of oxygen or ≥ 95% in patients who received oxygen, as well as the type of EMT. The results of the logistic regression and backward stepwise regression analyses for 30-day survival are shown in Table 3. The full-model logistic regression analysis showed that factors significantly correlated with a lower risk of death were: age (the younger the patient, the lower the risk of death), SBP in the range of 140-179 mmHg, and SatO2 ≥ 92% without oxygen therapy or ≥ 95% when treated with oxygen therapy. In the backward stepwise logistic regression analysis, only three variables were significantly correlated with death within 30 days: age (the older the patient, the higher the risk of death), SBP < 90 mmHg, and a failure to obtain SatO2 ≥ 95% with oxygen therapy or ≥ 92% in patients who did not receive oxygen. Figure 1 presents a summary of the main findings of the study. 
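The regression results above — risk changing by a few percent per unit of a covariate — come from exponentiated logistic-regression coefficients. As an illustration only (the cohort below is synthetic and every number in it is invented, not the study's data), here is a minimal stdlib-only sketch of fitting such a model and reading its coefficients as odds ratios:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=1500):
    """Plain batch gradient descent for logistic regression (no regularisation)."""
    n = len(X)
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * g / n for wj, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

random.seed(0)
# Synthetic cohort (invented numbers): death risk rises with age and
# with an SBP < 90 mmHg flag, loosely mirroring the paper's findings.
X, y = [], []
for _ in range(400):
    age = random.uniform(40, 90)
    hypo = 1.0 if random.random() < 0.2 else 0.0      # SBP < 90 mmHg flag
    z_true = -8.0 + 0.08 * age + 1.6 * hypo
    died = random.random() < 1.0 / (1.0 + math.exp(-z_true))
    X.append([(age - 65.0) / 10.0, hypo])             # age per decade, centred
    y.append(1 if died else 0)

w, b = fit_logistic(X, y)
print("odds ratio per decade of age:", round(math.exp(w[0]), 2))
print("odds ratio for SBP < 90 mmHg:", round(math.exp(w[1]), 2))
```

An exponentiated coefficient above 1 means the covariate raises the odds of the outcome; the paper's "risk reduced by 4% per year" corresponds to an odds ratio of roughly 0.96 per year of age in the survival direction.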
Discussion The need to transport patients from a non-PCI-capable hospital to a PCI-capable hospital is an important issue. Differences among studies carried out in other countries stem, among other factors, from the heterogeneous organization of emergency medical systems. As the types of EMT vary between countries, the scope of medical intervention falling within the competencies of each EMT differs. As discussed earlier, Poland operates two types of EMT: a basic EMT, with paramedics or emergency nurses (specialized in emergency nursing), and a specialist EMT, which must always include a physician. In Poland, paramedics are mainly graduates of higher education diploma programmes. In Japan, Fujii et al. reported that over 65% of patients were transferred to a hospital by emergency medical services, of which over 13% were taken initially to a non-PCI-capable hospital, necessitating subsequent interhospital transfer to a PCI-capable hospital. A quarter (24%) of all patients self-presented at non-PCI-capable hospitals and required interhospital transport. In Quebec, Lambert et al. found that of 774 patients, 441 were brought to the first hospital by ambulance, of whom 213 were further transported to another, PCI-capable hospital by the same EMT, while 228 patients had to wait 4-12 min for another ambulance transport. Patients who self-presented to the hospital were also transported to another facility. Using the same ambulance for interhospital transport reduces the time to arrival at a PCI-capable facility and is associated with improved outcomes. In Ireland, McKee et al. reported that out of 1894 patients, 38% initially contacted their primary care physician, while in the United States, Fosbol et al. reported that out of 6010 patients, as many as 49% required transport from a non-PCI facility to a PCI-capable hospital. The above data confirm that, irrespective of the country, ACS patients frequently present to non-PCI-capable hospitals and subsequently require interhospital transfer. 
This also suggests that interhospital transports are a common strategy employed not only in Poland but also in other countries and on other continents. Demographic information In Poland, the average age at which patients experience a myocardial infarction leading to hospitalization or death is 63 years for men and 74 years for women. Men constituted 58% of all the patients. Tousek et al. demonstrated that men constituted 63% of patients experiencing myocardial infarction. Trojanowski et al. reported a higher percentage of male patients presenting with this condition, namely 68%. In common with other countries, in Poland ACS is more common in men than in women. Hemodynamic parameters Baseline demographic and hemodynamic characteristics in the present study are comparable with reports from other countries. Complications during transport and rates of survival Our study found that complications rarely occurred during medical transport, irrespective of the type and capabilities of the team undertaking the transfer. Complications in the ST group were recorded in only 1% of patients, and in the BT group in 1.5% of patients. The proportion of patients who died during transportation was very low. This is in contrast to findings from other studies. Fosbol et al. reported that pre-hospital cardiac arrest was more frequent in patients who first came to a non-PCI-capable hospital than in those who were transported by an EMT to a PCI-capable hospital (9.5% vs. 2.8%). In-hospital mortality was lower in patients transported directly to a PCI-capable hospital (6.3% vs. 9.3%). The differences between our study findings and those of other authors may be because the patients who were at greatest risk, i.e. those presenting with shock, severe pain or collapse, may have called emergency medical services immediately, had a pre-hospital ECG performed and been transported directly to a PCI-capable hospital. 
A study conducted in Paris on 8181 patients with ACS showed that patients with acute, short-term symptoms who call an ambulance were at a greater risk of death than those with less acute symptoms. These observations may account for the low complication and mortality rate in our study. 30-day survival In the United Arab Emirates, Callahan et al. reported that among patients transferred from a non-PCI-capable hospital to a PCI-capable facility, 1/128 (0.8%) patients transported between hospitals by emergency medical services had died at 30 days. In the USA, Al-Zaiti et al. reported that where ambulance services transported STEMI patients from their home to a hospital, the 30-day mortality was 11%. In turn, Thang et al. demonstrated that in ACS patients transported to a hospital by ambulance, the 30-day mortality was 4.3%. A lower mortality was also reported by May et al., where the 30-day mortality for such transports amounted to 2.9%. Based on the above studies conducted in different countries, we may conclude that the 30-day mortality rate varies and depends on the studied population. It must be noted that in the studies by Thang et al. and May et al., the patients were transported directly to a PCI-capable facility. In our study we were unable to assess the time needed for the patient to reach a hospital equipped with an interventional cardiology lab, although some patients waited for transfer in a hospital located as far as 30 km away from the destination PCI-capable hospital. Another important aspect is that these patients always waited for the arrival of another transport team. Given the time-dependent nature of ACS, particularly STEMI, we may assume that such delays could adversely impact the patients' chances of a good outcome. 
In our study, the logistic regression analysis showed that the patient's chances of survival were reduced by 4% with each year of age, increased by 60% when the SBP was 140-179 mmHg compared to SBP < 90 mmHg, and increased by 80% in the case of normal saturation levels as compared to reduced saturation. The backward stepwise logistic regression analysis demonstrated a fivefold higher risk of death for patients with SBP < 90 mmHg. The risk of death increased with age (the older the patient, the higher the risk of death) and increased by 73% in cases where saturation levels of ≥ 95% with oxygen therapy or ≥ 92% without oxygen therapy were not obtained. Schoos et al. found that, among others, the patient's age, pulse > 100/min, diabetes and chronic kidney disease were all statistically significantly correlated with the risk of death within 30 days. The risk of death increased nearly threefold if the heart rate was > 100/min. Studies by other authors confirm that the patient's age affects the 30-day mortality. Limitations Our study has several limitations. Firstly, we did not have access to referring physicians' decision making regarding the type of transportation team requested. Secondly, the records we assessed did not include potentially informative variables such as ACS phenotype, time from symptom onset, biomarker assays, Killip class or past medical history such as diabetes or hypertension. Thirdly, we were unable to ascertain how patients first presented to the non-PCI-capable hospital, or what treatments had been administered prior to transfer. Fourthly, we did not have access to information on treatments given at the PCI-capable hospital (e.g. intervention received or medication), which may have influenced outcomes. We recommend future studies collect a more comprehensive data set for analysis. 
Conclusions Complications during medical transport of ACS patients from non-PCI-capable hospitals to PCI-capable hospitals in our study were rare, and their incidence was not associated with the type of the transporting EMT. The type of the EMT transporting the patient was not associated with 30-day survival. ACS patients can be transported by a basic EMT; a specialist EMT may be necessary only for high-risk patients. Paramedics and emergency nurses in the Polish emergency medical system provide safe and effective care of patients, comparable to that provided by physician-led specialist teams, during interhospital transfer of ACS patients. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
package cus.utils;
import org.apache.commons.lang3.StringUtils;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;
import java.lang.annotation.*;
import java.util.HashSet;
import java.util.Set;
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.ANNOTATION_TYPE, ElementType.CONSTRUCTOR, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Constraint(
validatedBy = {Arrayations.ArrayationsValidator.class}
) // registers the validator implementation
public @interface Arrayations {
String[] value();
boolean caseSensitive() default true;
boolean required() default true; // whether a non-empty value is mandatory
String message() default "value is not in the allowed list"; // validation failure message
Class<?>[] groups() default {};
Class<? extends Payload>[] payload() default {};
class ArrayationsValidator implements ConstraintValidator<Arrayations, String> {
boolean required;
private Set<String> elements = new HashSet<>();
private boolean caseSensitive = true;
@Override
public void initialize(Arrayations constraintAnnotation) {
required = constraintAnnotation.required();
this.caseSensitive = constraintAnnotation.caseSensitive();
for (String e : constraintAnnotation.value()) {
this.elements.add(this.caseSensitive ? e : e.toUpperCase());
}
}
@Override
public boolean isValid(String value, ConstraintValidatorContext context) {
if (chkNull(value)) {
// An empty value is valid only when the field is optional.
return !required;
}
return this.elements.contains(this.caseSensitive ? value : value.toUpperCase());
}
private boolean chkNull(String value) {
return StringUtils.isEmpty(value); // isEmpty already covers null
}
}
}
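Stripped of the Bean Validation plumbing, the validator reduces to a case-folding set-membership test plus the `required` rule for empty input. The sketch below shows that core logic standalone; the class name `ArrayationsDemo` and the allowed values are invented for the demo:

```java
import java.util.HashSet;
import java.util.Set;

// Standalone sketch of the membership check performed by
// Arrayations.ArrayationsValidator, without the Bean Validation runtime.
public class ArrayationsDemo {
    static boolean isAllowed(String value, String[] allowed,
                             boolean caseSensitive, boolean required) {
        if (value == null || value.isEmpty()) {
            return !required;            // empty passes only when optional
        }
        Set<String> elements = new HashSet<>();
        for (String e : allowed) {
            elements.add(caseSensitive ? e : e.toUpperCase());
        }
        return elements.contains(caseSensitive ? value : value.toUpperCase());
    }

    public static void main(String[] args) {
        String[] colors = {"RED", "GREEN", "BLUE"};
        System.out.println(isAllowed("red", colors, false, true));  // case-insensitive match
        System.out.println(isAllowed("red", colors, true, true));   // case-sensitive mismatch
        System.out.println(isAllowed("", colors, true, false));     // optional empty value
    }
}
```

In real use the same decision runs automatically whenever a bean carrying an `@Arrayations(...)`-annotated field is passed to a `Validator`.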
|
/**
 * Writes the complete byte array to the stream.
 * @param data the bytes to write
 * @throws IOException if the underlying write fails
 */
public synchronized void write(byte[] data)
throws IOException
{
write(data, 0, data.length);
} |
Book review: Sarah Henstra The Counter-Memorial Impulse in Twentieth-Century English Fiction. London: Palgrave Macmillan, 2009. ix + 182 pp. $80. ISBN 0230577148 Another thing that's missing is attention to social and cultural memory. While psychologists are naturally interested in memory as it is represented in individual minds and brains, individual experience, thought and action also take place in a world of other people: significant others, groups, social institutions, and cultures. Social psychologists have become interested in issues surrounding memory collaboration and suggestion, stimulated by the clinical controversy over false and recovered memories of trauma. Sociologists, of course, have long entertained a notion of collective memory: representations of the past that are shared by members of a group, and remembering as an activity that holds groups together. And political scientists increasingly understand that intergroup conflict is, all too often, conflict about memory. Social science, as much as neuroscience, is part of the future of memory, but there's not much of it here. Nor is there anything on memoir, the dominant literary style of the turn of the millennium, and one which raises all sorts of questions about the accuracy of memory, and the processes involved in reconstructing and imagining the past. But what's here is really good: a very handy survey of the psychology and neuroscience of memory, presented in a series of short essays, each written with a point of view. Each section is readable in a single sitting, allowing the reader to catch up with what has been going on outside his or her narrow area of specialization. Can it serve as a textbook? Probably not. It would provide a solid basis for a graduate proseminar, where the students already have some grounding in the psychology and neuroscience of memory, but not for the undergraduate course. 
The individual articles often take too much for granted (though the relevant background is often found elsewhere), and the sheer number of short articles deprives the book of the narrative flow that a textbook really needs. But for someone who already knows something about memory, and is looking for an (almost) comprehensive survey, this is it.
//! Module for handling Errors and Results
use std::{error, fmt, result};
#[derive(Debug)]
pub struct ConstellationError;
// Result type which can often have Constellation errors
pub type Result<T> = result::Result<T, ConstellationError>;
impl fmt::Display for ConstellationError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "constellation error")
}
}
impl error::Error for ConstellationError {
// TODO Add methods/functions to identify error
fn source(&self) -> Option<&(dyn error::Error + 'static)> {
// Generic error; the underlying cause isn't tracked.
None
}
}
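A caller treats the crate's `Result<T>` alias like any other `Result`. The sketch below can't import the crate itself, so it redefines the error type and alias inline; `parse_port` is an invented example function, not part of the crate:

```rust
use std::{error, fmt, result};

// Inline copies of the crate's error type and Result alias for the demo.
#[derive(Debug)]
pub struct ConstellationError;

pub type Result<T> = result::Result<T, ConstellationError>;

impl fmt::Display for ConstellationError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "constellation error")
    }
}

impl error::Error for ConstellationError {}

// Invented example: any failure is mapped onto the single generic error.
fn parse_port(s: &str) -> Result<u16> {
    s.parse::<u16>().map_err(|_| ConstellationError)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```

Because `ConstellationError` carries no payload, callers can only learn *that* something failed; the `TODO` in the module hints that richer variants (or an enum) would be the natural next step.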
|
Belarusian opposition politician Zmitser Dashkevich, the organizer of a March 24 rally in Minsk and leader of the unregistered Young Front political movement, reportedly has been detained by police.
Serbian President Aleksandar Vucic blasted NATO’s 1999 bombing campaign of his country, calling it “a crime.” Neighboring Kosovo, the beneficiary of the attack, said it will “forever be grateful” to the Western military alliance for intervening to help stop the bloodshed in the region.
Twelve-year-old Tasya wrote to Vladimir Putin in December, asking the president to help her overworked mother. After RFE/RL reported on her story, strangers sent money and gifts to the family. And that is when Tasya's mother says their real problems began.
Russian State Duma deputy Vadim Belousov has been detained on suspicion of accepting a $49 million bribe. Here's how this compares to other cases that have been tried in the courts.
Konstantin Kosachyov, who heads the Federation Council’s International Affairs Committee, said in a Facebook post on March 25 that Special Counsel Robert Mueller's probe was accompanied by "two years of incessant lies," and that its findings give the two countries a chance to repair ties.
As the Ukrainian presidential election nears, the world's leading industrialized nations have some strong words for Ukraine's top cop amid recent far-right violence in Kyiv and other cities.
Interim Kazakh President Qasym-Zhomart Toqaev signed a decree on March 23 renaming the capital Astana after former President Nursultan Nazarbaev, who stepped down abruptly earlier in the week.
Interim Kazakh President Qasym-Zhomart Toqaev has appointed recently dismissed prime minister Bakytzhan Sagintaev as his chief of staff, following Nursultan Nazarbaev’s surprise announcement last week that he was stepping down as president.
Human Rights Watch has called on Russian authorities to drop the case against Aleksandr Korovainy, an opposition activist who is facing administrative charges for posting an infographic from a news outlet backed by former oil tycoon and Kremlin critic Mikhail Khodorkovsky.
Late last month, Culture Minister Vladimir Medinsky set off alarm bells among culture aficionados when he sent a letter to regional administrations ordering them to bring the museums in their purview into line with "the state's priorities."
Iryna Kazlouskaya was a student in Minsk years ago when she got the news she was pregnant. She eventually had the child, but became convinced that a large part of Belarusian society, which has one of the highest rates of abortion in the world, views it as a form of birth control.
Romanian Prime Minister Viorica Dancila has announced at the annual policy conference of the American Israel Public Affairs Committee (AIPAC) in Washington, D.C. that Bucharest will move its embassy in Israel from Tel Aviv to Jerusalem.
Bulgarian Justice Minister Tsetska Tsacheva has resigned following RFE/RL investigative reports that revealed she and three other ruling GERB party politicians purchased luxury apartments at below market prices from the same firm. |
/*
Copyright 2021 VMware, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _FRONTENDS_P4_SIMPLIFYSWITCH_H_
#define _FRONTENDS_P4_SIMPLIFYSWITCH_H_
#include "ir/ir.h"
#include "frontends/common/resolveReferences/resolveReferences.h"
#include "frontends/p4/typeChecking/typeChecker.h"
namespace P4 {
/** @brief Simplify select and switch statements that have constant arguments.
*/
class DoSimplifySwitch : public Transform {
TypeMap* typeMap;
bool matches(const IR::Expression* left, const IR::Expression* right) const;
public:
explicit DoSimplifySwitch(TypeMap* typeMap): typeMap(typeMap) {
setName("DoSimplifySwitch"); CHECK_NULL(typeMap);
}
const IR::Node* postorder(IR::SwitchStatement* stat) override;
};
class SimplifySwitch : public PassManager {
public:
SimplifySwitch(ReferenceMap* refMap, TypeMap* typeMap) {
passes.push_back(new TypeChecking(refMap, typeMap));
passes.push_back(new DoSimplifySwitch(typeMap));
setName("SimplifySwitch");
}
};
} // namespace P4
#endif /* _FRONTENDS_P4_SIMPLIFYSWITCH_H_ */
|
/*
Copyright 2019 <NAME> <<EMAIL>>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <iostream>
#include <vector>
#include <set>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <memory>
#include <functional>
#include <optional>
#include <map>
#include <cassert>
#include <malloc.h>
#include <stdio.h>
#include <string.h>
#include <sstream>
#include <string>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/epoll.h>
#include <fcntl.h>
#include <netdb.h>
#include <chrono>
#include <fstream>
#include "ringbuffer.h"
#include "threadsafe_queue.h"
#include "monitoring.h"
#include "monitoring_observer.h"
#include "http_utils.h"
#include "rollaut_async_stream.h"
#include "rollaut_wsserver_stream.h"
#include "../cryptopp/sha.h"
#include "../cryptopp/filters.h"
#include "../cryptopp/hex.h"
#include "rollaut_ws_interface.h"
#include "tests.h"
#include "rollaut_reg_manager.h"
#include "rollout_serialization.h"
#include "rollaut_rollout_db_importer.h"
#include "registry_utils.h"
#include "sm_compute_core_manager.h"
#include "websocket_utils.h"
namespace rollaut{
class Rollout_executer: public monitoring::Observer{
rapidjson::Document config;
std::mutex mtx;
short ws_api_port_base = 30000;
short ws_api_range = 10000;
short next_ws_api_port = ws_api_port_base;
bool pump_fake_data = false;
short get_ws_api_port (){
std::lock_guard<std::mutex> lk{mtx};
auto t = next_ws_api_port;
next_ws_api_port = ws_api_port_base+((next_ws_api_port-ws_api_port_base + 1) % ws_api_range);
return t;
}
public:
class config_failed : public std::runtime_error { public: config_failed(std::string w):std::runtime_error{w}{} };
class launch_failed : public std::runtime_error { public: launch_failed(std::string w):std::runtime_error{w}{} };
Rollout_executer(monitoring::Registry* reg,
bool fake_data):monitoring::Observer{reg,false},pump_fake_data{fake_data}
{
std::stringstream config_data;
{
std::ifstream is{"rollaut.json"};
for(;is;){
int ch;
if( (ch = is.get()) < 0) break;
config_data << (char)ch;
}
}
if (config_data.str().size() && config.Parse(config_data.str().data()).HasParseError())
throw config_failed{"Invalid configuration file: rollaut.json illformed."};
std::thread th{&Rollout_executer::do_observe,this};
th.detach();
}
void do_observe() override;
void do_execute(monitoring::Node* rollout);
};
class Rollout_observe_scheduled_time_and_flip_processing_status:
public monitoring::Observer{
public:
Rollout_observe_scheduled_time_and_flip_processing_status(monitoring::Registry* reg,
bool start_thread = true)
:monitoring::Observer{reg,start_thread}
{
}
void do_observe() override;
};
}
static void print_reg (monitoring::Registry& reg){
walk_reg(reg,[](monitoring::Node* n,int depth){
for(int i = 0; i != depth;++i) std::cout << " ";
std::cout << n->name <<" (" << n->node_type<< ", rev:"<< n->revision_id <<")";
if (reg_utils::removed(n)) std::cout << " removed";
else if (n->node_type & monitoring::NODE_TYPE_ATTRIBUTE){
if (n->to_string().length()>128)
std::cout << "=" << "...";
else
std::cout << "=" << n->to_string();
}
std::cout << std::endl;
});
}
void rollaut::Rollout_observe_scheduled_time_and_flip_processing_status::do_observe(){
using namespace std;
using namespace reg_utils;
for(;;){
std::vector<monitoring::Message> mv;
q->try_pop(mv);
if (!mv.size()){
std::this_thread::sleep_for(chrono::milliseconds{1000});
}
{
long sensible_time_interval_in_secs = 10;
auto al = reg->acquire_loc("rollouts/scheduled/");
//std::cerr << "FLIPPER" << std::endl;
auto rollouts = al.first;
if (!rollouts) continue;
time_t t = time(nullptr);
//std::cerr << "t:" << t << std::endl;
for(auto child:rollouts->children){
if (removed(child)) continue;
auto entity_attribute = child->get_attr("entity");
if (entity_attribute == nullptr) continue;
if (entity_attribute->to_string() != "rollout") continue;
auto rollout_entry = child;
auto processingstatus_attribute = rollout_entry->get_attr("processing_status");
if (processingstatus_attribute != nullptr && !reg_utils::removed(processingstatus_attribute)) continue;
auto scheduled_start_attr = rollout_entry->get_attr("scheduled_time_unix_time");
if (scheduled_start_attr == nullptr || reg_utils::removed(scheduled_start_attr) )
continue;
auto scheduled_start =
((monitoring::Attribute_node<time_t>*)scheduled_start_attr)->value;
//std::cerr << scheduled_start << std::endl;
//std::cerr << "diff:"<< t - scheduled_start << std::endl;
if (scheduled_start - sensible_time_interval_in_secs > t) continue;
if (scheduled_start + sensible_time_interval_in_secs < t) continue;
//std::cerr << "LIFT OFF" << std::endl;
/*auto dont_execute = rollout_entry->get_attr("dont_execute");
if (dont_execute != nullptr)
if (((monitoring::Attribute_node<bool>*)dont_execute)->value)
continue;*/
a("processing_status",std::string{"queued"},rollout_entry);
reg->trigger_observers(rollouts);
}
}
}
}
void rollaut::Rollout_executer::do_observe(){
using namespace std;
using namespace reg_utils;
std::map<monitoring::Node*,monitoring::revision_id_t> last_seen_revisions;
std::map<monitoring::Node*,
std::set<std::pair<int,rollaut::Websocket_interface*> > > subscribed_sockets;
auto rollouts = reg->register_observer("rollouts/scheduled",q);
last_seen_revisions[rollouts] = monitoring::revision_id_t{};
for(;;){
monitoring::Message m;
q->wait_and_pop(m);
auto watched_node = is_update_info(m);
if (watched_node.first){
auto last_seen_revision = last_seen_revisions[watched_node.second];
auto lck = reg->lock(watched_node.second);
auto new_rev = (watched_node.second)->revision_id;
last_seen_revisions[watched_node.second] = new_rev;
auto d = reg->compute_diff(last_seen_revision, watched_node.second);
for(auto entry : d.elems){
if (removed(entry.node)) continue;
auto entity_attribute = entry.node->get_attr("entity");
if (entity_attribute == nullptr) continue;
if (entity_attribute->to_string() != "rollout") continue;
auto rollout = entry.node;
for(int i = entry.children_start; i < entry.children_end;++i ){
auto attr = d.elems[i].node;
if (!(attr->node_type & monitoring::NODE_TYPE_ATTRIBUTE)) continue;
if (attr->name != "processing_status") continue;
if ( ((monitoring::Attribute_node<std::string>*)attr)->value == "queued" ){
std::thread run_ceps_core(
&Rollout_executer::do_execute,this,rollout);
run_ceps_core.detach();
}
}
}
lck.unlock();
}
}
}
void rollaut::Rollout_executer::do_execute(monitoring::Node* rollout){
std::vector<std::string> plugins;
std::string root_dir;
std::string runs_dir;
std::string rollout_dir;
std::string rollout_db_full = "db_rollout_dump.ceps";
bool smcores_entry_written = false;
auto fatal = [&](std::string m){
std::cerr << m << std::endl;
exit(1);
};
if (config.HasMember("root_folder")) root_dir = config["root_folder"].GetString();
if (root_dir.size() && root_dir[root_dir.size()-1]!='/') root_dir += "/";
runs_dir = root_dir+"runs";
rollout_dir = runs_dir + "/" + std::to_string(rollout->id)+"_"+std::to_string(rollout->revision_id);
{
auto dir_stream = opendir(runs_dir.c_str());
if (dir_stream == nullptr){
auto r = mkdir(runs_dir.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
if (r == -1) throw launch_failed{"Couldn't create '"+runs_dir+"'"};
} else closedir(dir_stream);
dir_stream = opendir(rollout_dir.c_str());
if (dir_stream == nullptr){
auto r = mkdir(rollout_dir.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
if (r == -1) throw launch_failed{"Couldn't create '"+rollout_dir+"'"};
dir_stream = opendir(rollout_dir.c_str());
if (dir_stream == nullptr) throw launch_failed{"Couldn't create '"+rollout_dir+"'"};
} else closedir(dir_stream);
}
if (config.HasMember("core_plugins") && config["core_plugins"].IsArray()){
auto const &l = config["core_plugins"];
for(auto it = l.Begin();it!=l.End();++it){
plugins.push_back(it->GetString());
}
}
{
std::ofstream os{rollout_dir+"/extract_rollout.ceps"};
os <<
R"(
// ${t.toUTCString()}
// Generated by RollAut
// DO NOT CHANGE
static_for(e:root.rollout_){
rollout{
e.name;
e.steps;
markets{
static_for(f:e.markets.market_details){
market{f.content();};
}
};
};
}
)";
std::ofstream os_ceps{rollout_dir+"/"+rollout_db_full};
auto ceps_attr = rollout->get_node("ceps");
if (ceps_attr == nullptr) fatal("No ceps");
ceps_attr = ceps_attr->get_attr("ceps");
if (ceps_attr == nullptr) fatal("No ceps");
os_ceps << ((monitoring::Attribute_node<std::string>*)ceps_attr)->value;
}
std::string plugins_opt;
for(auto p:plugins){
plugins_opt += "--plugin "+root_dir+"lib/"+p+" "; //space after --plugin so the plugin path is passed as a separate argument
}
auto ws_api_port = std::to_string(get_ws_api_port());
std::string remote = "localhost";
std::string cmd = root_dir + "ceps "+plugins_opt;
cmd += rollout_dir+"/"+rollout_db_full+" "+
root_dir+"db_descr/gen.ceps "+
rollout_dir+"/extract_rollout.ceps "+
root_dir+"transformations/rollout2worker.ceps "+
root_dir+"transformations/rollout2sm.ceps "+
root_dir+"transformations/driver4rollout_start_immediately.ceps "
+ "--ws_api "+ws_api_port+" "
+ "--quiet"
;
std::cerr << cmd << std::endl;
std::thread ceps_proc{
[=](){
std::cerr << system(cmd.c_str()) << std::endl;
}
};
ceps_proc.detach();
//Try to connect to cepS core
auto max_tries = 10;
struct addrinfo* result, * rp;
auto tries = 0;
int cfd = -1;
for(auto & i = tries;i!=max_tries;++i){
struct addrinfo hints = {};
hints.ai_canonname = nullptr;
hints.ai_addr = nullptr;
hints.ai_next = nullptr;
hints.ai_socktype = SOCK_STREAM;
hints.ai_family = AF_INET;
hints.ai_flags = AI_NUMERICSERV;
if (getaddrinfo(nullptr,ws_api_port.c_str(),&hints,&result) != 0)
fatal("getaddrinfo failed");
for(rp=result;rp;rp=rp->ai_next)
{
cfd = socket(rp->ai_family,rp->ai_socktype,rp->ai_protocol);
if(cfd == -1) continue;
if (connect(cfd,rp->ai_addr,rp->ai_addrlen) != -1) break;
close(cfd);
}
freeaddrinfo(result);
if (!rp) {
std::this_thread::sleep_for(std::chrono::milliseconds{1000});
continue;
}
break;
}
if (tries == max_tries)
fatal("Rollout_executer::do_execute(): Could not connect to cepS core. port="+ws_api_port);
{
auto acquire_result = reg->acquire_loc(rollout);
if (acquire_result.first){
auto attr_processing_status = rollout->get_attr("processing_status");
if (attr_processing_status != nullptr){
((monitoring::Attribute_node<std::string>*)attr_processing_status)->value = "running";
reg_utils::touch(attr_processing_status);
}
auto attr_start_time = rollout->get_attr("start_time_unix_time");
if (attr_start_time != nullptr){
((monitoring::Attribute_node<std::uint64_t>*)attr_start_time)->value = time(nullptr);
reg_utils::touch(attr_start_time);
} else reg_utils::a("start_time_unix_time",std::uint64_t{(std::uint64_t)time(nullptr)},rollout);
auto attr_health = rollout->get_attr("health");
if (attr_health != nullptr){
((monitoring::Attribute_node<std::string>*)attr_health)->value = "ok";
reg_utils::touch(attr_health);
} else reg_utils::a("health",std::string{"ok"},rollout);
auto attr_coverage = rollout->get_attr("coverage");
if (attr_coverage != nullptr){
((monitoring::Attribute_node<double>*)attr_coverage)->value = 0.0;
reg_utils::touch(attr_coverage);
} else reg_utils::a("coverage",double{0.0},rollout);
reg->trigger_observers(rollout->parent);
acquire_result.second.unlock();
} else{
}
}
//Establish websocket connection
std::string sec_phrase = "<PASSWORD>==";
std::string http_upgrade_request =
"GET / HTTP/1.1\r\nSec-WebSocket-Version: 13\r\nSec-WebSocket-Key: "+sec_phrase+"\r\n"+
"Connection: Upgrade\r\nUpgrade: websocket\r\nHost: localhost:"+ws_api_port+"\r\n\r\n";
send(cfd,http_upgrade_request.c_str(),http_upgrade_request.length(),0);
std::string unconsumed_data;
auto upgrade_reply = ws_utils::read_http_request(cfd,unconsumed_data);
if (!std::get<0>(upgrade_reply)) {
std::cerr << "FAILED" << std::endl;
return;
}
auto const & attrs = std::get<2>(upgrade_reply);
if (!ws_utils::field_with_content("Upgrade","websocket",attrs)) return;
auto send_ws_message = [](int sck,std::string const & reply) -> bool {
auto len = reply.length();
bool fin = true;
bool ext1_len = len >= 126 && len <= 65535;
bool ext2_len = len > 65535;
std::uint16_t header = 0;
if (fin) header |= 0x80;
if(!ext1_len && !ext2_len) header |= len << 8;
else if (ext1_len) header |= 126 << 8;
else header |= 127 << 8;
header |= 1;
auto wr = write(sck, &header,sizeof header );
if(wr != sizeof header) return false;
if (ext1_len)
{
std::uint16_t v = len;v = htons(v);
if( (wr = write(sck, &v,sizeof v )) != sizeof v) return false;
}
if (ext2_len)
{
std::uint64_t v = len;v = htobe64(v);
if( (wr = write(sck, &v,sizeof v )) != sizeof v) return false;
}
if ( (wr = write(sck, reply.c_str(),len )) != (int)len) return false;
return true;
};
send_ws_message(cfd,"SUBSCRIBE COVERAGE");
bool computation_done = false;
std::map<std::string,int> store2smsid;
std::map<int,double> sms2coverage;
std::vector<int> sms;
std::map<std::string, unsigned long long> cat2number;
std::map<std::string,std::string> cat2health = {
{"DoneState","complete"},{"FailState","fatal"},{"WarnState","critical"}
};
std::vector<std::string> cat_rank = {"fatal","critical","complete"};
std::vector<unsigned long long> cat_rank_id;
std::map<unsigned long long,std::string> catnumber2health;
std::map<int,unsigned long long> sms2catnumber;
std::map<int,int> active_categories;
std::map<int,int> smsidx2stepidx;
std::map<int,int> child_sms2root;
std::map<int,int> smsidx2current_step;
std::map<int,std::vector<int>> smsidx2exiting_times;
std::map<int,std::vector<int>> smsidx2entering_times;
int steps_per_store = 0;
auto compute_coverage = [&](rapidjson::Document & msg){
std::cerr << "******** compute_coverage" << std::endl;
if (sms2coverage.size() == 0) return 0.0;
auto tpl_transition_cov = msg["toplevel_sms_transition_coverage"].GetArray();
auto tpl_sms = msg["toplevel_sms"].GetArray();
for(std::size_t i = 0;i != tpl_transition_cov.Size();++i){
if (i >= tpl_sms.Size()) break;
sms2coverage[tpl_sms[i].GetInt()] = tpl_transition_cov[i].GetDouble();
}
double sum = 0.0;
double factor = 1.0/((double)sms2coverage.size());
for(auto e: sms2coverage){
sum += e.second;
}
return sum * factor;
};
//Read state changes and report them
for(;;){
auto frm = ws_utils::read_websocket_frame(cfd);
if (!frm.first) {
std::cerr << "***CONNECTION LOST!" << std::endl;
break;
}
std::vector<unsigned char> payload = std::move(frm.second.payload);
while (!frm.second.fin){
frm = ws_utils::read_websocket_frame(cfd);
if (!frm.first) break;
payload.reserve(payload.size()+frm.second.payload.size());
payload.insert(payload.end(),frm.second.payload.begin(),frm.second.payload.end());
}
if (!frm.first) break;
if(frm.second.opcode == 1) {
std::string s; s.resize(payload.size());
for(size_t j = 0; j < payload.size();++j) s[j] = payload[j];
//std::cerr << std::endl<< std::endl<< std::endl;
//std::cerr << s << std::endl<< std::endl<< std::endl;
rapidjson::Document msg;
if (msg.Parse(s.data()).HasParseError()) continue;
bool trigger_observers = false;
bool reset_health = false;
std::cerr << "********* " << s << std::endl;
if (msg.HasMember("what") && std::string{msg["what"].GetString()} == "init"){
if (!smcores_entry_written){
auto acquire_result = reg->acquire_loc("smcores/rollouts/scheduled/");
using namespace reg_utils;
if (acquire_result.first){
auto smcores = acquire_result.first;
std::string entry_name = std::to_string(rollout->id);
monitoring::Node* entry = nullptr;
for(auto child: smcores->children){
if (child->name == entry_name) {entry=child;break;}
}
if (entry != nullptr) {
for(auto ch:entry->children){
delete ch;
}
entry->children.clear();
}
if (entry == nullptr)
entry = reg_utils::e(entry_name,smcores);
reg_utils::a("ref_id",rollout->id,entry);
reg_utils::a("ref_name",rollout->name,entry);
reg_utils::a("remote",remote,entry);
reg_utils::a("ws_api",ws_api_port,entry);
reg->trigger_observers(smcores);
smcores_entry_written = true;
acquire_result.second.unlock();
}
}
active_categories.clear();
reset_health = true;
auto tpl_sms = msg["toplevel_sms"].GetArray();
active_categories[0] = tpl_sms.Size();
auto tpl_labels = msg["toplevel_sms_labels"].GetArray();
auto cat_mapping = msg["category_mapping"].GetArray();
auto labeled_states = msg["labeled_states"].GetArray();
auto root2childstates = msg["root2childstates"].GetArray();
auto n = std::min(tpl_sms.Size(),tpl_labels.Size());
sms.clear();sms2coverage.clear();store2smsid.clear();
for(decltype(n) i = 0; i != n;++i){
store2smsid[tpl_labels[i].GetString()] = tpl_sms[i].GetInt();
smsidx2current_step[tpl_sms[i].GetInt()] = 0;
}
for(auto it = tpl_sms.Begin();it!=tpl_sms.End();++it){
sms.push_back(it->GetInt());
}
//cat_mapping is a flat array of (category name, bit index) pairs, hence the double increment
for (std::size_t i = 0; i < cat_mapping.Size();++i,++i){
auto cat_str = cat_mapping[i].GetString();
auto cat_id = 1 << cat_mapping[i+1].GetInt();
cat2number[cat_str] = cat_id;
catnumber2health[cat_id] = cat2health[cat_str];
}
catnumber2health[0] = "ok";
//Compute categories's rank
cat_rank_id.clear();
cat_rank_id.resize(cat_rank.size(),0);
for(std::size_t i = 0; i != cat_rank.size();++i){
for(auto e : catnumber2health)
if (e.second == cat_rank[i]){cat_rank_id[i]=e.first;break;}
}
//Compute Mapping of sms index to step
int max_step = -1;
std::vector<int> done_states;
for(std::size_t i = 0; i < labeled_states.Size(); ++i){
std::string label = labeled_states[i].GetString();++i;
std::vector<int> v;
for(; i != labeled_states.Size(); ++i){
if (labeled_states[i].IsString()){
--i; break;
}
v.push_back(labeled_states[i].GetInt());
}
int step_idx = -1;
for(ssize_t i = label.length(); i!=0;--i){
if (label[i]=='$' && (i-1 >= 0) && label[i-1]=='$'){
std::size_t idx_pos = i+1;
if (idx_pos + 5 /*length of 'step_'*/ >= label.length()) break;
step_idx = std::atoi(label.c_str()+idx_pos + 5);
if (step_idx == 0) step_idx = -1;
}
}
if (step_idx > max_step) max_step = step_idx;
if (step_idx == -1) {
done_states = v;
continue;
}
for(auto e:v) smsidx2stepidx[e] = step_idx - 1;
//std::cerr << step_idx << ":";
//for(auto e : v){ std::cerr << e << " ";}
//std::cerr << std::endl;
}
steps_per_store = max_step + 1;
for(auto e:done_states) smsidx2stepidx[e] = max_step;
//Compute child_sms2root
for(std::size_t i = 0; i < root2childstates.Size();++i){
auto rootsms = root2childstates[i].GetInt();++i;
for(;i < root2childstates.Size()&&root2childstates[i].GetInt()!=0;++i)
child_sms2root[root2childstates[i].GetInt()]=rootsms;
}
//for(auto e:child_sms2root ) std::cerr << "** " << e.first << " -> " << e.second << std::endl;
for(auto e:store2smsid){
smsidx2exiting_times[e.second] = std::vector<int>{};
smsidx2exiting_times[e.second].resize(steps_per_store,-1);
smsidx2entering_times[e.second] = std::vector<int>{};
smsidx2entering_times[e.second].resize(steps_per_store,-1);
}
}
if (msg.HasMember("category_changes")){
auto cat_changes = msg["category_changes"].GetArray();
for(std::size_t i = 0; i != cat_changes.Size();++i){
auto smsidx = cat_changes[i++].GetInt();
auto cat = cat_changes[i].GetInt();
sms2catnumber[smsidx] = 0;
for(auto e : cat_rank_id){
if (e & cat){
--active_categories[sms2catnumber[smsidx]];
sms2catnumber[smsidx] = e;
++active_categories[e];
break;
}
}
}
}
if (msg.HasMember("covered_states") && msg["covered_states"].IsArray()){
std::map<int,int> smsidx2max_step_idx_seen_covered;
auto newly_covered = msg["covered_states"].GetArray();
for(std::size_t i = 0; i != newly_covered.Size();++i){
auto it = child_sms2root.find(newly_covered[i].GetInt());
if (it == child_sms2root.end()) continue;
int root = it->second;
auto step_idx_it = smsidx2stepidx.find(newly_covered[i].GetInt());
if (step_idx_it == smsidx2stepidx.end()) continue;
auto step_idx = step_idx_it->second;
if (smsidx2max_step_idx_seen_covered[root] < step_idx)
smsidx2max_step_idx_seen_covered[root] = step_idx;
}
for(auto e:smsidx2max_step_idx_seen_covered){
smsidx2current_step[e.first] = e.second;
}
}
//covered_states
//Computation of overall health
auto overall_health = 0;
for(auto e : cat_rank_id){
if (active_categories[e] > 0){
overall_health = e;
break;
}
}
if (catnumber2health[overall_health] == "complete"){
if (active_categories[0] > 0) overall_health = 0;
}
//Entering-/Exitingtimes
if (msg.HasMember("entering_times") && msg["entering_times"].IsArray()){
trigger_observers = true;
auto v = msg["entering_times"].GetArray();
//std::cerr << "entering_times" << std::endl;
for(std::size_t i = 0; i < v.Size();++i){
auto sms_idx = v[i].GetInt();
auto sec = v[i+1].GetInt();
//auto msec = v[i+2].GetInt();
//auto nsec = v[i+3].GetInt();
i += 3;
auto it_root = child_sms2root.find(sms_idx);
if (it_root == child_sms2root.end()) continue;
auto it_step = smsidx2stepidx.find(sms_idx);
if (it_step == smsidx2stepidx.end()) continue;
// std::cerr << it_root->second << " " << it_step->second <<" "<< sec<<std::endl;
smsidx2entering_times[it_root->second][it_step->second] = sec;
}
}
if (msg.HasMember("exiting_times") && msg["exiting_times"].IsArray()){
trigger_observers = true;
auto v = msg["exiting_times"].GetArray();
for(std::size_t i = 0; i < v.Size();++i){
auto sms_idx = v[i].GetInt();
auto sec = v[i+1].GetInt();
//auto msec = v[i+2].GetInt();
//auto nsec = v[i+3].GetInt();
i += 3;
auto it_root = child_sms2root.find(sms_idx);
if (it_root == child_sms2root.end()) continue;
auto it_step = smsidx2stepidx.find(sms_idx);
if (it_step == smsidx2stepidx.end()) continue;
smsidx2exiting_times[it_root->second][it_step->second] = sec;
}
}
auto acquire_result = reg->acquire_loc(rollout);
auto attr_health = rollout->get_attr("health");
if (attr_health != nullptr){
((monitoring::Attribute_node<std::string>*)attr_health)->value
= catnumber2health[overall_health];
reg_utils::touch(attr_health);
} else reg_utils::a("health",catnumber2health[overall_health],rollout);
{
auto attr_stores =
(monitoring::Attribute_node<std::unique_ptr<monitoring::aval>>*)
rollout->get_attr("stores");
reg_utils::touch(attr_stores);
auto store_list = (monitoring::aval_list*) attr_stores->value.get();
for(auto & e_ : store_list->elems){
auto& e = *((monitoring::aval_obj*)e_);
auto& label = *((monitoring::aval_entry<std::string>*)e.elems["name"]);
auto& health = *((monitoring::aval_entry<std::string>*)e.elems["health"]);
auto& current_state = *((monitoring::aval_entry<int>*)e.elems["current_state"]);
auto smsidx = store2smsid[label.value];
health.value = catnumber2health[sms2catnumber[smsidx]];
current_state.value = smsidx2current_step[smsidx];
auto& entering_times = *((monitoring::aval_list*)e.elems["entering_times"]);
auto& exiting_times = *((monitoring::aval_list*)e.elems["exiting_times"]);
if (entering_times.elems.size() != 0)
for(auto e: entering_times.elems) delete e;
entering_times.elems.clear();
for(auto e : smsidx2entering_times[store2smsid[label.value]]){
//std::cerr << e << std::endl;
if (e == -1) break;
entering_times.elems.push_back(new monitoring::aval_entry<int>{e});
}
if (exiting_times.elems.size() != 0)
for(auto e: exiting_times.elems) delete e;
exiting_times.elems.clear();
for(auto e : smsidx2exiting_times[store2smsid[label.value]]){
if (e == -1) break;
exiting_times.elems.push_back(new monitoring::aval_entry<int>{e});
}
}
}
if (msg.HasMember("toplevel_sms_transition_coverage")){
auto cov = compute_coverage(msg);
{
if (acquire_result.first){
auto attr_coverage = rollout->get_attr("coverage");
if (attr_coverage != nullptr){
((monitoring::Attribute_node<double>*)attr_coverage)->value = cov;
reg_utils::touch(attr_coverage);
} else reg_utils::a("coverage",double{cov},rollout);
if (cov >= 1.0 && !computation_done){
computation_done = true;
auto attr_end_time = rollout->get_attr("end_time_unix_time");
if (attr_end_time != nullptr){
((monitoring::Attribute_node<std::uint64_t>*)attr_end_time)->value = time(nullptr);
reg_utils::touch(attr_end_time);
} else reg_utils::a("end_time_unix_time",std::uint64_t{(std::uint64_t)time(nullptr)},rollout);
auto attr_processing_status = rollout->get_attr("processing_status");
if (attr_processing_status != nullptr){
((monitoring::Attribute_node<std::string>*)attr_processing_status)->value = "complete";
reg_utils::touch(attr_processing_status);
}
}
auto attr_stores =
(monitoring::Attribute_node<std::unique_ptr<monitoring::aval>>*)
rollout->get_attr("stores");
reg_utils::touch(attr_stores);
auto store_list = (monitoring::aval_list*) attr_stores->value.get();
for(auto & e_ : store_list->elems){
auto& e = *((monitoring::aval_obj*)e_);
auto& coverage = *((monitoring::aval_entry<double>*)e.elems["coverage"]);
auto& label = *((monitoring::aval_entry<std::string>*)e.elems["name"]);
coverage.value = sms2coverage[store2smsid[label.value]];
}
trigger_observers = true;
} else{
}
}
}
if (trigger_observers) reg->trigger_observers(rollout->parent);
acquire_result.second.unlock();
}
}//for
}
std::map<std::string, bool*> options;
int main(int argc, char *argv[])
{
bool ceps_executer_pump_test_data = false;
options["--ceps-executer-pump-test-data"] = &ceps_executer_pump_test_data;
monitoring::Registry reg;
rollaut::Registry_manager regm {reg};
regm.get_reg().add_node("rollouts");
regm.get_reg().add_node("rollouts/scheduled");
regm.get_reg().add_node("smcores");
regm.get_reg().add_node("smcores/rollouts");
regm.get_reg().add_node("smcores/rollouts/scheduled");
monitoring::Statemachine_execution_core_manager smcore_observer{®,""};
monitoring::Observer observer{®};
observer.watch("rollouts/scheduled");
smcore_observer.watch("smcores/rollouts/scheduled");
std::string hostname; if (getenv("ROLLAUT_DB_HOST") != nullptr) hostname = getenv("ROLLAUT_DB_HOST");
std::string user; if (getenv("ROLLAUT_DB_USER") != nullptr) user = getenv("ROLLAUT_DB_USER");
std::string passwd; if (getenv("ROLLAUT_DB_PASSWD") != nullptr) passwd = getenv("ROLLAUT_DB_PASSWD");
std::string timestamp;
std::string database; if (getenv("ROLLAUT_DB_DB") != nullptr) database = getenv("ROLLAUT_DB_DB");
rollaut::Rollout_db_importer import_rollouts_from_db{
®,
hostname,
user,
passwd,
timestamp,
database};
rollaut::Rollout_observe_scheduled_time_and_flip_processing_status flipper{®};
if (argc > 1){
for (int i = 1; i != argc; ++i){
auto it = options.find(argv[i]);
if (it == options.end()) continue;
*it->second = true;
}
}
rollaut::Rollout_executer execute_rollouts{®,ceps_executer_pump_test_data};
regm.run();
#ifdef RUN_STATE_AND_STREAM_SERVICE_TESTS
tests::test_state_stream_service();
#endif
#ifdef RUN_RINGBUFFER_TESTS
tests::test_ringbuffer();
#endif
#ifdef RUN_THREAD_SAFE_QUEUE_TESTS
tests::test_thread_safe_queue_with_ringbuffer();
#endif
#ifdef RUN_BASIC_REGISTRY_TESTS
tests::test_add_node_and_locks();
#endif
#ifdef RUN_BASIC_OBSERVER_TESTS
tests::test_basic_observer();
#endif
#ifdef RUN_BASIC_WEBSOCKET_TESTS
tests::test_echo_webserver();
#endif
exit(0);
}
|
/**
* ColorHueView is a view used to pick different colors based on a color wheel concept.
* The code is influenced from the Android API demo ColorPickerDialog, but made into a View.
*
* @author baracudda
*/
public class ColorHueView extends View {
private DisplayMetrics currDisplayMetrics = null;
private int CENTER_X = 100;
private int CENTER_Y = 100;
private int CENTER_RADIUS = 32;
private int VIEW_MARGIN = 4;
private final int[] mColorSet = new int[] {
0xFFFF0000, 0xFFFF00FF, 0xFF0000FF, 0xFF00FFFF, 0xFF00FF00,
0xFFFFFF00, 0xFFFF0000
};
private Paint mCenterButton;
private Paint mOuterContainer;
private OnColorHueSelectedListener mSelectedListener;
private OnColorHueChangedListener mChangedListener;
private boolean mTrackingCenter = false;
private boolean mHighlightCenter = false;
public interface OnColorHueChangedListener {
void onColorHueChanged(int aColor);
}
public interface OnColorHueSelectedListener {
void onColorHueSelected(int aColor);
}
public void setOnHueChangedListener(OnColorHueChangedListener aListener) {
mChangedListener = aListener;
}
public void setOnHueSelectedListener(OnColorHueSelectedListener aListener) {
mSelectedListener = aListener;
}
protected int dipsToPixels(int aDip) {
float px;
px = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, aDip, currDisplayMetrics);
return Math.round(px);
}
public ColorHueView(Context aContext, AttributeSet aAttrSet) {
super(aContext, aAttrSet);
initialize(Color.WHITE);
}
public ColorHueView(Context aContext, OnColorHueChangedListener aHueChangedListener) {
super(aContext);
initialize(Color.WHITE);
mChangedListener = aHueChangedListener;
}
public ColorHueView(Context aContext, OnColorHueChangedListener aHueChangedListener, int aColor) {
super(aContext);
initialize(aColor);
mChangedListener = aHueChangedListener;
}
protected void initialize(int aInitialColor) {
currDisplayMetrics = getContext().getResources().getDisplayMetrics();
CENTER_X = dipsToPixels(CENTER_X);
CENTER_Y = dipsToPixels(CENTER_Y);
CENTER_RADIUS = dipsToPixels(CENTER_RADIUS);
mOuterContainer = new Paint(Paint.ANTI_ALIAS_FLAG);
Shader s = new SweepGradient(0,0,mColorSet,null);
mOuterContainer.setShader(s);
mOuterContainer.setStyle(Paint.Style.STROKE);
mOuterContainer.setStrokeWidth(dipsToPixels(48));
mCenterButton = new Paint(Paint.ANTI_ALIAS_FLAG);
mCenterButton.setStrokeWidth(5);
mCenterButton.setStyle(Paint.Style.FILL);
setColor(aInitialColor);
}
public void setColor(int aColor) {
mCenterButton.setColor(aColor);
invalidate();
}
private RectF mOnDrawTempCenterPoint = new RectF();
@Override
protected void onDraw(Canvas aCanvas) {
float theRadius = CENTER_X - (mOuterContainer.getStrokeWidth()/2);
aCanvas.translate(CENTER_X, CENTER_Y);
mOnDrawTempCenterPoint.set(-theRadius,-theRadius,theRadius,theRadius);
aCanvas.drawOval(mOnDrawTempCenterPoint,mOuterContainer);
aCanvas.drawCircle(0,0,CENTER_RADIUS,mCenterButton);
if (mTrackingCenter) {
int saveColor = mCenterButton.getColor(); //setting Alpha will change the color
mCenterButton.setStyle(Paint.Style.STROKE);
mCenterButton.setAlpha((mHighlightCenter)?0xFF:0x80);
aCanvas.drawCircle(0,0,CENTER_RADIUS+mCenterButton.getStrokeWidth(),mCenterButton);
mCenterButton.setStyle(Paint.Style.FILL);
mCenterButton.setColor(saveColor);
}
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
int theSize = Math.round(CENTER_X*2)+VIEW_MARGIN;
setMeasuredDimension(theSize,theSize);
}
@Override
public boolean onTouchEvent(MotionEvent aEvent) {
float x = aEvent.getX()-CENTER_X;
float y = aEvent.getY()-CENTER_Y;
boolean isInCenter = (Math.sqrt(x*x + y*y) <= CENTER_RADIUS);
switch (aEvent.getAction()) {
case MotionEvent.ACTION_DOWN:
mTrackingCenter = isInCenter;
if (isInCenter) {
mHighlightCenter = true;
invalidate();
break;
}
// intentional fall-through: a press outside the center starts hue tracking below
case MotionEvent.ACTION_MOVE:
if (mTrackingCenter) {
if (mHighlightCenter != isInCenter) {
mHighlightCenter = isInCenter;
invalidate();
}
} else {
float theAngle = (float) Math.atan2(y,x);
// need to turn angle [-PI, PI] into unit range [0, 1]
float theColorPoint = theAngle/(2*(float)Math.PI);
if (theColorPoint<0) {
theColorPoint += 1;
}
int theColor = BitsColorUtils.interpolateColor(mColorSet,theColorPoint);
mCenterButton.setColor(theColor);
if (mChangedListener!=null)
mChangedListener.onColorHueChanged(theColor);
invalidate();
}
break;
case MotionEvent.ACTION_UP:
if (mTrackingCenter) {
if (isInCenter && mSelectedListener!=null) {
mSelectedListener.onColorHueSelected(mCenterButton.getColor());
}
mTrackingCenter = false;
invalidate();
}
break;
}
return true;
}
} |
import math

t = int(input())

def printDivisors(curPrimes, curPrime, primeDict, forbidden, printed):
    # Multiply every divisor collected so far by the successive powers of
    # curPrime, printing each new divisor exactly once.
    # (forbidden is accepted for symmetry with the call site but is unused here.)
    l = len(curPrimes)
    num = 1
    for i in range(primeDict[curPrime]):
        num *= curPrime
        for j in range(l):
            temp = num * curPrimes[j]
            curPrimes.append(temp)
            if temp not in printed:
                print(int(temp), end=" ")
                printed[temp] = True
    return [curPrimes, printed]

for o in range(t):
    printed = {}
    m = int(input())
    # Factorize m: primeList holds the distinct primes, primeDict their exponents.
    primeList = []
    primeDict = {}
    for i in range(2, math.ceil(math.sqrt(m)) + 1):
        if m % i == 0:
            primeList.append(i)
            primeDict[i] = 0
            while m % i == 0:
                m = m // i
                primeDict[i] = primeDict[i] + 1
    if m != 1:
        primeList.append(m)
        primeDict[m] = 1
    isOne = False
    if len(primeList) == 2:
        if primeDict[primeList[0]] == 1 and primeDict[primeList[1]] == 1:
            isOne = True
    if isOne:
        print(int(primeList[1]), int(primeList[1] * primeList[0]), int(primeList[0]))
        print(1)
    elif len(primeList) <= 2:
        if len(primeList) == 1:
            num = 1
            for i in range(primeDict[primeList[0]]):
                num *= primeList[0]
                print(int(num), end=" ")
            print()
            print(0)
        else:
            primeOne = primeList[0]
            primeTwo = primeList[1]
            forbidden1 = 0
            forbidden2 = 0
            if primeDict[primeOne] != 1:
                forbidden1 = primeOne * primeTwo
                forbidden2 = primeOne * primeTwo * primeOne
            else:
                forbidden1 = primeOne * primeTwo
                forbidden2 = primeOne * primeTwo * primeTwo
            print(int(forbidden1), end=" ")
            printed[forbidden1] = True
            d = [1]
            for i in range(primeDict[primeOne]):
                val = d[-1] * primeOne  # renamed from t to avoid shadowing the test-case count
                d.append(val)
                if val not in printed:
                    print(int(val), end=" ")
                    printed[val] = True
            print(int(forbidden2), end=" ")
            printed[forbidden2] = True
            cur = 1
            for i in range(primeDict[primeTwo]):
                cur *= primeTwo
                for j in d:
                    temp = int(cur * j)
                    if temp not in printed:
                        print(temp, end=" ")
                        printed[temp] = True
            print()
            print(0)
    else:
        curPrimes = [1]
        for i in range(len(primeList) - 1, -1, -1):
            curPrime = primeList[i]
            forbidden = 0
            if i == len(primeList) - 1:
                forbidden = curPrime * primeList[0]
            else:
                forbidden = curPrime * primeList[i + 1]
            print(int(forbidden), end=" ")
            printed[forbidden] = True
            ret = printDivisors(curPrimes, curPrime, primeDict, forbidden, printed)
            curPrimes = ret[0]
            printed = ret[1]
        print()
        print(0)
|
from social_core.backends.persona import PersonaAuth
|
Jide Olugbodi
Club career
Olugbodi played for Bangladeshi side Mohammedan SC before moving to Swiss side Schaffhausen in 1997. He then joined German side Rot-Weiß Oberhausen staying for two years before joining Austrian club Austria Lustenau.
In October 2003, Olugbodi joined English Second Division side Brentford. He made his debut in the 3–0 home defeat to Sheffield Wednesday on 4 October, replacing Eddie Hutchinson as a substitute in the 72nd minute. Olugbodi signed a new short-term contract with Brentford in early November, keeping him at the club until December. He made a total of five appearances for Brentford in all competitions, including the Second Division, the Football League Trophy and the FA Cup, without scoring a goal.
International career
Olugbodi was called up to the Nigeria squad to face Jamaica at Loftus Road in London on 7 November 2001. Due to injury, he was forced to withdraw from the 54-man Nigeria squad that was due to play Paraguay in March 2002, along with other forwards John Utaka and Dele Adebola. |
OBJECTIVE To describe a case of tumor implantation at the site of resection in the ureteral orifice, following nephroureterectomy combined with resection of the bladder cuff. METHODS/RESULTS A patient with high grade and stage carcinoma of the renal pelvis and no previous bladder tumor presented with a perivesical mass 6 months following nephroureterectomy. She had a high grade transitional cell carcinoma that infiltrated all of the bladder wall, vagina and parametrium in the region of the meatus that had been resected endoscopically. CONCLUSIONS The present case indicates there may be histological or technique-related factors that facilitate tumor cell implantation in the deep perivesical region after this procedure is performed. It is necessary to identify these factors before this procedure is used widely.
Botanical classification: Dianthus caryophyllus cultivar Yoder Lady.
The present invention relates to a new and distinct cultivar of Carnation plant, botanically known as Dianthus caryophyllus and hereinafter referred to by the name ‘Yoder Lady’.
The new Carnation is a product of a planned breeding program conducted by the Inventor in Salinas, Calif., and Suba, Cundinamarca, Colombia, South America. The objective of the breeding program is to create new cut Carnation cultivars having long flowering stems, early flowering, attractive flower color, and good flower form and substance.
The new Carnation originated from a cross-pollination made by the Inventor in 1994, in Salinas, Calif., of a proprietary selection of Carnation identified as code number 0110, not patented, as the female, or seed, parent, with the Carnation cultivar Jazz, disclosed in U.S. Plant Pat. No. 10,121, as the male, or pollen, parent.
The cultivar Yoder Lady was discovered and selected by the Inventor as a flowering plant within the progeny of the stated cross-pollination in a controlled environment in Suba, Cundinamarca, Colombia, South America in October, 1995. The selection of this plant was based on its flower color and good flower form and substance.
Asexual reproduction of the new Carnation by terminal cuttings in Suba, Cundinamarca, Colombia, South America since November, 1995, has shown that the unique features of this new Carnation are stable and reproduced true to type in successive generations.
The cultivar Yoder Lady has not been observed under all possible environmental conditions. The phenotype may vary somewhat with variations in environment such as temperature and light intensity without, however, any variance in genotype.
The following traits have been repeatedly observed and are determined to be the unique characteristics of ‘Yoder Lady’. These characteristics in combination distinguish ‘Yoder Lady’ as a new and distinct cultivar of Carnation:
1. Orange-colored flowers with occasional random orange red-colored streaks and splashes.
2. Early and freely flowering habit with about 12 to 14 flowers per flowering stem.
3. Fragrant flowers.
4. Good postproduction longevity with flowers maintaining good substance and color for about ten days in an interior environment after shipping.
5. Resistance to Fusarium oxysporum.
Plants of the new Carnation can be compared to plants of the female parent selection. In side-by-side comparisons conducted in Suba, Cundinamarca, Colombia, South America, plants of the new Carnation and female parent selection differed primarily in flower coloration as plants of the female parent selection had pink-colored flowers.
Plants of the new Carnation can be compared to plants of the male parent, the cultivar Jazz. In side-by-side comparisons conducted in Suba, Cundinamarca, Colombia, South America, plants of the new Carnation and the cultivar Jazz differed in the following characteristics:
1. Plants of the new Carnation flowered one to two weeks earlier than plants of the cultivar Jazz.
2. Plants of the new Carnation and the cultivar Jazz differed in flower coloration as flowers of plants of the cultivar Jazz had more red-colored streaks and splashes than plants of the new Carnation. |
Stochastic Fluctuations and Distributed Control of Gene Expression Impact Cellular Memory

Despite the stochastic noise that characterizes all cellular processes, cells are able to maintain, and transmit to their daughter cells, a stable level of gene expression. In order to better understand this phenomenon, we investigated the temporal dynamics of gene expression variation using a double reporter gene model. We compared cell clones with transgenes coding for highly stable mRNA and fluorescent proteins with clones expressing destabilized mRNA-s and proteins. Both types of clones displayed strong heterogeneity of reporter gene expression levels. However, cells expressing stable gene products produced daughter cells with similar levels of reporter proteins, while in cell clones with short mRNA and protein half-lives the epigenetic memory of the gene expression level was completely suppressed. Computer simulations also confirmed the role of mRNA and protein stability in the conservation of constant gene expression levels over several cell generations. These data indicate that the conservation of a stable phenotype in a cellular lineage may largely depend on the slow turnover of mRNA-s and proteins.

Introduction

Specific gene regulation mechanisms are believed to ensure a constant expression level and guarantee long-term phenotypic stability of cells and cell lineages. However, gene expression, like biochemical reactions in general, is a probabilistic process, essentially because of the low copy number of participating molecules. As a result, mRNA and protein levels vary widely even between cells of a clonal population exposed to a homogeneous environment. This general phenomenon, which concerns every gene in every cell type of multicellular organisms, poses a challenge to our understanding of the phenotypic stability of the cell. It has been shown that the energetic cost of suppressing this noise by specific regulatory mechanisms is very high.
This makes it impossible to suppress the fluctuations in individual cells beyond a certain limit. The difficulty is similar if we want to explain how a phenotype can be stably transmitted over cell divisions in a cell lineage. Again, this role is typically attributed to memory mechanisms of gene transcription regulation. However, these mechanisms are also blurred by noise. Gene expression is a multistep process that includes chromatin remodeling, transcription and translation, and mRNA and protein degradation. All these steps are noisy and may contribute to the random variation of protein abundance in individual cells. It is unclear how variations generated during these different steps influence the stable transmission of a phenotype over cell divisions. Most published studies used fixed time-point analysis of isogenic cell populations with the implicit assumption of ergodicity. This approach allowed the identification of many different sources of variation: transcription, chromatin dynamics, and unequal partitioning of molecules during cell division. However, fixed time-point studies provided no direct information about the frequency and temporal dynamics of the variation. Rapid fluctuations can produce population patterns apparently similar to those of slow variations. However, a population of rapidly fluctuating cells may display radically different biological properties than a population of slowly fluctuating individuals. A high frequency of fluctuations may endow the cell with the capacity for rapid adaptation, while a low frequency promotes a stable phenotype. The main steps of the gene expression process may contribute differently to the overall dynamics of the fluctuations. The identification of these contributions may reveal potentially critical stages that induce differentiation or set periods of phenotypic stability. Therefore, we analyzed the dynamics of protein abundance variation in human cells using our dual reporter gene experimental system.
In this model, single copies of two different reporter genes were introduced into independent genomic integration sites of cells. The two transgenes differed only by a small number of nucleotides: one encoded a cyan and the other a yellow fluorescent protein (CFP and YFP), and both transgenes included a CMV promoter as a regulatory sequence. The use of reporter genes with a viral promoter and coding for fluorescent proteins had a number of advantages in the context of our work. The fluctuations of transcription from such a promoter are expected to be independent of specific gene networks and dependent only on the variation of the overall transcriptional potential. The fluorescent proteins used are devoid of cellular functions; hence, they are not targeted by specific regulatory events, nor do they impact on the gene expression process itself. Our previous study revealed a markedly heterogeneous gene expression level within clonal populations of cells. The expression of both transgenes varied considerably between the cells of the same clone and correlated only poorly with each other within the same individual cell. This suggested that the chromatin context around the transgene insertion sites is the major determinant of the gene expression level and the promoter has unexpectedly little effect on it. At the same time, the overall gene expression level distribution in each clonal population displayed remarkable stability over time, independently of the average expression levels. The profile of the clones described in Neildez et al. remained essentially identical over the 5 years the cells spent in culture, with subpopulations of four types (YFP+/CFP+; YFP+/CFP-; YFP-/CFP+ and YFP-/CFP-, Fig. 1) remaining in strikingly constant proportions. The population-level stability can be based (1) either on the ergodic properties of the cell, where rapid fluctuations in individual cells continuously restore the overall distribution; (2)
or on a cellular memory mechanism that maintains the expression level of the mother cell in the daughter cells. In this case, the heterogeneity in the population is mainly generated at the early stages of clonal expansion and is followed by stability in the cell lineages. In order to differentiate between the two possibilities, we investigated the temporal dynamics of the reporter gene expression variation at different time-scales: over several days, weeks or within a single cell cycle. We observed a slow and independent diversification of expression levels of CFP and YFP reporters in sub-lineages within the same clonal populations, suggesting the process involves the opposing action of variance-generating and "memory" mechanisms. To clarify the relative contribution of the different stages of the gene expression process to the equilibrium between variation and memory in cell lineages, we established new cell clones with decreased mRNA and protein stability. The "memory" effect was completely lost in these clones. These observations demonstrate that, in addition to chromatin-based mechanisms, protein and mRNA stability are crucial to maintain a stable level of gene expression in a cell lineage.

Parental clone characterization

Essentially all of the eight clones described earlier displayed a similar stable qualitative profile independently of the average expression level of each reporter gene. This suggests that all 16 chromatin integration sites impacted the long-term fluctuation profile in the same manner. Therefore, a single representative clone was chosen from the collection for detailed analysis. Although genetically identical, the cells of the clone displayed highly variable YFP and CFP fluorescence levels (Fig. 1). First, we verified whether the differences in YFP and CFP protein levels between the cells reflect the transcription of the genes.
Using Q-RT-PCR, we measured the mRNA levels of the two transgenes in flow cytometry-sorted non-fluorescent, low-fluorescent and high-fluorescent cells (S1 Fig.). The relative level of transcripts correlated well with the average fluorescence intensity, and no YFP and CFP transcripts were detected in non-fluorescent cells. Therefore, the protein levels correlate with the transcript levels. A small fraction of cells of the clonal population did not express any of the transgenes (YFP-/CFP-) or only one of them (YFP-/CFP+ or YFP+/CFP-). We wondered if the silent state is determined by the epigenetic state of the corresponding promoter. We therefore analyzed the DNA methylation pattern of the CMV promoter in expressing and non-expressing cells using the bisulfite sequencing method. The promoter was found unmethylated irrespective of the expression of the transgene (S2 Fig.) and cannot account for the transcriptional state, further reinforcing the idea that the observed silencing is not due to a specific regulatory mechanism acting directly on the CMV promoter. We suspected the wider genomic context to be at the origin of the transgene- and cell-dependent variation of expression levels, because the main difference between the two transgenes resided in their genomic integration sites. Unfortunately, the cloning, identification and analysis of the integration sites (S3 Fig.) provided no specific clues to understand why the transgenes were silenced in a fraction of the cells, why the expression levels varied so widely, and why the two transgenes displayed independent expression levels. The YFP-coding transgene was integrated in intron 3 of the Glis3 gene on Chr. 9p24.2 and the CFP-coding transgene into intron 7 of the Wee gene (Chr15.5). We examined the expression of the genes flanking the integration sites as well as the Glis3 and Wee genes where the transgenes were inserted.
They are all expressed at the same level in low- and high-reporter-gene-expressing cell fractions (S4 Fig.), suggesting that both loci are fully active in the cell type used in this study.

Fluctuation analysis at low temporal resolution

Three types of subclones were derived from the original clonal population by isolating individual cells using a cell sorter: subclones originated from cells with high YFP fluorescence, from cells with low YFP fluorescence, and from cells that did not express YFP. Overall, 12 subclones derived from high-expressing cells, 18 from low-expressing cells and 14 from negative cells were analyzed. CFP fluorescence was not taken into account for the cloning. The individual subpopulations derived from the isolated cells were cultured for 55 days and periodically analyzed using flow cytometry. After an initial period of 2 weeks required for the expansion of the subclones, each population was analyzed once a week by flow cytometry. Although the average YFP fluorescence in the subclones varied substantially, all of them were characterized by the general tendency of a slow relaxation to the original parental average (Fig. 2A). However, the process was strikingly slow, requiring many cell generations. Three weeks after the isolation of the founder cells, it was still possible to recognize if a population was derived from a high-, low- or non-expressing cell. The normalized variance of the fluorescence, calculated as the ratio of the variance and the square of the mean (NV = σ²/μ²), remained constant throughout the experiment in the high- and low-expressing subclones (Fig. 2B). This shows that the variation of the expression level is a steady and continuous process that results in the increase of the overall heterogeneity with the expansion of the population.
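The normalized variance used here is simply the squared coefficient of variation and can be computed directly from a list of per-cell fluorescence values; a minimal sketch (the function name is ours, not the paper's):

```python
import statistics

def normalized_variance(values):
    """NV = variance / mean^2, i.e. the squared coefficient of variation."""
    mu = statistics.fmean(values)
    return statistics.pvariance(values, mu) / mu ** 2
```

Because NV is scale-free, it lets clones with very different average fluorescence be compared on the same footing, which is why it stays flat even as the subclone means relax toward the parental average.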
In subclones derived from expressing cells, a fraction of non-expressing cells was systematically observed, indicating that complete and mitotically stable silencing of the transgene also occurred in some cells with a detectable frequency (Fig. 2C). This frequency of silencing was higher in subclones derived from low- compared to high-expressing cells. One subclone derived from a low-expressing cell became negative by the end of the experiment (Fig. 2A). The subclones derived from initially YFP-negative cells mirrored the behavior of the populations derived from expressing cells. In all but one of the 14 analyzed subclones we observed a varying proportion of YFP-expressing cells (Fig. 2C). This suggests that spontaneous activation of the transgene occurred with a low but detectable frequency and that the active state was transmitted through mitosis. The CFP fluorescence was also analyzed in the subclones. Since they were established on the basis of the YFP fluorescence of the founder cells, their original CFP fluorescence level was not known. The subsequent flow cytometry analysis of the subclones showed that the fluctuation of the CFP fluorescence also followed slow relaxation dynamics toward the original parental average (not shown) with rare switch-on and switch-off events (Fig. 2D). Despite the similar kinetics, these events did not correlate with the YFP fluorescence, further demonstrating the independence of the two transgenes.

[Figure 2 legend, panels B-D: B. The initially high NV in populations derived from non-fluorescent cells is the consequence of the re-expression of YFP fluorescence in a small fraction of cells. C. The fraction of expressing cells in the populations derived from non-fluorescent, low- or high-expressing founders. The varying proportion of expressing cells in the populations derived from non-expressing founders suggests re-expression of the reporter gene is not uniform in these clones. The populations originated from low-expressing founders contain varying proportions of cells that have silenced the reporter gene, while silencing is less frequent in the populations derived from high-expressing founders. D. The expression/silencing of the CFP-coding reporter gene is independent of the YFP-coding reporter gene, as indicated by the varying fractions of CFP-expressing cells in the same subclones presented on Fig. 2A to Fig. 2C. All but one of the YFP-negative cells were positive for CFP (left) on the day of cloning; one low YFP-expressing (middle) and four high YFP-expressing (right) clones derived from cells that did not express CFP.]

Fluctuation analysis at high temporal resolution

Next we sought to characterize the fluctuations at higher time resolution using time-lapse microscopy. This approach allowed us to follow individual cells and their descendants for 3 to 4 cell cycles (up to 120-130 hours at a resolution of 1 image per 10 min). A representative movie can be found in the Supplementary files (S1 Movie) and a snapshot is shown in Fig. 3. It was particularly easy to recognize the cells belonging to the same lineage even after several divisions, because the mother cells and their siblings have essentially the same color, indicating similar levels of the two fluorescent proteins. High-expressing cells gave rise to high-expressing lineages of cells and, reciprocally, low-expressing cells produced low-expressing daughter cells. This illustrates the low fluctuation of YFP and CFP fluorescence at the time scale of the time-lapse analysis. These observations are in accordance with the data obtained on the longer time-scale using cytometry and illustrate well that the cells are able to maintain the average level of the reporter proteins close to that of the cell they derived from three to four generations earlier. To better illustrate this essential point, we used an automatic image analysis to identify and track the cells and their descendants. The total and mean intensities of the YFP and CFP fluorescence in single cells were recorded. The quantitative analysis of the cells confirmed the qualitative observations. The total fluorescence of both reporter proteins increased progressively together with the cell volume during the cell cycle. During the first half of the cell cycle the increase was modest but it steepened during the second half (Fig. 4). The total fluorescence dropped roughly to half at each division as the cell mass was divided by two. However, the cells conserved a similar average fluorescence because the cell volume also increased progressively during the cell cycle and dropped to half at division. As shown earlier, the fluorescence level in the cells correlated with the mRNA level. However, the within-lineage stability of the fluorescence level was similar in low- and high-expressing cells, suggesting that the "memory" mechanism responsible for the transmission functioned independently of the transcription. On rare occasions, we observed cell lineages where the intensity of one of the two fluorescent proteins decreased continuously. One representative cell lineage is shown in Fig. 5. The mother cell displayed typical behavior, but the YFP fluorescence started to decrease simultaneously in the daughter cells and further decreased in the subsequent generations. This continuous decrease suggests that the transcriptional silencing of the reporter gene presumably occurred in the mother cell and was transmitted stably over the division; the already synthesized mRNA-s and fluorescent proteins became diluted over several cell generations, resulting in a gradual decrease. The ratio of such silencing events was only 1-2%. The CFP level, however, remained stable in the same cell lineage.
Computer simulations

The observations on both short and long time-scales show slow reporter gene expression level variation over several cell generations. This global kinetics of the fluorescence fluctuations reflects the overall outcome of the gene expression process, which includes production and degradation of the YFP and CFP proteins. However, the impact of the various steps on the overall kinetics may differ substantially. In order to discriminate between the relative impacts of the different steps on the overall clonal stability of the gene expression level in our experimental system and to formulate experimentally testable hypotheses, we performed computer simulations using a 'random-telegraph model' modified to include regular cell replication events during which the mRNA and protein quantities are equally divided between the mother and daughter cells. More precisely, our model simulates the levels of proteins in an individual cell as a function of chromatin opening/closing rates, frequency of transcriptional bursts during the open chromatin periods, rates of mRNA and protein production and degradation, and frequency of division events. The main conclusion emerging from the simulations concerns the strong within-lineage correlation, or "memory effect", of the expression levels. The long mRNA and protein half-lives appear as key parameters for the maintenance of the expression level over several cell generations. When the half-life of the protein was reduced in the model, the memory effect disappeared and the autocorrelation of the fluorescence levels over time decreased drastically (Fig. 6). This observation can be interpreted as follows: the amount of fluorescent protein in a cell is determined by the balance between synthesis, degradation and regular halving during cell division. The synthesis rate depends on the slow chromatin opening/closing kinetics and the faster transcription/translation reactions.
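A telegraph model with periodic divisions of this kind can be sketched in a few dozen lines. The version below is our own minimal Gillespie-style reimplementation with illustrative rate constants; it is not the parameter set or code used in the paper (which is described in S1 File):

```python
import random

def simulate_lineage(rng, t_end=3000.0, t_div=600.0,
                     k_on=0.002, k_off=0.002,   # chromatin opening/closing rates (1/min)
                     k_m=0.1, g_m=0.005,        # mRNA synthesis (open state) and decay
                     k_p=0.05, g_p=0.0005):     # protein synthesis and decay
    """Telegraph gene with periodic divisions; returns (time, protein) at each division."""
    gene_on, m, p = False, 0, 0
    t, next_div, trace = 0.0, t_div, []
    while t < t_end:
        rates = [
            k_off if gene_on else k_on,  # chromatin state toggle
            k_m if gene_on else 0.0,     # transcription only from open chromatin
            g_m * m,                     # mRNA degradation
            k_p * m,                     # translation
            g_p * p,                     # protein degradation
        ]
        total = sum(rates)
        dt = rng.expovariate(total) if total > 0 else float("inf")
        if t + dt >= next_div:
            # Division: each molecule goes to the tracked daughter with probability 1/2.
            t, next_div = next_div, next_div + t_div
            m = sum(rng.random() < 0.5 for _ in range(m))
            p = sum(rng.random() < 0.5 for _ in range(p))
            trace.append((t, p))
            continue
        t += dt
        r = rng.random() * total
        for event, rate in enumerate(rates):
            r -= rate
            if r < 0:
                break
        if event == 0:
            gene_on = not gene_on
        elif event == 1:
            m += 1
        elif event == 2:
            m -= 1
        elif event == 3:
            p += 1
        else:
            p -= 1
    return trace

trace = simulate_lineage(random.Random(1))
```

Re-running with larger `g_m` and `g_p` mimics the destabilized clones: the protein count then relaxes between divisions, and the division-to-division correlation of `p` disappears.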
When degradation is slow, the cell may reach the end of the cycle and divide before the protein level reaches equilibrium. As a result, the protein level in the cell (reflected by the total fluorescence in the experiments) is essentially limited by the regular dilutions due to cell division. Hence, the slow variations of the protein level predominantly reflect the fluctuations of the chromatin on-off dynamics, which may result either in the halting of the synthesis process during possibly long-lasting closed chromatin states or in open, transcriptionally active chromatin states. When the mRNA and protein degradation rates are high, the equilibrium level is low and can be reached before the division event. The rapid variations of this level now reflect the bursty fluctuations of transcription/translation. We tested these predictions experimentally by constructing new clones with short-lived mRNA and proteins (see below). The computer simulations also revealed the existence of stable subpopulations with one or both reporter genes inactivated over several generations, this being the primary consequence of the low chromatin on-off switch rates (S5 Fig.). Rare switches lead to long periods of closed chromatin when transcription is impossible and allow the decrease of fluorescence by gradual dilution over several cell cycles, as observed in the time-lapse records (Fig. 5). This prediction on the rare chromatin switches is also experimentally testable (see the next section).

The chromatin on/off rates are low

In order to explore the predictions of the model, we set up two types of experiments. First, we sought to measure the frequency of transcriptional bursts. To do this, we performed whole-cell photobleaching studies. The experimental strategy was founded on the idea that a rapid increase of fluorescence is expected following a transcriptional burst during the phase of protein synthesis.
Previous time-lapse measurements failed to detect the increase, presumably because the increment of fluorescence was too small to be easily differentiated from the already high fluorescence of the highly stable YFP and CFP expressed by the cells. To overcome this difficulty, we reduced the fluorescence of the YFP to 20% of the initial level by photobleaching. The intensity of CFP was kept unchanged as a control and used to normalize the YFP intensity. The YFP fluorescence of randomly selected cells was bleached and the cells were monitored for 5 hours. The normalized YFP/CFP results are shown in Fig. 7. Although the low number of cells analyzed did not allow a precise statistical estimation, it was clear from these results that only approximately half of the cells in the population started to recover the YFP fluorescence immediately after the bleach, indicating active transcription and translation of the YFP reporter gene. The other half behaved exactly as the control cells (S6 Fig.) treated with an inhibitor of transcription.

[Figure 6 legend: Computer simulation of the effects of protein stability on the evolution of the total fluorescence in a clonal population, during a period in which the chromatin state is constantly open. Results obtained with long half-lived mRNA and proteins are shown on the left and those with short half-lived mRNA and proteins on the right. Panels A (resp. B) through G (resp. F) represent changes in the number of molecules in a single cell and its daughter cells during 7 divisions for stable (resp. unstable) molecules; all times are in minutes. Note the steady increase of the protein number during the cell cycles and the rapid decrease at divisions (panel C), and the irregular fluctuations for a similar simulation with unstable molecules (panel D). Panels E and F represent the number of proteins normalized by a hypothetical volume increasing linearly from 1 to 2 during the cell cycle (that is, the mean fluorescence level). Stable molecules lead to low stochasticity (panel E, NV = 0.017) during the open chromatin state; unstable molecules reveal a highly stochastic behavior (panel F, NV = 0.11) even though chromatin remains stably open. This difference is further exemplified on panels G and H, which show the autocorrelation function calculated on the normalized protein concentration on the basis of very long simulated data (120 divisions). Note the rapid loss of autocorrelation in cells with unstable mRNA and proteins (panel G) compared to cells with stable mRNA and proteins (panel H). The detailed description of the model and the parameters can be found in S1 File.]

The long mRNA and protein half-lives determine cellular memory

Next, we sought to clarify the effect of the long mRNA and protein half-lives. To do this, we created new reporter gene-expressing cell lines. We modified the yellow and cyan gene constructs in a way to decrease the reporter half-lives. We used a destabilizing AU-rich element (ARE) and the PEST sequence of the mouse ornithine decarboxylase to reduce the mRNA and protein half-lives, respectively. The experimental measurements indicated that the half-life of the mRNA decreased to 1.5 h for YFP and 1.9 h for CFP, less than half of the corresponding half-life in the cells used in the first part of this study. In the case of the proteins, the introduction of the "PEST" sequence had an even stronger effect: the half-life of YFP decreased from an average of 43 h to 6 h 30 and that of CFP from 29 h 30 to 5 h 40. Many independent clones have been established. All of them displayed qualitatively similar fluctuating gene expression, suggesting that the reporter gene's integration site did not significantly impact the temporal dynamics of fluctuations. Two representative clones with single insertion sites for both transgenes were analyzed in detail.
As expected on the basis of the short half-life of the mRNA-s and proteins, the average level of YFP and CFP fluorescence measured by flow cytometry was found to be 100 times lower than the levels observed in cell clones with the stable proteins. Despite the large difference in the expression levels, the snapshot of the expression profile was similar in the two types of clones, with cells that expressed only one fluorescent protein or that were negative for both. Time-lapse video microscopy analysis revealed that the fluorescence level of both reporter proteins now varied substantially and with a high frequency (Fig. 8 and S2 Movie). The period of the changes was shorter than the cell cycle, so that the same cell could change fluorescence between two divisions. A simple visual inspection was sufficient to conclude that no lineage-specific correlation of the fluorescence can be seen in these clones. On the basis of the time-lapse records we quantified the fluorescence fluctuations in about 120 cells from the two clones and calculated the autocorrelation functions for YFP and CFP in each cell and the time t1/2 for the autocorrelation to drop to 50%. Although the cell-to-cell differences are important, for both YFP and CFP t1/2 was less than 2 h in the majority of cells, illustrating the loss of the "cell memory" observed previously (Fig. 9). The half-lives of the two proteins correlated well within the same cell, suggesting that they are degraded through the same pathway (Fig. 9). Therefore, the rapid fluctuations of the fluorescence were made possible by the short lifetime of the mRNA-s and the corresponding fluorescent proteins and presumably directly reflect the fluctuations in the transcriptional activity of the genes.

Discussion

In the present study we have analyzed the temporal stability of gene expression in clonal populations of cells with two independent reporter genes having identical promoters but coding for different fluorescent proteins.
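The per-cell autocorrelation analysis described above can be reproduced on any regularly sampled fluorescence trace. The functions below are a minimal sketch of that computation (our own naming; `dt` is the sampling interval, e.g. in hours), not the paper's analysis code:

```python
def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a regularly sampled fluorescence trace."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    acf = []
    for lag in range(max_lag + 1):
        cov = sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n - lag)) / (n - lag)
        acf.append(cov / var)
    return acf

def half_decorrelation_time(acf, dt):
    """Smallest lag (in units of dt) at which the autocorrelation falls below 0.5."""
    for lag, r in enumerate(acf):
        if r < 0.5:
            return lag * dt
    return None  # the trace stayed correlated over the whole window
```

A slowly drifting trace (stable proteins) keeps the autocorrelation near 1 for many lags, so t1/2 is long or undefined within the observation window; a rapidly fluctuating trace (destabilized clones) crosses 0.5 within a few sampling intervals.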
As described earlier, the transcriptional activity of the two copies of the identical promoter was significantly different even in the same individual cells. This resulted in a substantial population-level phenotypic heterogeneity of the reporter gene expression levels within the same clone. Different clones differed by the average expression level of the population, demonstrating the role of the reporter gene integration site in determining the average rate of transcription. However, the temporal behavior of these clones was similar. In the present study, we analyzed one representative clone in detail. Contrary to our expectations, the gene expression levels in individual cells fluctuated very slowly. Daughter cells displayed almost identical levels of yellow and cyan mean fluorescence to those of the maternal cell, resulting in easily recognizable lineages of cells with similar expression levels within the same population (Fig. 4 and S1 Movie). When individual cells were sub-cloned from the same clonal population, it took several weeks and many cell divisions for the initially different fluorescence levels in the subclones to gradually regress to the average of the initial population (Fig. 2A). We are therefore dealing with two opposite processes: the first is a "memory" process that tends to maintain the same expression level of the proteins over many cell generations, while the second process introduces variation and gradually erases the memorized expression level. Our observations clearly show that in our model mRNA and protein stability are key players in the memory process. The average half-life of the CFP and YFP proteins exceeded the length of two cell cycles. The protein molecules synthesized in the maternal cell have a good chance of still being present in granddaughter cells. As a result, the high protein stability filters out the rapid fluctuations generated by the noisy transcription bursts.
The dominating slow fluctuations of the fluorescence level reflect the chromatin dynamics that determine the transcriptional potential of the reporter gene. When cell clones coding for short half-life mRNAs and proteins, but transcribed from identical promoters, were established, the memory effect was lost. We observed rapid fluctuations of the fluorescence levels in these cells, with a characteristic time scale of a few hours. These rapid variations presumably reflect the high transcription burst frequency of the CMV promoter that was undetectable in the clones with the stable reporter proteins. Therefore, these observations demonstrate that, by buffering the rapid fluctuations due to transcriptional bursts, the high stability of the messenger and the protein product contributes to the conservation of the gene expression level over many cell divisions. This role is traditionally attributed to chromatin-related epigenetic mechanisms. Indeed, chromatin-based mechanisms are known to transmit the transcription state of a gene over cell divisions, and this is obviously the case in our cell model as well. In the cells with the stable YFP and CFP proteins, the mRNA levels correlate with the corresponding fluorescence levels, indicating that the overall synthesis rate is determinant for the protein abundance. YFP and CFP fluorescence levels may differ substantially within the same cell. The difference in transcription between the two reporter genes is not due to the promoter, because an identical promoter drives the two genes. It is the chromatin structure at the integration site that determines whether transcription can occur. Chromatin structure also accounts for the transmission of the silent state during mitosis. This is particularly evident in low-expressing cells. Cells derived from a low-expressing founder remain low expressing for many generations, suggesting that the low rate of mRNA synthesis was transmitted through the cell division.
One reporter gene can even become silenced while the other reporter gene remains fully active. This total independence of the two genes driven by identical promoters can only be explained by the dominant influence of the chromatin at the integration site on the transcription potential. However, we observed that the memory of the overall gene product level is lost in cell clones with short half-lived mRNAs and proteins, in spite of the fact that the same CMV promoter was used as in the long half-lived mRNA/protein-expressing clones. The difference in the memory is therefore correlated with the stability of the gene products and not with the transcriptional activity. Our observations extend the concept of cellular memory by showing that the conservation of a stable phenotype in a cellular lineage may largely depend on the very slow turnover of the fluorescent proteins. The volume and the mass of the maternal cell increase about twofold by the end of the cell cycle and this material is roughly halved during the division. In order to maintain the same level of a protein (in terms of number of molecules per volume) over the cell cycle and in the daughter cells after division, the total amount of the protein has to double by the end of the cycle. Hence, to achieve stability of the phenotype in proliferating cells, the protein synthesis rate has to exceed the degradation rate. This can be achieved even at a low transcriptional burst frequency if the stability of the gene product is high. This is exactly what we see, but only in the long half-lived mRNA/protein-expressing cells (Fig. 3); the total YFP and CFP fluorescence, reflecting the total number of fluorescent protein molecules of each species, increases during the cell cycle and falls at the moment of division. It is important to note that stability is not an intrinsic property of the protein molecule itself.
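The bookkeeping in the paragraph above can be checked with a toy deterministic model: constant synthesis, first-order degradation, and halving at each division. The synthesis rate and cycle length below are hypothetical; only the two protein half-lives (43 h and 6 h 30) are taken from the text:

```python
import math

def periodic_start_level(k, d, T, cycles=200):
    """Start-of-cycle amount after many cycles of synthesis at rate k,
    first-order decay at rate d, and halving at each division (period T)."""
    n = 0.0
    for _ in range(cycles):
        n_end = n * math.exp(-d * T) + (k / d) * (1 - math.exp(-d * T))
        n = n_end / 2.0           # division halves the pool
    return n

k, T = 10.0, 20.0                 # molecules/h and cycle length in h (hypothetical)
for label, half_life in [("stable protein (t1/2 = 43 h)", 43.0),
                         ("unstable protein (t1/2 = 6.5 h)", 6.5)]:
    d = math.log(2) / half_life
    n0 = periodic_start_level(k, d, T)
    n_end = n0 * math.exp(-d * T) + (k / d) * (1 - math.exp(-d * T))
    inherited = n0 * math.exp(-d * T) / n_end  # molecules surviving a full cycle
    print(f"{label}: end/start = {n_end / n0:.3f}, "
          f"inherited fraction at division = {inherited:.2f}")
```

In the periodic state the pool doubles over each cycle regardless of stability, so the concentration is conserved; what stability changes is how much of the daughter's pool is inherited from the mother (roughly a third for the stable protein, a few percent for the unstable one), which is the "memory" carried across divisions.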
As a substrate of degrading enzyme(s), it is determined by the affinity (measured by the Michaelis constant) and the frequency of interactions between the protein and the degrading enzyme, but also by the frequency and affinity of interactions with partner proteins that may protect it from degradation. A stable cell phenotype in the cell lineage means that the concentrations of the key proteins that bring it about remain stable over the cell divisions. It is interesting from this viewpoint that proteins of similar stability were found to belong to the same functional category. This observation can be interpreted by considering that the proteins of the same functional network frequently interact physically with each other. These interactions are essential for the stability of the network as a whole, but may account for the stability of the individual proteins as well. It appears therefore compelling that mutual stabilization of the proteins through frequent interactions may be an important factor in phenotypic stability and cellular memory. If the high stability of the proteins contributes to the maintenance of the phenotype in the cells, the opposite may also be true; it may hamper the cell's capacity to respond quickly in case of environmental stress. Therefore, protein stability may be an important target of regulation; rapid degradation of functional proteins could be a first step toward a rapid phenotypic conversion. Although the CFP and YFP fluorescence levels remained essentially stable over several cell generations under normal growth conditions, a gradual drift was observed. Independently of the starting level in the founder cell, the average level and the distribution of the fluorescence in the population derived from it gradually approximated those of the original clonal population from which the founder cell was isolated. The relaxation to the average is a hallmark of stochastic fluctuations.
It is likely that it results from random variations at all stages, because chromatin dynamics, gene transcription, protein production and degradation are all noisy processes. This was particularly evident in the case of the repressed chromatin state. The spontaneous stable reactivation resulting in a mosaic expression pattern in a clonal population was a manifestation of the chromatin noise (Fig. 2C). Another factor that could contribute to the generation of heterogeneity was the length of the cell cycle. Cells with a long cell cycle synthesized more proteins and gave rise to daughter cells with higher fluorescence than those that divided early. As a result, after a few cell divisions, the initially small differences accumulated and contributed substantially to the heterogeneity in the population. Overall, the comparison of cell clones expressing high- or low-stability reporter proteins raises a number of important points. The extent of variation is similar in both types of clones; these cells visit all possible expression-level states from the lowest (non-expressing) to the highest (maximally expressing). However, the typical time scale of these variations is different in the two types of clones, suggesting that they depend on different mechanisms. In the high-stability protein-expressing clones the expression level and the fluctuations essentially reflect the effect of chromatin dynamics around the transgene integration sites. Integration sites are unique and independent; as a corollary, the characteristic expression levels and fluctuations differ even within the same cells. The memory effect due to the chromatin and the long half-life of the mRNAs and proteins results in stable subpopulations with different expression patterns even within populations of isogenic cells.
Clearly, in our system at least, the chromatin fluctuations are slow, with rare switches between the repressed closed and the transcriptionally competent open chromatin states, leading to stable subpopulations that repress none, one or both of the two reporter genes. A similar situation has been reported in a different cell model. By contrast, in the low-stability protein-expressing cells the fluctuations essentially reflect the noise arising during the initiation of transcription, mRNA synthesis and transport processes. The high stability of the reporter protein in the first type of clones successfully buffered these fluctuations, but the short half-life of the mRNAs and proteins reveals them. The study of the temporal dynamics of reporter gene expression variation at different time scales provides a proof of principle that the epigenetic memory of phenotypic stability in a cell lineage emerges from the joint action of the processes of gene transcription, protein synthesis, degradation and cell division. Protein stability is crucial for the capacity of the cell to maintain a stable level of gene expression and raises the important possibility that it could be a target for efficient regulation when a rapid change in the gene expression level is required. Although this conjecture has been discussed recently, the impact it may have on the epigenetic inheritance of cellular phenotypes during cell divisions remains underestimated. Indeed, our observations suggest that the stability of cellular mRNAs and proteins confers on a cell the capacity to conserve a stable gene expression level and transmit it over multiple generations even if transcription and translation are highly fluctuating. In addition, reducing short-term fluctuations through the high stability of the molecules can be considered a simple way of reducing transcription noise at a low energy cost.
Indeed, it takes less energy for the cell to maintain the constant level of a protein by not degrading the molecules already present than by continuously re-synthesizing them.

The cell clones used were originally described in. They were derived from 911 cells, a human embryonic retinoblastoma cell line transfected with a plasmid containing the enhanced cyan fluorescent protein (CFP) reporter gene under the control of the cytomegalovirus (CMV) promoter. A population of stable CFP-expressing cells was isolated using a cell sorter and transfected with a second construct containing the enhanced yellow fluorescent protein (YFP) reporter gene under the control of an identical CMV promoter. Double cyan/yellow fluorescent cells were isolated using a cell sorter and expanded. Two of the eight clones described in Neildez et al. containing a single integrated copy of each transgene were selected for further analysis in this study. The integration sites of both transgenes in each clone were identified by splinkerette PCR.

Flow cytometry analysis and cell sorting

Cells were detached from the culture dish using trypsin/EDTA (Gibco), mixed with DMEM containing 10% FCS and centrifuged after sifting through a 40 µm cell strainer to remove cell clumps (Becton Dickinson). The cell pellet was suspended in 1x PBS (Gibco) containing 2% FCS. Dead cell staining was performed with propidium iodide at a final concentration of 2 µg/ml (Sigma) and cells were either analysed or used for sorting. Flow cytometry acquisition and analysis were performed using an LSR II cytometer and the FACS DIVA software (Becton Dickinson). Typically, CFP and YFP were excited at 405 nm and 488 nm respectively and fluorescence collected using a 525/50 and a 530/30 filter, respectively. In order to compare results from one day to another, calibration beads were used each day. In each experiment, 10^4 PI-negative cells were analysed. Cell sorting was performed using a MoFlo cell sorter (Cytomation).
CFP and YFP proteins were excited using a 445 nm and a 488 nm (100 mW) excitation source, respectively. Fluorescence was collected using a 485/25 nm filter and a 575/25 nm filter. Cells were collected in a 96-well culture plate (Becton Dickinson) containing 100 µl of complete medium and incubated in standard conditions. Two days later, 50 µl of complete medium were added to each well and the presence of cells was checked using an epifluorescence microscope. When the cell density had reached 80%, cells were transferred to a 12-well, then a 6-well plate and finally to a 25 cm² dish where they were routinely diluted and expanded.

Time-lapse imaging

Cells were seeded in a 35 mm culture dish (cat. P35G-0-14-C, MatTek Corporation) at low density (10³ cells/cm²). The next day, the dish was set in a CO₂-regulated chamber under a confocal microscope (Zeiss LSM 510 Meta) for imaging. The whole microscope system is enclosed in a thermo-regulated structure. The dish was incubated for 1 hour under the microscope in order to stabilize the temperature of the whole system before acquisition. CFP and YFP fluorescence were acquired using a 10x dry objective, at 457 nm and 514 nm excitation wavelengths respectively, and fluorescence was collected with 480-520 IR and 535-590 IR emission filters, respectively. The time delay between consecutive frames was set to 10 minutes and the image resolution to 1024x1024 with an 8-bit pixel depth.

Whole cell photo-bleaching

The culture dish with the cells was set up as for the time-lapse experiments. An image of YFP and CFP fluorescence was acquired before photo-bleaching with a 40x immersion objective. Then, a Region Of Interest (ROI) was drawn around each cell to be bleached at zoom 2x and the bleach performed. YFP fluorescence was bleached at 2x magnification with 20 pulses of a full-power 514 nm laser (100 mW, Argon). A time-lapse acquisition was then performed at zoom 1x with a time delay of 10 minutes for 6 hours.
In each experiment, we monitored bleached cells but also control cells that were not illuminated. Bleaching settings were optimized with regard to laser power, number of pulses and magnification. Typically, with these settings, 80% of YFP bleaching is achieved and cell viability is not significantly affected. When indicated, cells were treated with the transcription inhibitor DRB (5 mg/ml) for 30 min before photobleaching (Sigma) or the translation inhibitor cycloheximide (10 mM) 1 hour before photobleaching (Sigma).

Image Analysis and Data processing

Time-lapse images were automatically analysed using the freely available CellProfiler software. The image analysis pipeline includes image intensity normalization, cell segmentation, cell tracking and quantification of morphological, spatial and fluorescence features. Movies and mosaics were edited manually with ImageJ. The statistical analysis and graphical representations of the flow cytometry and image analysis data were performed using the "R" software together with the ggplot2 library.

Note the lack of recovery in both cases except a small and rapid initial increase due presumably to the termination of ongoing reactions or partial fluorescence recovery of some bleached molecules. doi:10.1371/journal.pone.0115574.s006 (TIFF)

S1 File. Supporting Materials and Methods. doi:10.1371/journal.pone.0115574.s007 (DOCX)

S1 Movie. Time-lapse movie of the cell clone expressing the stable YFP and CFP proteins. The YFP and CFP fluorescence was colored artificially in red and green for better visibility. The cells expressing both YFP and CFP are colored by the proportional mixture of the two colors. Note that the mother cells and their siblings have essentially the same color, indicating similar levels of the two fluorescent proteins. One image was recorded every 10 minutes for five days. doi:10.1371/journal.pone.0115574.s008 (MOV)

S2 Movie. Time-lapse movie of the cell clone expressing the unstable YFP and CFP proteins.
The YFP and CFP fluorescence was colored artificially in red and green for better visibility. The cells expressing both YFP and CFP are colored by the proportional mixture of the two colors. Note that the cells change fluorescence intensity during a single cell cycle. As a result, individual cell lineages cannot be tracked on the basis of the fluorescent protein expression level. One image was recorded every 20 minutes for 3 days. doi:10.1371/journal.pone.0115574.s009 (MOV)
Why Is Financial Management So Important in Business?
An organization's financial management plays a critical role in the financial success of a business. Therefore, an organization should consider financial management a key component of the general management of the organization. Financial management includes the tactical and strategic goals related to the financial resources of the business. Specific roles within financial management systems include accounting, bookkeeping, accounts payable and receivable, investment opportunities and risk.
When establishing any financial management system, a business needs to determine if the management of the system will occur in-house or if it will use an outside entity. Any accounting system should measure, identify, record and communicate all of the financial information about the organization. The foundation of an effective accounting system is good bookkeeping. A bookkeeper gets the complete and accurate financial information to the accountant. While the accounting system looks at the overall financial picture of the organization, bookkeeping deals with the specific transactions that take place on a day-to-day basis.
Accounts payable provides an organization with information about its accounts with suppliers, including the outstanding sums of money owed to those suppliers. Additionally, accounts payable will show the cost of items purchased, how the organization made payments in the past and details about each transaction. Accounts payable will also show the workflow and allow the business to approve invoices, update records and maintain an integrated document management system. Accounts receivable, on the other hand, records what customers owe the organization for products and services purchased. An accounts receivable system can keep track of invoices and payments, produce reminder letters for outstanding payments and calculate interest on balances owed. Additionally, accounts receivable can help the organization recover past-due accounts before they become bad debts.
Another aspect of the financial management system relates to finding opportunities that can complement or benefit the organization. A business can only exploit these opportunities if the organization finds them efficiently and effectively and has the ability to pay for the desired acquisitions. By carefully considering the different aspects of the financial management system, a business can evaluate its overall financial health and determine its ability to invest in potential opportunities.
A business also must carefully evaluate risk. A primary goal of the financial management system is to minimize risks for the organization by implementing strategies that help the business to counteract unforeseen liabilities. The financial management system should include adequate insurance for property, equipment and key employees. Additionally, budgeting for quarterly and yearly working capital helps to minimize potential financial risk for the organization. Further, controlling debt and establishing a credit system with suppliers and financial institutions helps to minimize financial risk by allowing the business operational flexibility in the event the business experiences cash flow problems.
Bass, Brian. "The Important Roles Within a Financial Management System." Small Business - Chron.com, http://smallbusiness.chron.com/important-roles-within-financial-management-system-31146.html. Accessed 20 April 2019. |
Scalable interaction design for collaborative visual exploration of big data. Novel input devices such as tangibles, smartphones and multi-touch surfaces have given impetus to new interaction techniques. In this PhD research, the main motivation is to study novel interaction techniques and designs that augment collaboration in a collocated environment. Furthermore, the main research aim is to take advantage of scalable interaction design techniques and tools that can be applied across a variety of devices, so as to help users work together on a problem with an abstract big data set, using visualizations in a collocated context.
Discussing the changed quality of life following open heart surgery, the psychosexual status of 100 pre- and postoperative patients (aortocoronary bypass operation vs. valve replacement) was assessed by standardized interviews. The preoperative interview showed only 9% of the patients having had sexual intercourse during the last six months, while 91% were sexually abstinent. As the main reasons for the latter, the patients named voluntary abstinence due to their illness (14%), negative advice from their doctor (22%) and no desire for sexual activity (33%). One year postoperatively the situation had changed clearly. Within the last six months 47% of the former patients were sexually active, while 52% remained inactive. The open heart operation, which improves life physically, can also be seen as improving quality of life with respect to the psychosexual life of the patients. The interview data moreover show the great importance of medical advice in questions relating to sexuality, which ought to be an absolutely necessary part of the dialogue with cardiological patients.
# config.py
import os
# Paths to the chest X-ray dataset and its train/validation/test splits.
DATA_PATH = "./data/chest_xray"
TRAIN_DATA = os.path.join(DATA_PATH, "train")
VALIDATION_DATA = os.path.join(DATA_PATH, "val")
TEST_DATA = os.path.join(DATA_PATH, "test")

# Directory where model checkpoints are saved.
CPKT = "./cpkt/"

# Training settings.
DEVICE = "/GPU:0"
BATCH_SIZE = 32
EPOCHS = 30

# Input image dimensions (height and width in pixels).
IMG_H = 200
IMG_W = 200
|
package data

import (
	"encoding/json"
	"io"
)

// NewReader returns a Reader that decodes a stream of JSON values from r.
func NewReader(r io.Reader) *Reader {
	return &Reader{decoder: json.NewDecoder(r)}
}

// Reader iterates over a stream of JSON-encoded Items, bufio.Scanner-style.
type Reader struct {
	decoder *json.Decoder
	error   error
	item    *Item
}

// Scan decodes the next Item from the stream. It returns false when the
// stream is exhausted or a decoding error occurred; check Err afterwards.
func (r *Reader) Scan() bool {
	if !r.decoder.More() {
		return false
	}
	if r.item == nil {
		r.item = new(Item)
	}
	r.error = r.decoder.Decode(r.item)
	return r.error == nil
}

// Err returns the decoding error encountered by Scan, if any.
func (r *Reader) Err() error {
	return r.error
}

// Item returns the most recently decoded Item. The returned pointer is
// reused by subsequent calls to Scan.
func (r *Reader) Item() *Item {
	return r.item
}
|
Hyperspectral techniques for water quality monitoring: application to the Sacca di Goro (Italy). The paper presents a comparison between an empirical algorithm and a physics-based model for the assessment of water compound concentrations from remote sensing hyperspectral data. For this purpose, a series of in situ measurements were carried out monthly, from June to October 2005, to spectrally characterize the water of the Sacca di Goro (Italy) at spatial (horizontal and vertical) and temporal (daily and seasonal) scales. The results obtained by applying the two different methods to the in situ data showed that an appreciable improvement is obtainable by considering the physical approach.
#!/usr/bin/env python
import modelwrapper
from options.test_options import TestOptions
import requests
import gzip
from PIL import Image
from io import BytesIO
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.parse as urlparse
class S(BaseHTTPRequestHandler):
def _set_headers(self):
self.send_response(200)
        self.send_header('Content-Type', 'image/png')
self.end_headers()
def gzipencode(self, content):
out = BytesIO()
f = gzip.GzipFile(fileobj=out, mode='w', compresslevel=5)
f.write(content)
f.close()
return out.getvalue()
def do_GET(self):
# Parse the given URL
o = urlparse.urlparse(self.path)
params = urlparse.parse_qs(o.query)
originalUrl = params["originalUrl"][0]
# Download the given image.
print("Downloadling " + originalUrl)
res = requests.get(originalUrl)
img = Image.open(BytesIO(res.content))
# img.save("./downloaded.png")
# Feed it to our neural network to get an ML-generated version.
img_fake = modelwrapper.getFakeImage(img)
# Create a stream in RAM to write the generated image as a PNG.
byte_io = BytesIO()
img_fake.save(byte_io, format='PNG')
# Encode it for a faster transfer (this also works without encoding).
content = self.gzipencode(byte_io.getvalue())
# Send the image to the client.
self.send_response(200)
self.send_header('Content-Type', 'image/png')
        self.send_header("Content-Length", str(len(content)))
self.send_header("Content-Encoding", "gzip")
self.end_headers()
self.wfile.write(content)
self.wfile.flush()
def do_HEAD(self):
self._set_headers()
def runServer(server_class=HTTPServer, handler_class=S, port=8080):
server_address = ('', port)
httpd = server_class(server_address, handler_class)
print("Starting httpd server...")
httpd.serve_forever()
if __name__ == '__main__':
# Use CycleGAN's option parser to let users configure the server as if
# it was just a CycleGAN/pix2pix model.
opt = TestOptions().parse()
# Hard-code test usage, so that the model doesn't spend time generating
# extra unneeded visuals.
opt.model = "test"
# Options from test_single.sh:
# opt.direction = "AtoB"
# opt.dataset_mode = "single"
# opt.netG = "unet_256"
# opt.norm = "batch"
# Load the model.
modelwrapper.setup(opt)
# Start the server.
runServer()
|
103 W high beam quality green laser with extra-cavity second harmonic generation. We demonstrated a 103.5 W green laser with extra-cavity second harmonic generation. The IR source was a high-power Q-switched Nd:YVO4 MOPA laser. A type I phase-matching LiB3O5 (LBO) crystal was used as the nonlinear crystal in the second harmonic generation. An average power of 103.5 W of 532 nm green laser light was obtained at a repetition rate of 60 kHz with beam quality factors of M²x < 1.44 and M²y < 1.23 in the orthogonal directions, corresponding to a peak power as high as 1.5 MW with a pulse-energy instability of less than ±4%. The optical frequency conversion efficiency from IR to green was up to 67%.
Hybrid Approach for Metamodel and Model Co-evolution

Evolution is an inevitable aspect which affects metamodels. When metamodels evolve, model conformity may be broken. Model co-evolution is critical in model-driven engineering to automatically adapt models to the newer versions of their metamodels. In this paper we discuss what can be done to transfer models between versions of a metamodel. For this purpose we introduce a hybrid approach for model and metamodel co-evolution that first uses matching between two metamodels to discover changes and then applies evolution operators to migrate models. In this proposal, the migration of models is done automatically, except for non-resolvable changes, where assistance is proposed to the users in order to co-evolve their models and regain conformity.

Introduction

In Model-Driven Engineering (MDE), metamodels and domain-specific languages are key artifacts, as they are used to define the syntax and semantics of domain models. Since in MDE metamodels are not created once and never changed again, but are in continuous evolution, different versions of the same metamodel are created and must be managed. The evolution of metamodels is a considerable challenge of modern software development, as changes may require the migration of their instances. Works in this direction already exist. Several manual and semi-automatic approaches for realizing model migration have been proposed. Each approach aims to reduce the effort required to perform this process. Unfortunately, in several cases it is not possible to automatically modify the models to make them conform to the updated metamodels. This is so because certain changes over metamodels require introducing additional information into the conformant model. In the literature, three general approaches to the migration of models exist: manual, state-based, and operator-based. Manual approaches are tedious and error prone.
State-based approaches, also called difference-based approaches, allow synthesizing a model migration based on the difference between two metamodel versions. In contrast, operator-based approaches allow incrementally transforming the metamodel by means of coupled operations which also encapsulate the corresponding model migration. They allow capturing the intended model migration already when adapting the metamodel. A major drawback of the latter approach has been the overly tight coupling between the tool performing the migration and the recorder tracking the changes made to the models. Usually, existing approaches try to find how best to accomplish model co-evolution. Essentially, we can define two main requirements: the correctness of the migration and minimizing the effort of migration by automating the process as far as possible. In this paper, we propose an alternative solution to model migration which combines state-based and operator-based principles to co-evolve models and metamodels. Our vision for resolving this problem is to generate evolution strategies together with their corresponding model migration strategies. We focus on including users' decisions during the metamodel and model co-evolution process to ensure the semantic correctness of evolved models. The rest of the paper is structured as follows. Section 2 gives an overview of basic concepts and describes the metamodel and model co-evolution problem. Section 3 presents our proposed approach for solving the model co-evolution problem. Section 4 reviews approaches proposed in the past and situates our solution. Section 5 presents some guidelines for implementing the proposed framework. Finally, Section 6 concludes and outlines future work.

Models and metamodels

In this section we present the central MDE definitions used in this paper. The basic assumption in MDE is to consider models as first-class entities. An MDE system basically consists of metamodels, models, and transformations.
A model represents a view of a system and is defined in the language of its metamodel. In other words, a model contains elements conforming to the concepts and relationships expressed in its metamodel. A metamodel can be given to define correct models. In the same way a model is described by a metamodel, a metamodel in turn has to be specified in a rigorous manner; this is done by means of meta-metamodels. This may be seen as a minimal definition in support of the basic MDE principle "Everything is considered as a model". The two core relations associated with this principle are called representation ("Represented by") and conformance ("Conform To"). In this respect, the Object Management Group (OMG) has introduced the four-level architecture, which organizes artifacts in a hierarchy of model layers (M0, M1, M2, and M3). Models at every level conform to a model belonging to the upper level. M0 is not part of the modeling world, as depicted in Fig.1, so the four-level architecture should more precisely be named the (3+1) architecture. One of the best known metamodels in MDE is the UML (Unified Modeling Language) metamodel; MOF (Meta-Object Facility) is the meta-metamodel of OMG that supports the rigorous definition of modeling languages such as UML.

Metamodel evolution and model co-evolution

Metamodels may evolve in different ways, due to several reasons: during design, alternative metamodel versions are developed and well-known solutions are customized for new applications. During implementation, metamodels are adapted to a concrete metamodel formalism supported by a tool. During maintenance, errors in a metamodel are corrected. Furthermore, parts of the metamodel are redesigned due to a better understanding or to facilitate reuse.
The addition of new features and/or the resolution of bugs may change metamodels, thus causing possible inconsistencies in existing models, which conform to the old version of the metamodel and may no longer conform to the new version. Therefore, to maintain consistency, metamodel evolution requires model adaptation, i.e., model migration; these two steps together are referred to as model and metamodel co-evolution. Metamodel and model co-evolution denotes a coupled evolution of metamodels and models, which consists in adapting (co-evolving) the models conforming to the initial version of the metamodel so that they conform to the target (evolved) version, preserving the intended meaning of the initial model where possible, as illustrated in Fig. 2. Furthermore, model adaptations should be done by means of model transformations. A model transformation takes as input a model conforming to a given metamodel and produces as output another model conforming to the evolved version of that metamodel. Metamodel changes fall into three categories:
- Non-breaking changes: changes occurring in the metamodel do not break the models' conformance to the metamodel.
- Breaking and resolvable changes: changes occurring in the metamodel do break the models, but can be resolved automatically.
- Breaking and non-resolvable changes: changes do break the models, cannot be resolved automatically, and require user intervention.
However, a uniform formalization of metamodel evolution is still lacking. The relation between metamodel and model changes should be formalized in order to allow reasoning about the correctness of migration definitions.

Logic programming

Logic programming is a programming paradigm based on formal logic. A program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer Set Programming (ASP), and Datalog.
In all of these languages, rules are written in the form of clauses (H :- B1, …, Bn). These clauses, called definite clauses or Horn clauses, are read declaratively as logical implications (H if B1 and … and Bn). Logic programming is used in artificial intelligence for knowledge representation and reasoning. We find this formalism very powerful for representing relationships between changes and, consequently, for inferring all possible evolution strategies from an initial set of changes. To the best of our knowledge, there is currently no approach that uses intelligent reasoning for defining model migrations. Therefore, we have integrated logic programming into our proposal to resolve the model co-evolution problem.

Proposed Approach

In this section we describe our proposal to ensure the co-evolution of models with their metamodels. The overall evolution and co-evolution process is presented in Fig. 3. Our approach is hybrid because it borrows techniques from state-based and operator-based approaches and also uses a reasoning mechanism from artificial intelligence. It contains four phases: detection of changes, generation and validation of evolution strategies, determination of migration strategies, and migration of models. In the first step, the differences between two metamodel versions are determined using a matching technique. In the second step, we use an inference engine to generate different evolution strategies by assembling atomic changes into possible compound ones. In the third step, we explore a library of operators to obtain different migration procedures, which are assembled to constitute migration strategies. In the last step, the user selects an evolution strategy and, consequently, the associated migration strategy is applied to a specific model conforming to the old version in order to obtain a new model conforming to the newer metamodel version.
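The inference step of the second phase can be approximated by a minimal forward chainer over Horn clauses. The following is an illustrative Python sketch under invented fact and rule names, not the paper's actual Prolog implementation:

```python
# Facts: atomic changes detected between two metamodel versions.
facts = {"add_class(sc)", "add_reference(c1, sc)", "add_reference(c2, sc)"}

# Each rule (H, [B1, ..., Bn]) is read as the Horn clause H :- B1, ..., Bn.
rules = [
    ("extract_superclass(sc, c1, c2)",
     ["add_class(sc)", "add_reference(c1, sc)", "add_reference(c2, sc)"]),
]

def forward_chain(facts, rules):
    """Derive every head whose body literals all hold (naive fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

derived = forward_chain(facts, rules)
print("extract_superclass(sc, c1, c2)" in derived)  # True
```

A real Prolog engine additionally handles variables and backtracking; this sketch only shows how a composite change falls out of its atomic constituents.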
Detection of changes

The detection of differences between models is essential to model development and management practices. Evolution from one metamodel version to the next can thus be described by a sequence of changes. Understanding how metamodels evolve, or discovering the changes that have been performed on a metamodel, is a key requirement before undertaking any migration operation on models to co-evolve them. We distinguish two ways of discovering changes: matching approaches and recording approaches. In our approach, to detect the set of changes performed on the older version of the metamodel in order to produce the new one, we use a generic matching algorithm. Whereas current generic approaches only support detecting atomic changes, some language-specific approaches also allow detecting composite changes, but only for one specific modeling language. Primitive differences between metamodel versions are classified into three basic categories: additions, deletions, and updates of metamodel elements. These differences represent elementary (i.e., atomic) changes. Composite or compound changes have already been considered in previous works, but we tackle the problem differently. We call an evolution strategy a possible sequence of changes, where changes are either elementary or composite. A set of composite changes is inferred from the detected set of atomic changes, using rules that define composite changes in terms of atomic changes. This mechanism is detailed in the following section.

Generation and validation of evolution strategies

Detected differences are represented as elementary changes specifying fine-grained modifications that can be performed in the course of metamodel evolution. There are a number of primitive metamodel changes, such as create element, rename element, delete element, and so on.
One or more such primitive changes compose a specific metamodel adaptation. However, this granularity of metamodel evolution changes is not always appropriate. Often, the intent of the changes may be expressed at a higher level: a set of atomic changes can together carry the intent of a composite change. For example, the generation of a common superclass sc of two classes c1 and c2 can be achieved through successive applications of a list of elementary changes, such as "Add_class sc", "Add_reference from c1 to sc", and "Add_reference from c2 to sc". One way to resolve the problem of identifying composite changes is to use operation recording, but this solution has some drawbacks. In our proposal, we use logical predicates in the Prolog language, with Horn clauses used to represent knowledge. We therefore formally characterize changes: detected atomic changes are represented as positive clauses (i.e., facts), and composite changes are specified by rules whose left-hand side contains the composite change and whose right-hand side contains the set of associated atomic changes. The applicability of a compound change can thus be restricted by conditions in the form of rules. According to this principle, we have formalized a knowledge base, with change definitions inspired from the literature. The knowledge base is used by the inference engine to generate possible evolution strategies. Finally, evolution strategies must be validated. This step consists of applying each evolution scenario defined by a strategy to the old input version of the metamodel: if it yields the newer input version, the tested strategy is valid and retained; otherwise it is rejected. The final output is a set of valid evolution strategies. Complex changes are specified through rules. As an example, we consider the extract superclass operation, in which a hierarchy is generalized by adding a new general class and two references from its subclasses.
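The validation step described above can be sketched as follows. This is a hypothetical Python fragment with a toy metamodel represented as sets of classes and references; all names are illustrative and not the paper's actual encoding:

```python
# Toy metamodel: a set of classes and a set of (source, target) references.
old_mm = {"classes": {"c1", "c2"}, "references": set()}
new_mm = {"classes": {"c1", "c2", "sc"},
          "references": {("c1", "sc"), ("c2", "sc")}}

def apply_change(mm, change):
    """Apply one (atomic or composite) change to a copy of the metamodel."""
    mm = {"classes": set(mm["classes"]), "references": set(mm["references"])}
    op, *args = change
    if op == "add_class":
        mm["classes"].add(args[0])
    elif op == "add_reference":
        mm["references"].add((args[0], args[1]))
    elif op == "extract_superclass":  # composite change
        sc, c1, c2 = args
        mm["classes"].add(sc)
        mm["references"] |= {(c1, sc), (c2, sc)}
    return mm

def is_valid(strategy, old_mm, new_mm):
    """A strategy is valid iff applying it to the old version yields the new one."""
    mm = old_mm
    for change in strategy:
        mm = apply_change(mm, change)
    return mm == new_mm

atomic = [("add_class", "sc"), ("add_reference", "c1", "sc"),
          ("add_reference", "c2", "sc")]
composite = [("extract_superclass", "sc", "c1", "c2")]
print(is_valid(atomic, old_mm, new_mm), is_valid(composite, old_mm, new_mm))
```

Here both the purely atomic scenario and the composite extract-superclass scenario reproduce the new metamodel version, so both evolution strategies would be retained.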
Determination of migration strategies

In this step we import techniques from operator-based approaches, using a library of operators. We thus specify a change as an evolution operation. An evolution operation can be either simple or composite, and every operation is defined through a set of parameters. We associate with it information about how to migrate the corresponding models in response to the metamodel evolution, forming a migration procedure. A migration procedure is encoded as a model transformation that transforms a model so that the new model conforms to the metamodel undergoing the change. Furthermore, we explicitly specify in migration procedures some assistance specifications for each change requiring additional information from the user. This makes our library different from those used in previous works: the library does not contain evolution steps, but only the migration procedures referenced by evolution operations. We take from the library the migration procedures corresponding to the changes in the evolution strategy; after their instantiation, we assemble them to constitute the complete migration strategy, which is associated with the evolution strategy. The final result of this step is a set of pairs (evolution strategy, migration strategy) specified to co-evolve input models conforming to the specified metamodel.

Migration

This phase takes as input an instance model conforming to the initial metamodel; this model is also called the user model. To transform the model to the newer version of the metamodel, one of the previously inferred evolution strategies is first considered. According to the chosen evolution strategy, the associated migration strategy is automatically generated and then applied to the input model. For breaking and non-resolvable changes, the system assists the user in establishing an adequate migration procedure by presenting alternative solutions.
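How the operator library couples evolution operations with migration procedures can be sketched as follows. This is a minimal Python illustration with invented names and a toy model representation; the paper's actual library encodes migration procedures as model transformations in a modeling framework:

```python
# Migration procedures: one per evolution operation (illustrative only).

def migrate_extract_superclass(model, sc, subclasses):
    """Coupled migration for extract_superclass: instances keep their types;
    we only record the new generalization links in the migrated model."""
    model = dict(model)
    model["hierarchy"] = model.get("hierarchy", []) + [(c, sc) for c in subclasses]
    return model

def migrate_rename_class(model, old, new):
    """Coupled migration for rename_class: retype every object of the class."""
    model = dict(model)
    model["objects"] = [(new if t == old else t, name)
                        for (t, name) in model["objects"]]
    return model

# The library maps each evolution operation to its migration procedure.
LIBRARY = {
    "extract_superclass": migrate_extract_superclass,
    "rename_class": migrate_rename_class,
}

def apply_migration_strategy(model, evolution_strategy):
    """Assemble and run the migration strategy for a chosen evolution strategy."""
    for op, *args in evolution_strategy:
        model = LIBRARY[op](model, *args)
    return model

old_model = {"objects": [("Attr", "size"), ("Class", "Person")]}
strategy = [("extract_superclass", "NamedElement", ["Class", "Attribute"]),
            ("rename_class", "Attr", "Attribute")]
new_model = apply_migration_strategy(old_model, strategy)
print(new_model["objects"])  # [('Attribute', 'size'), ('Class', 'Person')]
```

The migration strategy is obtained by instantiating the library procedures for the changes in the evolution strategy and composing them in order.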
Additionally, users can provide further information to complete the change on the model if necessary. For instance, if a new attribute must be initialized, the user is also requested to supply the initial value. If the user is satisfied with the resulting model, the process is complete; otherwise, the user can try again by selecting another proposed evolution strategy, and the process continues until the user is satisfied or no choice remains.

Implementation

In this section, we give details of the technical choices made to implement a prototype of the proposed framework. As meta-metamodel, we use Ecore from the Eclipse Modeling Framework (EMF). However, our approach is not restricted to Ecore, as it can be transferred to any object-oriented metamodeling formalism. For the definition of the rules specifying the knowledge base used to infer evolution strategies, we have adopted an adequate formalism for logic programming: Prolog. Prolog is chosen because, on the one hand, it is a knowledge representation language and, on the other hand, using inference rules eliminates the need to program the detection of eventual compound changes; this task is performed by Prolog's inference engine. Furthermore, Prolog interpreters have been developed in several languages, which facilitates the use of the Prolog formalism. The computation of the differences between metamodel versions is performed with the Eclipse plug-in EMF Compare. This tool provides algorithms to calculate the delta between two versions of a model and visualizes them using tree representations. EMF Compare is capable of detecting several types of atomic operations.

Related Works

In this section we give an overview of current metamodel and model co-evolution approaches and already implemented systems. Over the last few years, the problem of metamodel evolution and model co-evolution has been investigated in several works.
Currently, there are several approaches that focus on resolving the inconsistencies occurring in models after metamodel evolution; a classification of these model migration approaches has been proposed. This classification highlights three ways to identify the needed model updates: manually, based on operators, and by using metamodel matching. In manual approaches, updates are defined by hand. In operator-based approaches, metamodel changes are defined in terms of co-evolutionary operators; these operators jointly define the evolution of the metamodel and its repercussions on the models. Finally, in metamodel matching, versions of metamodels are compared and the differences between them are used to semi-automatically infer a transformation that expresses the model updates. Manual specification approaches such as Flock are very expressive and concise, and correctness is also assured, but they face difficulties with large metamodels since there is no tool support for analyzing the changes between the original and evolved metamodels. Operator-based approaches ensure expressiveness, automation, and reuse, and have been perceived as strong in correctness, conciseness, and understandability, but their weakness lies in determining which sequence of operations will produce a correct migration. Analysis of existing model co-evolution approaches, and the comparison results of some works, has yielded guidance for defining the requirements of our approach. To take advantage of both state-based and operator-based approaches, as discussed previously, we have proposed an alternative solution applying a hybrid approach to define model migration. The solution presented in this paper has a number of similarities with previously illustrated techniques, but it differs in that it takes as input the results of a matching process; it therefore permits evolving models produced with different tools.
Another strength of our solution is the proposed reasoning mechanism, which allows finding different evolution strategies and, consequently, different migration strategies. The proposed solution minimizes as far as possible the user effort required to migrate models: user intervention is limited to a control task at the end of the process to validate the results, which increases expressiveness and correctness.

Conclusion

In this paper we have proposed an alternative solution to automate the co-evolution of models and metamodels. Our proposal uses a hybrid approach that takes advantage of both state-based and operator-based approaches. The solution consists of using a library of coupled operations together with a knowledge base of change definitions. The benefits of this approach are numerous; notably, the degree of automation of co-evolution is increased compared with other techniques because, even for changes requiring specific information, we provide automatic model migration with user assistance. Moreover, our solution is independent of any particular modeling environment and is easily adapted to various modeling environments. Using an intelligent logic mechanism to infer compound changes and evolution strategies increases the effectiveness of our proposal and distinguishes it from existing works. However, the evaluation of the proposed framework has not yet been performed. For a complete validation, we will conduct case studies with industrial models. In the longer term, we want to study the possibility of extending our solution to support the representation of semantics in models and the preservation of semantics within the migration process.
/**
* An Inbound LTP Block - A Block received or in the process of being Received
*/
public class InboundBlock extends Block {
private static final Logger _logger =
Logger.getLogger(InboundBlock.class.getCanonicalName());
/**
* LTP Receiver States -- state of an Inbound Block
* Corresponding to RFC5326 8.2 Receiver State Machine.
*/
public enum LtpReceiverState {
/** Idle, closed, not active, cancelled */
CLOSED,
/** Receiving data segments, any color */
DS_REC,
/** Receiving Red data segments */
RCV_RP,
/** Receiving Green data segments */
RCV_GP,
/** Received EOB, awaiting full ack */
WAIT_RP_REC,
/** Send Cancel Request, awaiting Cancel Ack */
CR_SENT
}
// State of the LTP Receiver Machine ala RFC5326 8.2
private LtpReceiverState _ltpReceiverState = LtpReceiverState.CLOSED;
// List of ReportSegments issued on this Block; We keep these around since
// we need their information when we receive a ReportAckSegment so we
// know what was acked.
private ArrayList<ReportSegment> _reportSegmentsList =
new ArrayList<ReportSegment>(100);
// Map ReportSerialNumber to ReportSegment for faster lookup
private HashMap<ReportSerialNumber, ReportSegment> _reportSegmentsMap =
new HashMap<ReportSerialNumber, ReportSegment>();
// Serial number reported in a ReportSegment for an InboundBlock. Set to
// Random when first Segment of Block received. Incremented by 1 for each
// Resend of a ReportSegment.
private ReportSerialNumber _reportSerialNumber;
//
private int numberReceptionProblems = 0;
/**
* Constructor for an inbound Block whose first Segment is the given
* DataSegment.
* @param dataSegment Given DataSegment. Note that this is not necessarily
* the DataSegment with offset 0. It is merely the first DataSegment
* received.
* @throws JDtnException On various errors
*/
public InboundBlock(DataSegment dataSegment) throws JDtnException {
super(dataSegment);
_reportSerialNumber = new ReportSerialNumber();
}
/**
* Determine if this Inbound Block has been completely received; i.e.,
* <ul>
* <li> We have received EOB.
* <li> All Red Segments are contiguous with each other
* <li> All Red Segments have been "Acked", i.e., we have sent report
* segments covering all Red Segments and such report segments
* have received corresponding Report Acks.
* </ul>
* @return True if Inbound Block completely received
*/
public boolean isInboundBlockComplete() {
long expectedOffset = 0;
boolean eobSeen = false;
if (!_reportSegmentsList.isEmpty()) {
// Not all outstanding ReportSegments have been acked
if (GeneralManagement.isDebugLogging()) {
_logger.fine("Not Complete: Outstanding Report Segments");
}
return false;
}
for (DataSegment dataSegment : this) {
if (dataSegment.isEndOfBlock()) {
eobSeen = true;
}
if (dataSegment.isRedData()) {
if (!dataSegment.isAcked()) {
if (GeneralManagement.isDebugLogging()) {
_logger.fine("Not Complete: Report for this segment not acked");
}
return false;
}
if (dataSegment.getClientDataOffset() != expectedOffset) {
if (GeneralManagement.isDebugLogging()) {
_logger.fine("Not Complete: missing data");
}
return false;
}
}
expectedOffset += dataSegment.getClientDataLength();
}
if (eobSeen) {
if (GeneralManagement.isDebugLogging()) {
_logger.fine("Complete");
}
return true;
} else {
if (GeneralManagement.isDebugLogging()) {
_logger.fine("Not Complete: EOB not seen");
}
return false;
}
}
/**
* Determine if all Red Segments have been received. I.e.:
* <ul>
* <li> We have received EORP.
* <li> All Red Segments are contiguous with each other
* </ul>
* @return True if Inbound Block all red segments received
*/
public boolean isAllRedDataReceived() {
long expectedOffset = 0;
boolean eorpSeen = false;
for (DataSegment dataSegment : this) {
if (dataSegment.isEndOfRedPart()) {
eorpSeen = true;
}
if (dataSegment.isRedData()) {
if (dataSegment.getClientDataOffset() != expectedOffset) {
return false;
}
}
expectedOffset += dataSegment.getClientDataLength();
}
if (eorpSeen) {
return true;
} else {
return false;
}
}
/**
* Determine if all Green Segments have been received. I.e.:
* <ul>
* <li>We have received EOB
* <li>All Green Segments are contiguous with each other
* </ul>
* @return True if all green segments received for this block
*/
public boolean isAllGreenDataReceived() {
long expectedOffset = 0;
boolean eobSeen = false;
for (DataSegment dataSegment : this) {
if (dataSegment.isEndOfBlock()) {
eobSeen = true;
}
if (dataSegment.isGreenData()) {
if (dataSegment.getClientDataOffset() != expectedOffset) {
return false;
}
}
expectedOffset += dataSegment.getClientDataLength();
}
if (eobSeen) {
return true;
} else {
return false;
}
}
/**
* Complete this inbound Block by getting all of the Block's data in one
* place; either in a buffer if it's small enough, or in a File if its
* too large.
* @throws JDtnException on errors
*/
public void inboundBlockComplete() throws JDtnException {
// If the Block data length is past a Threshold, spill all Segment
// data to a file.
if (_dataLength > LtpManagement.getInstance().getBlockLengthFileThreshold()) {
// Spill all DataSegments to this block's file
_dataInFile = true;
_dataFile = Store.getInstance().createNewBlockFile();
spillSegmentDataToBlockFile();
} else {
// Else gather all Segment data to a buffer
_dataInFile = false;
_dataBuffer = new byte[(int)_dataLength];
for (DataSegment segment : this) {
gatherSegmentDataToBuffer(segment, _dataBuffer);
segment.discardData();
}
}
}
/**
* Gather the data for the given DataSegment to the given buffer at the
* DataSegment's clientDataOffset, for length given by DataSegment's
* clientDataLength.
* @param dataSegment Given DataSegment
* @param buffer Buffer to gather into
* @throws JDtnException On I/O errors if file operations are involved
*/
private void gatherSegmentDataToBuffer(DataSegment dataSegment, byte[] buffer)
throws JDtnException {
if (dataSegment.isClientDataInFile()) {
// Gather from segment file to buffer[]
FileInputStream fis = null;
try {
fis = new FileInputStream(dataSegment.getClientDataFile());
int nRead = fis.read(buffer, (int)dataSegment.getClientDataOffset(), dataSegment.getClientDataLength());
if (nRead != dataSegment.getClientDataLength()) {
throw new JDtnException("nRead " + nRead + " != amount requested " + dataSegment.getClientDataLength());
}
} catch (IOException e) {
throw new LtpException(e);
} finally {
try {
if (fis != null) {
fis.close();
}
} catch (IOException e) {
// Nothing
}
fis = null;
dataSegment.discardData();
}
} else {
// Gather from segment buffer to buffer[]
Utils.copyBytes(
dataSegment.getClientData(),
0,
buffer,
(int)dataSegment.getClientDataOffset(),
dataSegment.getClientDataLength());
}
}
/**
* Spill data from all DataSegments to this block's file.
* @throws LtpException on errors
*/
private void spillSegmentDataToBlockFile()
throws LtpException {
FileInputStream fis = null;
FileOutputStream fos = null;
if (_dataFile == null) {
try {
_dataFile = Store.getInstance().createNewBlockFile();
} catch (JDtnException e) {
throw new LtpException("spillSegmentDataToBlockFile()", e);
}
}
try {
fos = new FileOutputStream(_dataFile);
} catch (FileNotFoundException e) {
throw new LtpException(e);
}
try {
byte[] buffer = new byte[4096];
for (DataSegment dataSegment : this) {
if (dataSegment.isClientDataInFile()) {
// Spill segment file to block file
// Make sure the file exists
if (!dataSegment.getClientDataFile().exists()) {
_logger.severe("InboundBlock contains Segment in file " +
dataSegment.getClientDataFile().getAbsolutePath() +
" but that file doesn't exist");
_logger.severe(dataSegment.dump("", true));
throw new IOException("File " +
dataSegment.getClientDataFile().getAbsolutePath() +
" does not exist");
}
fis = new FileInputStream(dataSegment.getClientDataFile());
long remaining = dataSegment.getClientDataLength();
while (remaining > 0) {
int nRead = fis.read(buffer);
if (nRead <= 0) {
throw new LtpException("Read count returned " + nRead);
}
remaining -= nRead;
fos.write(buffer, 0, nRead);
dataSegment.discardData();
}
fis.close();
fis = null;
} else {
// Spill segment buffer to block file
fos.write(dataSegment.getClientData(), 0, dataSegment.getClientDataLength());
}
}
} catch (IOException e) {
throw new LtpException("Spilling block data", e);
} finally {
try {
fos.close();
} catch (IOException e) {
// Nothing
}
if (fis != null) {
try {
fis.close();
} catch (IOException e) {
// Nothing
}
}
}
}
/**
* Called when this Block is closed, giving the Block an opportunity
* to clean up. Clean up consists of:
* <ul>
* <li> Remove outstanding report segments and kill their Checkpoint timers.
* </ul>
*/
@Override
public void closeBlock() {
super.closeBlock();
removeOutstandingReportSegments();
}
/**
* Remove outstanding Report Segments and kill their RS Timers
*/
private void removeOutstandingReportSegments() {
while (!_reportSegmentsList.isEmpty()) {
ReportSegment reportSegment = _reportSegmentsList.remove(0);
_reportSegmentsMap.remove(reportSegment.getReportSerialNumber());
if (reportSegment.getRsTimerTask() != null) {
reportSegment.getRsTimerTask().cancel();
reportSegment.setRsTimerTask(null);
}
}
}
/**
* Dump the state of this object
* @param indent Amount of indentation
* @param detailed True if want verbose details
* @return String containing dump
*/
@Override
public String dump(String indent, boolean detailed) {
StringBuffer sb = new StringBuffer(indent + "InboundBlock\n");
sb.append(super.dump(indent + " ", detailed));
sb.append(indent + " Ltp Receiver State=" + _ltpReceiverState + "\n");
sb.append(indent + " Block ReportSerialNumber\n");
sb.append(getReportSerialNumber().dump(indent + " ", detailed));
if (detailed) {
sb.append(indent + " Outstanding Report Segments\n");
for (ReportSegment reportSegment : _reportSegmentsList) {
sb.append(reportSegment.dump(indent + " ", detailed));
}
}
return sb.toString();
}
/**
* State of the LTP Receiver Machine ala RFC5326 8.2
*/
public LtpReceiverState getLtpReceiverState() {
return _ltpReceiverState;
}
/**
* State of the LTP Receiver Machine ala RFC5326 8.2
*/
public void setLtpReceiverState(LtpReceiverState ltpReceiverState) {
this._ltpReceiverState = ltpReceiverState;
}
/**
* Get an Iterator over the list of outstanding ReportSegments issued by
* this InboundBlock.
* @return Iterable over the outstanding ReportSegments
*/
public Iterable<ReportSegment> iterateOutstandingReportSegments() {
return _reportSegmentsList;
}
/**
* Get the outstanding Report Segment referenced by given ReportSerialNumber.
* @param reportSerialNumber Given ReportSerialNumber.
* @return The ReportSegment with the given serial number, or null if none
*/
public ReportSegment getOutstandingReportSegment(ReportSerialNumber reportSerialNumber) {
return _reportSegmentsMap.get(reportSerialNumber);
}
/**
* Add given ReportSegment to list of ReportSegments issued for this
* InboundBlock. We need to keep them around because when a ReportAck
* comes in, we need to know what is being acked.
* @param reportSegment ReportSegment to add
*/
public void addReportSegment(ReportSegment reportSegment) {
ReportSegment existingReportSegment = _reportSegmentsMap.get(
reportSegment.getReportSerialNumber());
if (existingReportSegment != null) {
throw new IllegalArgumentException(
"ReportSerialNumber " +
reportSegment.getReportSerialNumber() +
" is already in the block's outstanding list");
}
_reportSegmentsList.add(reportSegment);
_reportSegmentsMap.put(reportSegment.getReportSerialNumber(), reportSegment);
}
/**
* Determine if given newly arrived DataSegment is miscolored.
* 6.21. Handle Miscolored Segment
* This procedure is triggered by the arrival of either (a) a red-part
* data segment whose block offset begins at an offset higher than the
* block offset of any green-part data segment previously received for
* the same session or (b) a green-part data segment whose block offset
* is lower than the block offset of any red-part data segment
* previously received for the same session. The arrival of a segment
* matching either of the above checks is a violation of the protocol
* requirement of having all red-part data as the block prefix and all
* green-part data as the block suffix.
* @param block Block of which given Segment is a part
* @param dataSegment Given newly arrived DataSegment
* @return True if miscolored according to above definition
*/
public boolean isMiscolored(InboundBlock block, DataSegment dataSegment) {
if (dataSegment.isRedData()) {
// Given Segment is Red
for (DataSegment otherSegment : block) {
if (otherSegment.isGreenData()) {
if (dataSegment.getClientDataOffset() > otherSegment.getClientDataOffset()) {
// Found a Green Segment whose offset is lower than the given Red Segment's
return true;
}
}
}
} else {
// Given Segment is Green
for (DataSegment otherSegment : block) {
if (otherSegment.isRedData()) {
if (dataSegment.getClientDataOffset() < otherSegment.getClientDataOffset()) {
// Found a Red Segment whose offset is higher than the given Green Segment's
return true;
}
}
}
}
return false;
}
/**
* Remove given Report Segment from list of ReportSegments issued for this
* InboundBlock.
* @param reportSegment ReportSegment to remove
*/
public void removeReportSegment(ReportSegment reportSegment) {
_reportSegmentsMap.remove(reportSegment.getReportSerialNumber());
_reportSegmentsList.remove(reportSegment);
}
/**
* Serial number reported in a ReportSegment for an InboundBlock. Set to
* Random when first Segment of Block received. Incremented by 1 for each
* Resend of a ReportSegment.
*/
public ReportSerialNumber getReportSerialNumber() {
return _reportSerialNumber;
}
/**
* Serial number reported in a ReportSegment for an InboundBlock. Set to
* Random when first Segment of Block received. Incremented by 1 for each
* Resend of a ReportSegment.
*/
public ReportSerialNumber incrementReportSerialNumber() {
_reportSerialNumber.incrementSerialNumber();
return _reportSerialNumber;
}
/**
* Number of problems encountered in receipt of this inbound block
*/
public int getNumberReceptionProblems() {
return numberReceptionProblems;
}
/**
* Increment Number of problems encountered in receipt of this inbound block
* @return incremented value
*/
public int incrementNumberReceptionProblems() {
return ++numberReceptionProblems;
}
/**
* Determine if this session has encountered too many reception problems
* @return True if number of reception problems > system limit
*/
public boolean isTooManyReceptionProblems() {
return numberReceptionProblems >= LtpManagement.getInstance().getSessionReceptionProblemsLimit();
}
} |
Using an Implementation Research Framework to Identify Potential Facilitators and Barriers of an Intervention to Increase HPV Vaccine Uptake Background: Although the incidence of cervical cancer has been decreasing in the United States over the last decade, Hispanic and African American women have substantially higher rates than Caucasian women. The human papillomavirus (HPV) is a necessary, although insufficient, cause of cervical cancer. In the United States in 2013, only 37.6% of girls 13 to 17 years of age received the recommended 3 doses of a vaccine that is almost 100% efficacious for preventing infection with viruses that are responsible for 70% of cervical cancers. Implementation research has been underutilized in interventions for increasing vaccine uptake. The Consolidated Framework for Implementation Research (CFIR), an approach for designing effective implementation strategies, integrates 5 domains that may include barriers and facilitators of HPV vaccination. These include the innovative practice (Intervention), communities where youth and parents live (Outer Setting), agencies offering vaccination (Inner Setting), health care staff (Providers), and planned execution and evaluation of intervention delivery (Implementation Process). Methods: Secondary qualitative analysis of transcripts of interviews with 30 community health care providers was conducted using the CFIR to code potential barriers and facilitators of HPV vaccination implementation. Results: All CFIR domains except Implementation Process were well represented in providers' statements about challenges and supports for HPV vaccination. Conclusion: A comprehensive implementation framework for promoting HPV vaccination may increase vaccination rates in ethnically diverse communities. This study suggests that the CFIR can be used to guide clinicians in planning implementation of new approaches to increasing HPV vaccine uptake in their settings. 
Further research is needed to determine whether identifying implementation barriers and facilitators in all 5 CFIR domains as part of developing an intervention contributes to improved HPV vaccination rates. |
Young adults: beloved by food and drink marketers and forgotten by public health? Young adults are a highly desirable target population for energy-dense, nutrient-poor (EDNP) food and beverage marketing. But little research, resources, advocacy and policy action have been directed at this age group, despite the fact that young adults are gaining weight faster than previous generations and other population groups. Factors such as identity development and shifting interpersonal influences differentiate young adulthood from other life stages and influence the adoption of both healthy and unhealthy eating behaviours. EDNP food and beverage marketing campaigns use techniques to normalize brands within young adult culture, in particular through online social media. Young adults must be a priority population in future obesity prevention efforts. Stronger policies to protect young adults from EDNP food and beverage marketing may also increase the effectiveness of policies that are meant to protect younger children. Restrictions on EDNP food and beverage marketing should be extended to include Internet-based advertising and also aim to protect vulnerable young adults. |
//
// Created by lifan on 2021/4/30.
//
#include "apue.h"
#include <fcntl.h>

int main(void) {
    char *file = "huahua.txt";
    char *file2 = "huahua2.txt";
    int fd;
    char buf[] = "hello world";

    /* create (or open) the file for reading and writing */
    if ((fd = open(file, O_RDWR | O_CREAT, 0600)) == -1)
        err_sys("open error");
//    if (write(fd, buf, strlen(buf)) < 0)
//        err_sys("write error");
    printf("open done\n");
//    sleep(5);

    /* create a hard link file -> file2 */
//    if (link(file, file2) == -1)
//        err_sys("link error");
//    else
//        printf("link done\n");
//
//    sleep(5);

    /* remove file2 */
//    if (unlink(file2) == -1)
//        err_sys("unlink error");
//    else
//        printf("unlink file2 done\n");
//
//    sleep(5);

    /* remove file: the data persists until the open descriptor fd is closed */
    if (unlink(file) == -1)
        err_sys("unlink error");
    else
        printf("unlink file done\n");
    sleep(5);
    printf("done\n");
    exit(0);
}
|
Keratinocytes contribute intrinsically to psoriasis upon loss of Tnip1 function Significance Psoriasis is a complex inflammatory disease with clear genetic contribution that affects roughly 2% of the population in Europe and North America. Inflammation of the skin, and in many cases the joints, leads to severe clinical symptoms, including disfiguration and disability. Immune cells and their inflammatory effector functions have been identified as critical factors for disease development; however, how genetic susceptibility contributes to disease remains largely unclear. Here we developed mouse models based on the gene TNIP1, whose loss-of-function in humans is linked to psoriasis. Based on these models, we provide evidence that nonimmune cells, specifically skin-resident keratinocytes, contribute causally to disease. This work shifts attention to keratinocytes as causal contributors and therapeutic targets in psoriasis. Psoriasis is a chronic inflammatory skin disease with a clear genetic contribution, characterized by keratinocyte proliferation and immune cell infiltration. Various closely interacting cell types, including innate immune cells, T cells, and keratinocytes, are known to contribute to inflammation. Innate immune cells most likely initiate the inflammatory process by secretion of IL-23. IL-23 mediates expansion of T helper 17 (Th17) cells, whose effector functions, including IL-17A, activate keratinocytes. Keratinocyte activation in turn results in cell proliferation and chemokine expression, the latter of which fuels the inflammatory process through further immune cell recruitment. One question that remains largely unanswered is how genetic susceptibility contributes to this process and, specifically, which cell type causes disease due to psoriasis-specific genetic alterations. 
Here we describe a mouse model based on the human psoriasis susceptibility locus TNIP1, also referred to as ABIN1, whose gene product is a negative regulator of various inflammatory signaling pathways, including the Toll-like receptor pathway in innate immune cells. We find that Tnip1-deficient mice recapitulate major features of psoriasis on pathological, genomic, and therapeutic levels. Different genetic approaches, including tissue-specific gene deletion and the use of various inflammatory triggers, reveal that Tnip1 controls not only immune cells, but also keratinocyte biology. Loss of Tnip1 in keratinocytes leads to deregulation of IL-17induced gene expression and exaggerated chemokine production in vitro and overt psoriasis-like inflammation in vivo. Together, the data establish Tnip1 as a critical regulator of IL-17 biology and reveal a causal role of keratinocytes in the pathogenesis of psoriasis. |
/**
* Install iPXE download protocol
*
* @v handle EFI handle
* @ret rc Return status code
*/
int efi_download_install ( EFI_HANDLE handle ) {
EFI_BOOT_SERVICES *bs = efi_systab->BootServices;
EFI_STATUS efirc;
int rc;
efirc = bs->InstallMultipleProtocolInterfaces (
&handle,
&ipxe_download_protocol_guid,
&ipxe_download_protocol_interface,
NULL );
if ( efirc ) {
rc = -EEFI ( efirc );
DBG ( "Could not install download protocol: %s\n",
strerror ( rc ) );
return rc;
}
return 0;
} |
export function isNestedInExport(node: any, document: any): boolean {
  // Report whether any ancestor of the node is a pointerExport block.
  const ancestorNodes = document.getAncestors(node.key);
  return ancestorNodes.some(
    (ancestor: any) => ancestor.type === "pointerExport",
  );
}
|
#ifndef LI_RAND_H_
#define LI_RAND_H_

#include "first.h"

/* fast, non-cryptographic pseudo-random number */
int li_rand_pseudo (void);

/* fill buf with num fast, non-cryptographic pseudo-random bytes */
void li_rand_pseudo_bytes (unsigned char *buf, int num);

/* reseed the random number generators (e.g. after fork()) */
void li_rand_reseed (void);

/* fill buf with num cryptographically strong random bytes */
int li_rand_bytes (unsigned char *buf, int num);

/* release random number generator resources */
void li_rand_cleanup (void);

#endif
|
Chuck E. Cheese, the restaurant chain where kids eat pizza with their bare hands after swimming around in a plastic ball pit, is going public via a reverse merger with Leo Holdings (NYSE: LHC), a blank check acquisition company formed by private equity firm Lion Capital.
Why it matters: This would be the first restaurant company to enter the U.S. public markets in four years. Or, in this case, re-enter.
Backstory: Apollo Global Management paid $1.3 billion to take Chuck E. Cheese private in 2014 (including debt) and will hold a 51% stake in the newly formed company. Apollo held sale talks in early 2017, but no deal materialized.
Bottom line: "Its average customer is between 3 and 8 years old. That means around 20% of your customer base ages out every year and a new group ages in, so it remains quite fresh for new customers, even though it's been around for 42 years." — Lyndon Lea, co-founder of Lion Capital, speaking to Axios. |
1. Field of the Invention
The present invention relates to an electrical connector assembly, and more particularly, to an electrical connector assembly having an improved load plate to make it easy to close or open.
2. Description of the Prior Art
U.S. Pat. No. 7,160,130, issued to Ma on Jan. 9, 2007, discloses a conventional electrical connector for electrically connecting a CPU with a PCB. The electrical connector comprises a socket body having a number of terminals, a stiffener attached to the socket body, and a load plate and a load lever pivotally mounted to two ends of the stiffener, respectively. The load plate comprises a body plate and a pressing side with an interlocking element at one end thereof extending forwardly from the body plate. The load lever is formed by bending a single metallic wire and includes a pair of rotary shafts, a locking section disposed between the rotary shafts, and an actuating section, bent at a right angle with respect to the rotary shafts, for rotating them; a distal end of the actuating section is formed into a U-like shape to provide a handle for ease of actuation.
When the CPU is assembled to the socket body, the load plate is pivoted to a closed position and is locked by the locking section of the load lever. Thus, the load plate exerts a force on the CPU to make a good electrical connection between the CPU and the terminals of the electrical connector.
When the load lever is operated to bring the load plate to the closed position, the interlocking element must be lower than the locking section; otherwise, the locking section cannot press on the interlocking element and thus cannot drive the load plate to create enough downward force for providing a reliable interconnection between the CPU and the socket.
In view of the above, a new electrical connector assembly that overcomes the above-mentioned disadvantages is desired. |
package com.dyllongagnier.cs4150.timing;
import java.time.Duration;
public class TimedIterations
{
private int iterations;
private Duration time;
public TimedIterations(Duration time, int iterations)
{
this.setTime(time);
this.setIterations(iterations);
}
public int getIterations()
{
return this.iterations;
}
public Duration getTime()
{
return this.time;
}
private void setIterations(int iterations)
{
	// Guard: getTimePerIteration divides by this value, so it must be positive.
	if (iterations <= 0)
		throw new IllegalArgumentException("iterations must be positive");
	this.iterations = iterations;
}
private void setTime(Duration time)
{
if (time == null)
throw new NullPointerException();
this.time = time;
}
public Duration getTimePerIteration()
{
return this.getTime().dividedBy(this.getIterations());
}
}
|
A Micron-Sized Laser Photothermal Effect Evaluation System and Method

The photothermal effects of lasers have played an important role in both medical laser applications and the development of cochlear implants with optical stimulation. However, there are few methods to evaluate the thermal effect of micron-sized laser spots interacting with other tissues. Here, we present a multi-wavelength micro-scale laser thermal effect measuring system that has high temporal, spatial and temperature resolutions, and can quantitatively realize evaluations in real time. In this system, with accurate 3D positioning and flexible pulsed laser parameter adjustments, groups of temperature changes are systematically measured when the micron-sized laser spots from six kinds of wavelengths individually irradiate the Pd/Cr thermocouple junction area, and reference data of laser spot thermal effects are obtained. This work develops a stable, reliable and universal tool for quantitatively exploring the thermal effect of micron-sized lasers, and provides basic reference data for research on light-stimulated neuron excitement in the future.

Introduction

Photothermal effects play a very important role in not only medical treatment, but also light-regulated nerve excitement research. More and more studies have confirmed that light can directly cause neuronal excitement under non-transgenic conditions. As early as the 1960s, Arvanitaki et al. quantified the effect of visible and near-infrared light with different wavelengths on nerve cells. In 1971, Fork et al. measured nerve impulses without causing obvious irreversible damage to nerve tissue when continuously irradiated by the 488 nm laser, demonstrating the safety of light-stimulated neurons for the first time. In 2002, Hirase et al. used near-infrared pulsed lasers to stimulate the cerebral cortex, and found that the action potentials generated on neurons are likely to be related to the amount of laser radiation. In 2005, Wells et al.
at Vanderbilt University revealed that mid-infrared lasers with a wavelength of 2–6 μm can induce action potentials without damage in the sciatic nerves of rats. From 2006 to 2011, Richter's team at Northwestern University further tried to stimulate the auditory nervous system of guinea pigs with pulsed infrared lasers, and confirmed that it was feasible to stimulate the spiral ganglion cells to induce auditory nerve impulses with pulsed lasers. They also initially explored the influence of laser parameters such as wavelength, pulse width, stimulation frequency and pulse energy. Their work created a precedent in the study of cochlear implants with optical stimulation. Many research groups have followed up on ear nerve stimulation research with pulsed lasers in various bands. The latest research from our team found that a pulsed laser of 450 nm wavelength can cause calcium ion channels on spiral ganglion neurons to open. However, the mechanisms of the laser-induced excitement of neurons and non-transgenic spiral ganglion cells are still unclear. The photochemical and photoelectric effects are basically excluded. At present, it is generally accepted that the excitement is the result of photoacoustic effects, photothermal effects, or a combination of the two. The photoacoustic hypothesis states that a pulsed laser generates pressure waves in tissues that stimulate the cochlear hair cells. Studies supporting the photothermal effect have found that the absorption of lasers by nerve tissue causes a temporary local temperature increase in the target tissue, which may activate voltage-gated or heat-sensitive ion channels to generate nerve impulses. In addition, the neural electromechanical soliton model also claims that nerve impulses are conducted in the form of heat. A systematic study of the photothermal response of lasers in various frequency bands is of great significance for clarifying the mechanism of light-induced nerve excitation.
At present, research on laser photothermal evaluations is mostly based on the subjective feeling of subjects, which is not objective enough; or on the measurement of temperature changes resulting from the photothermal effect of lasers on a centi/millimeter scale. However, studies on light-induced nerve excitement always employ laser beams with micron-sized diameters, and there are a lack of available temperature measuring devices and systems for micron-sized lasers. Firstly, the widely developed radiative measurement methods using fluorescence or infrared light are inapplicable, because the fluorescence probes are easily affected by intracellular chemical components, and the infrared light used to measure temperature and lasers used for photothermal study will interfere with each other, resulting in the inaccuracy of photothermal effect evaluations. Secondly, the micro-scale laser irritation area determines the requirement of micron sizes for temperature sensors. In addition, it is hard to position micron-sized laser beams on a micron-sized sensing region with the naked eye and with manual operation. Moreover, the close irradiation requirements in light-stimulated nerve excitement research limit the introduction of microscopes to assist positioning. Here, we develop a non-radiative temperature measurement system constructed by a Pd/Cr thin film thermocouple array, 3D positioning module and real-time data processing program, which can be used to quantitatively evaluate the photothermal effect of micronsized lasers and other tissues. To obtain the background reference data of the system and establish a complete photothermal evaluation model in the future, the temperature response of Pd/Cr thin film thermocouples under the direct radiation of pulsed lasers in the 450-1064 nm band are systematically explored, excluding the other influences from water, protein molecules and other substances. 
Fabrication of Thermocouple Device The devices, including thin film thermocouple (TFTC) arrays for measuring temperature, were fabricated via standard cleanroom techniques on 4-inch wafers, as published elsewhere. The Si <100> wafers were coated with 400 nm thick Si 3 N 4, on which the Pd/Cr thermocouple arrays serving as the temperature sensors were constituted by the deposition of Pd thin films and Cr thin films. The freestanding Si 3 N 4 windows were etched using published methods to improve the sensitivity of the thermocouple sensors on them. Later, the wafer with TFTC arrays was electrically connected with the designed printed circuit board using a wire-bonding process. ∆T-∆V Here, a thermocouple made of Pd and Cr thin films was chosen as the thermometer. Their junction served as the hot end, and their electrode pads served as the cold ends. The voltage difference between the two cold ends was proportional to the temperature difference between the hot junction and the cold ends: where S is the Seebeck coefficient difference between these two metals, Pd and Cr. The thermopower of Pd/Cr thermocouples was calibrated on a homemade calibration platform, and showed a stable sensitivity of 21 ± 0.1 V/K and an accuracy of ± 20 mK at room temperature. The Relationship between the Physical Parameters of the Laser We calibrated the peak power of the semiconductor laser sources with six different wavelengths, and calculated its average power in pulse mode, P average, with the following formula: where P peak is the peak power of the pulsed laser, t duration is the duration of a pulse (200 s in this work), and f is the pulse repetition frequency. 
The energy of a single pulse, E peak, is: Additionally, the laser energy density of a pulsed laser irradiated on a device per second, i.e., the average energy density per second, D energy, can be calculated with the following formula: where S spot area is the laser's spot area on the device of 0.19 mm 2 when the laser beam reaches the device 1 mm away. The Measurement Method for Evaluating Photothermal Effect of Micron-Sized Lasers The photothermal temperature measurement system for micron-sized laser was composed of four parts: the laser generation device, the three-dimensional positioning module, the temperature measurement device, the data acquisition and processing module, as shown in Figure 1. The laser generator module could output six kinds of lasers, including 450 nm, 525 nm, 638 nm, 810 nm, 980 nm and 1064 nm. The laser output mode (continuous mode or pulse mode), light intensity, as well as the pulse width and repetition frequency in pulse mode, could be adjusted. The movement accuracy of the 3D positioning module could reach 0.03 mm, which helped the laser beam to irradiate the junction area of the thermocouple accurately. The temperature measurement device is made of nine dependent measurement units, on each of which four micron-sized Pd/Cr thin film thermocouples are prepared on the freestanding Si 3 N 4 platform (400 nm) to improve the sensitivity and accuracy of the measurement, as shown in Figure 2a. Based on the Seebeck effect, thermocouples can convert the temperature difference between the hot and cold ends into a weak voltage between the cold ends of Pd and Cr film arms. The data acquisition and processing module is constituted of nanovoltmeter, multiplexer and LabVIEW software. 
The output voltage of the Pd/Cr thermocouples was in the order of microvolts; therefore, a high-precision nanovoltmeter, Keithley 2182A, was used to measure the voltage between the ends of the two ends of the thermocouples, and transmit the data to the processing part in LabVIEW software in computer, through GPIB-HS. The introduction of the multiplexer is one of the keys to realizing precise positioning of the laser beam. Through the preset program in LabVIEW, the multiplexer can sequentially turn on the electrical path to each thermocouple to enable the nanovoltmeter to detect its output voltage. Cyclic detection for the four thermocouples needed 0.4 s, because the switching time of the used mechanical switches was almost 80 ms. As a result, the continuous cyclic temperature detection of the thermocouple array was similar to real-time measuring. The LabVIEW program can not only control the data acquisition, but also display the output voltage value of each thermocouple in real time. In this system, a method for precisely positioning the laser beam was designed by combining the real-time temperature measurement of the thermocouple array shown in Figure 2 and the 3D positioning module. As shown in Figure 2b, the distance between thermocouples was 150 m, and the laser beam had an emission spot diameter of 200 m. Only when the laser beam irradiated the thermocouple junction area and its surrounding area was there a non-noise voltage output between the two ends of the thermocouple. The output voltage of each thermocouple in the array were monitored in laser fiber's positioning operation. The 3D locator bound with the optical fiber was adjusted slowly until an output voltage of the thermocouples was produced, indicating that the laser beam was irradiating on or near the junction area. 
Later, we fine-tuned the position of the fiber in a selected direction (X-axis or Y-axis), shown as the blue line in Figure 2b, until the voltage output of the thermocouple reached the maximum value. At this time, the laser beam irradiated site 3 in Figure 2b. Then, we continued to fine-tune the position of the fiber in another direction, perpendicular to the former one, shown as the orange line in Figure 2b, until the voltage output reached the maximum value again. At this time, the laser beam was accurately irradiated on the junction area of the thermocouple, at site 6 in Figure 2b. The precise positioning method is illustrated in Figure 2b,c.

Evaluation of the Photothermal Effect in Micron-Sized Lasers Interacting with the Pd/Cr Thermocouple

The photothermal effects of the pulsed micron-sized lasers with different wavelengths (450 nm, 525 nm, 638 nm, 810 nm, 980 nm, and 1064 nm) and different pulse frequencies (mainly 50 Hz, 100 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 800 Hz, and 1000 Hz) were evaluated using the homemade real-time temperature measurement system. Previous experimental results showed that a laser with a pulse width of 200 μs could cause neuronal excitement; therefore, the widths of the pulsed lasers in this work were all fixed at 200 μs. In the preliminary experiment, we observed the voltage output of the thermocouple, which represents the temperature change at the junction of the thermocouples and further reflects the photothermal effect, as the peak power of the pulsed laser gradually increased. The experimental results illustrated in Figure 3 showed that when the repetition frequency of the pulsed laser was below 50 Hz, the output voltage curve behaved as many individual spikes, in which the peak value of the temperature change gradually increased with the peak power of the lasers, as shown in Figure 3a.
This is because the heat deposited on the Pd/Cr thin film thermocouple by a single pulse was quickly dissipated when the interval between two adjacent laser pulses was too long, i.e., when the frequency was very low. When the pulse frequency was larger than 100 Hz, the voltage output curve versus time drawn by the LabVIEW program displayed a stepwise increase with the increase in the peak power of the laser beam, as shown in Figure 3c. Furthermore, the output voltage was maintained in a relatively stable range when the peak power was constant. This phenomenon makes reliable temperature measurement for evaluating the photothermal effects of laser-Pd/Cr interactions possible. As a result, pulse frequencies of ≥50 Hz were chosen in this work, and the average temperature rise on the stairs of the voltage output curve was taken as the temperature change under the corresponding laser irradiation. The relationships between the temperature rise of the Pd/Cr thin film thermocouple junction and the peak power of lasers with different wavelengths (450 nm, 525 nm, 638 nm, 810 nm, 980 nm, and 1064 nm) are all plotted in Figure 4. For lasers with the same wavelength, the local temperature increased rapidly with the pulse frequency when the peak power was fixed, which is a conceivable result. As the frequency increased, the laser energy irradiated on the thermocouple per second increased by a corresponding multiple, raising the temperature of the thermocouple. The relationships between the temperature rise and the average energy density of the lasers with different wavelengths are also plotted in Figure 5. The results show that the temperature rise is nearly linearly proportional to the average energy density of a laser when its pulse frequency is fixed, indicating that the ratio of photothermal effects to the total laser energy is relatively stable in the laser-Pd/Cr film interaction.
Moreover, when the average energy density is the same, the local temperature change will decrease with the pulse frequency for lasers in the wavelength range of 450–980 nm. This demonstrates that, compared to the total irradiation duration, the laser intensity has a greater impact on the local temperature rise of the thermocouples. However, for a laser with 1064 nm wavelength, the experimental result was the opposite; the local temperature change increased with the pulse frequency. This seems to indicate that, compared with the lasers with a shorter wavelength, the strength of the photothermal effect is more irradiation time-dependent for the 1064 nm laser.

Relationship between the Photothermal Effect of Laser-Thermocouple Interactions and the Wavelength

Due to differences in the laser wavelength, pulse frequency and peak power, the heat collected from the irradiation of focused laser beams with different light parameters by the same thermocouple is different. The above results show that the temperature rise of the Pd/Cr thermocouple caused by laser-Pd/Cr interactions is closely related to the peak power and pulse frequency of the laser. In order to explore the dependence of photothermal responses on wavelengths, the peak power required to produce the same temperature rise of the thermocouple junction area at the same pulse frequency, which is actually the required total energy, should be compared for lasers with different wavelengths. Therefore, we extracted the peak power required by the lasers at pulse frequencies of 300 Hz, 500 Hz, and 1000 Hz to raise the junction temperature by 15 °C and 20 °C. As shown in Table 1, at a fixed pulse frequency, the peak power required for the same temperature change increased with the laser wavelength in the range of 450–810/980 nm, whereas it decreased with the wavelength for lasers of 810, 980 and 1064 nm.
The results mean that the photothermal effects of lasers with wavelengths of 810 nm and 980 nm seem the worst, because they need more energy to produce the same temperature rise of the Pd/Cr thermocouples. The largest values of the required peak powers at each pulse frequency in Table 1 are highlighted in bold. To compare the photothermal effects of lasers with different wavelengths more directly, we analyzed the relationship between the temperature rise caused by laser-Pd/Cr metal film interactions and the laser energy at a pulse frequency of 500 Hz, as shown in Figure 6. For lasers with wavelengths below 810 or 980 nm, the temperature rise is inversely proportional to the wavelength, consistent with Table 1. In summary, in the visible light band (450–810 nm), the shorter the wavelength of the laser, the stronger the photothermal effect it demonstrates; however, in the infrared light band (980–1064 nm), it seems that the longer the wavelength, the stronger the photothermal effect. This photothermal effect law in laser-Pd/Cr metal film interactions is unexpectedly consistent with that in laser-biological tissue interactions. For example, the photothermal effects of lasers with different wavelengths were reported in 2018, where 532 and 1470 nm lasers induced higher temperatures in liver tissues than those induced by 808 and 980 nm lasers. Interestingly, it is well known that the composition and morphology of biological tissue are quite different from those of metal film. The mechanism of the similar trend between them needs to be further studied in the future.

Feasibility of Applying this System to Photothermal Evaluation

The process of the thermocouple temperature sensor measuring the temperature rise generated by laser irradiation can be divided into two stages from the microscopic viewpoint. First, the electron, A, in the metal junction of the thermocouples jumps to the excited state A* after absorbing a photon with an energy of hν.
Later, the excited electron A* collides inelastically with the medium M in the surroundings, which causes A* to deactivate and increases the kinetic energy of M at the same time. This process of energy conversion from lasers into the kinetic energy of metal molecules leads to a rise in the temperature of the Pd/Cr metal junctions. These two stages can be expressed as an absorption stage, A + hν → A*, and a deactivation stage, A* + M(E_k) → A + M(E_k + ΔE_k). The dependence of photothermal effects in laser-Pd/Cr interactions on wavelengths can be well explained with the above microscopic analysis. The shorter the wavelength, the higher the photon energy of the laser, which benefits the higher energy of metal electrons. Thus, the kinetic energy of the metal medium M will increase more via inelastic collisions with excited electrons. Temperature reflects the average thermal kinetic energy of microscopic metal particles. As a result, under the same irradiation energy, the higher temperature of the metal is caused by light with shorter wavelengths in the visible band. As for the unusually high photothermal effect of infrared band lasers, further study is needed. Another reason for the wavelength dependence of photothermal effects in laser-Pd/Cr interactions is that the absorption rate of metal to light varies with the wavelength, which also contributes to the final total law of photothermal effects. This work reflects the high absorption of laser radiation by the metal junction of the Pd/Cr thermocouple. It is conceivable that when the thermocouple device is used to evaluate the photothermal effect of laser-tissue interactions, the heat generated by the thermocouple itself due to the irradiation of the attenuated laser after penetrating tissues may interfere with the measurement results.
Fabrice Manns's team at the University of Miami reported that when a commercial stainless-steel thermocouple was close to the laser beam, the temperature of the thermocouple itself rose by nearly 20 °C, which introduced a significant overestimation of the actual temperature of the measured tissues. To reduce this interference, the size of the thermocouple junction for detecting temperature used in this study was designed at the sub-micron scale, 3 μm × 3 μm × 100 nm. Thus, the temperature rise caused by the metal thermocouple itself should be much weaker. In addition, the thin film thermocouple employs the contact-type method to detect the tissues' temperature in the research of laser-tissue interactions, i.e., the examined tissues are located between the laser beam and the thermocouple sensor. The intensity of the laser penetrating the tissue and irradiating the thermocouple junction will remain too low to cause obvious impacts. An early study confirmed this conclusion indirectly. Therefore, we infer that the system based on a Pd/Cr thin film thermocouple can truly reflect the temperature change of a tissue being measured in photothermal evaluations, especially when combined with the obtained background reference data. Although lacking biological experiments, the experiments on thermocouples irradiated by lasers are sufficient to verify the feasibility of the homemade system in evaluating the photothermal effects of micron-sized lasers interacting with other tissues. Referring to the obtained basic data, a mathematical model of the photothermal effect in laser-tissue interactions can be established using this system. In the future, this system can be applied in laser-neuron interaction studies to quantitatively evaluate the role of photothermal effects in stimulating neuronal excitement, which is beneficial for clarifying the response mechanism of auditory nerves under laser stimulation, and to promote the study of optical cochlear implants.
Conclusions

Combining a micron-scale Pd/Cr thin film thermocouple array, a precise 3D positioning module and a real-time data processing program, we have developed a stable, reliable and real-time temperature measuring system to quantitatively evaluate the photothermal effects of micron-sized lasers interacting with other tissues. Using this system made in-house, we studied the influencing factors of the photothermal effect of micron-sized pulsed lasers, such as wavelengths, peak power and pulse frequencies, when they directly irradiated the thermocouples without other tissues. The measurement results showed that when the wavelength and pulse frequency were fixed, the temperature measured by the thermocouple changed linearly with the energy density or peak power, indicating the stability of the light-to-heat ratio. The photothermal effect was negatively related to the laser wavelength in the visible light band, but positively in the infrared light band. This work not only provides a universal measuring system for photothermal evaluations of the interaction between micron-sized lasers and other tissues in the future, but also provides basic reference data for the construction of a photothermal effect model.

Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest. |
#ifndef CLOCK_H
#define CLOCK_H
#include "enum.h"
#include <chrono>
#include <string>
typedef std::chrono::milliseconds timer_res;
typedef std::chrono::time_point<std::chrono::system_clock> TimerTime;
enum IncrementMethod
{
INCREMENT,
DELAY,
BRONSTEIN,
NO_CLOCK
};
std::string inctToString(IncrementMethod);
IncrementMethod stringToInct(std::string);
TimerTime get_current_time();
unsigned int time_to_mill(TimerTime t);
class Timer
{
public:
// all times in milliseconds
Timer(unsigned int startingTime = 5*60*1000, unsigned int increment = 15000, IncrementMethod inct = NO_CLOCK);
unsigned int update(); // returns timeLeft
unsigned int toggle(); // returns timeLeft
//void set_running(bool);
inline bool isRunning();
unsigned int getDelayLeft();
unsigned int getTimeSpentThisTurn();
void setToNoClock();
inline IncrementMethod getIncType();
inline unsigned int getIncrementAmount();
protected:
IncrementMethod incType;
unsigned int increment;
bool running;
unsigned int timeLeft;
TimerTime lastToggle;
unsigned int timeLeftAtStartOfTurn;
unsigned int timeSpentThisTurn;
};
class Clock
{
public:
Clock();
Clock(unsigned int startingTime, unsigned int increment, IncrementMethod inct);
Clock(unsigned int startingTimeWhite, unsigned int incrementWhite, IncrementMethod inctWhite, unsigned int startingTimeBlack, unsigned int incrementBlack, IncrementMethod inctBlack);
unsigned int toggle();
void stop();
void setToNoClock();
unsigned int getWhiteTime();
unsigned int getBlackTime();
bool isWhiteRunning();
bool isBlackRunning();
IncrementMethod getWhiteIncType();
unsigned int getWhiteIncrementAmount();
IncrementMethod getBlackIncType();
unsigned int getBlackIncrementAmount();
unsigned int getDelayLeft();
unsigned int getTimeSpentThisTurn();
GameResult getResultFromFlag();
protected:
Timer white_timer;
Timer black_timer;
};
#endif
|
When the 165 employees in the Department of Education's New York regional office look out their Manhattan lunchroom window, they see Ground Zero.
The two floors occupied by the department's regional office had remained empty since Sept. 11, when terrorists leveled the twin towers and carved a hole in the Pentagon. The building, at 75 Park Place, was just a few hundred feet to the north of the World Trade Center site—so close that debris scarred the structure and the windows in the lobby were blown out.
But on Aug. 5, the employees moved back in.
"It was good to be back and see my colleagues I hadn't seen in a long time," said Brian Hickey, a special agent with the agency's office of inspector general.
Since the attack nearly a year ago, the employees had been working out of their homes, or out of two temporary offices in Brooklyn. They were waiting for building renovations and redecorating, and for emotions to run their course.
On the morning of Sept. 11, Mr. Hickey was on his way to work when he heard that an airplane had flown into one of the World Trade Center towers. From his car, he saw the second plane hit.
By cellphone, Mr. Hickey learned that a new hire in the New York inspector general office, a former Federal Aviation Administration air marshal, evacuated the two floors of education employees. Everyone got out alive and uninjured.
The department reacted quickly, Mr. Hickey said. Laptops, printers, and cellphones arrived days later at employees' homes. Within weeks, many were back to work in the makeshift Brooklyn offices. Last Sept. 15, Secretary Rod Paige toured Ground Zero and met with some department workers.
"He pretty much offered us anything we needed," Mr. Hickey said.
Last month, employees returned to find new carpets and furniture—and a counselor on each floor. Mr. Paige visited Aug. 9 to welcome the employees back.
The decor isn't the only thing that has changed in the office. Hundreds of tourists now throng sidewalks outside the building as they make their pilgrimage to the attack site. And the view out the lunchroom window, which used to be of the steel-and-glass towers, is now mostly a construction site. But employees have resolved to stay, Mr. Hickey said.
"We're proud of being New Yorkers," he said, "and we're proud of working here." |
use super::*;
impl<O, M, E> Category<O, M, E> {
pub fn apply_rule<L: Label>(
&mut self,
rule: &Rule<L>,
bindings: Bindings<L>,
object_constructor: impl Fn(Vec<ObjectTag<&Object<O>>>) -> O,
morphism_constructor: impl Fn(
MorphismConnection<&Object<O>>,
Vec<MorphismTag<&Object<O>, &Morphism<M>>>,
) -> M,
equality_constructor: impl Fn(&Equality) -> E,
) -> (Vec<Action<O, M, E>>, bool) {
self.apply_impl(
rule.get_statement(),
bindings,
&object_constructor,
&morphism_constructor,
&equality_constructor,
)
}
fn apply_impl<L: Label>(
&mut self,
statement: &[RuleConstruction<L>],
bindings: Bindings<L>,
object_constructor: &impl Fn(Vec<ObjectTag<&Object<O>>>) -> O,
morphism_constructor: &impl Fn(
MorphismConnection<&Object<O>>,
Vec<MorphismTag<&Object<O>, &Morphism<M>>>,
) -> M,
equality_constructor: &impl Fn(&Equality) -> E,
) -> (Vec<Action<O, M, E>>, bool) {
let construction = match statement.first() {
Some(construction) => construction,
None => return (Vec::new(), false),
};
let statement = &statement[1..];
match construction {
RuleConstruction::Forall(constraints) => self
.find_candidates(constraints, &bindings)
.map(|candidates| candidates.collect::<Vec<_>>())
.unwrap_or_else(|| vec![Bindings::new()])
.into_iter()
.map(|mut binds| {
binds.extend(bindings.clone());
self.apply_impl(
statement,
binds,
object_constructor,
morphism_constructor,
equality_constructor,
)
})
.fold(
(Vec::new(), false),
|(mut acc_actions, acc_apply), (action, apply)| {
acc_actions.extend(action);
(acc_actions, acc_apply || apply)
},
),
RuleConstruction::Exists(constraints) => {
let candidates = self
.find_candidates(constraints, &bindings)
.map(|candidates| candidates.collect::<Vec<_>>())
.unwrap_or_else(|| vec![Bindings::new()]);
if candidates.is_empty() {
let (mut actions, new_binds) = self.apply_constraints(
constraints,
&bindings,
object_constructor,
morphism_constructor,
equality_constructor,
);
actions.extend(
self.apply_impl(
statement,
new_binds,
object_constructor,
morphism_constructor,
equality_constructor,
)
.0,
);
(actions, true)
} else {
candidates
.into_iter()
.map(|mut binds| {
binds.extend(bindings.clone());
// Keep object and morphism constraints to add extra tags
let constraints =
constraints.iter().filter(|constraint| match constraint {
Constraint::Object { .. } | Constraint::Morphism { .. } => true,
Constraint::Equality(_) => false,
});
let (mut actions, binds) = self.apply_constraints(
constraints,
&binds,
object_constructor,
morphism_constructor,
equality_constructor,
);
let (new_actions, _) = self.apply_impl(
statement,
binds,
object_constructor,
morphism_constructor,
equality_constructor,
);
actions.extend(new_actions);
(actions, true)
})
.fold(
(vec![], false),
|(mut acc_actions, acc_apply), (action, apply)| {
acc_actions.extend(action);
(acc_actions, acc_apply || apply)
},
)
}
}
}
}
}
|
Factors associated with the development of equine degenerative myeloencephalopathy. A case-control study was done to identify factors associated with the development of equine degenerative myeloencephalopathy (EDM). Questionnaires were mailed to the owners of 146 horses admitted to the New York State College of Veterinary Medicine between November 1978 and June 1987 and diagnosed as having EDM by histologic examination. Questionnaires also were sent to owners of 402 clinically normal horses admitted to the college during the same period. Data were compared between the EDM-affected and control groups (56 and 179 questionnaires returned, respectively). Risk factors identified included the use of insecticide applied to foals, exposure of foals to wood preservatives, and foals frequently spending time on dirt lots while outside. Foals spending time outside on green pastures was a protective factor. Foals from dams that had had an EDM-affected foal were at higher risk of developing EDM than were foals from other dams. |
/*
* TemplateResult.java
*
* Questa classe permette di generare facilmente output a partire da template
* Freemarker. Gestisce vari modelli di dati, passati direttamente o attraverso
* la request, l'uso di outline automatici, e si configura automaticamente
* in base a una serie di init parameters del contesto (si veda il codice e
* il file web.xml per informazioni)
*
 * This class supports the output generation using the Freemarker template
* engine. It handles data models passed explicitly or through the request,
* automatic page outline, and automatically configures using the context
* init parameters (see web.xml).
*
*/
package Framework.result;
import freemarker.core.HTMLOutputFormat;
import freemarker.core.JSONOutputFormat;
import freemarker.core.XMLOutputFormat;
import freemarker.template.Configuration;
import freemarker.template.DefaultObjectWrapperBuilder;
import freemarker.template.Template;
import freemarker.template.TemplateDateModel;
import freemarker.template.TemplateException;
import freemarker.template.TemplateExceptionHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.UnsupportedEncodingException;
import java.io.Writer;
import java.util.Calendar;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
/**
*
* @author <NAME>
*/
public class TemplateResult {
protected ServletContext context;
protected Configuration cfg;
protected DataModelFiller filler;
public TemplateResult(ServletContext context) {
this.context = context;
init();
}
private void init() {
cfg = new Configuration(Configuration.VERSION_2_3_26);
//impostiamo l'encoding di default per l'input e l'output
//set the default input and output encoding
if (context.getInitParameter("view.encoding") != null) {
cfg.setOutputEncoding(context.getInitParameter("view.encoding"));
cfg.setDefaultEncoding(context.getInitParameter("view.encoding"));
}
//impostiamo la directory (relativa al contesto) da cui caricare i templates
//set the (context relative) directory for template loading
if (context.getInitParameter("view.template_directory") != null) {
cfg.setServletContextForTemplateLoading(context, context.getInitParameter("view.template_directory"));
} else {
cfg.setServletContextForTemplateLoading(context, "templates");
}
if (context.getInitParameter("view.debug") != null && context.getInitParameter("view.debug").equals("true")) {
//impostiamo un handler per gli errori nei template - utile per il debug
//set an error handler for debug purposes
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.HTML_DEBUG_HANDLER);
} else {
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.IGNORE_HANDLER);
}
//formato di default per data/ora
//date/time default formatting
if (context.getInitParameter("view.date_format") != null) {
cfg.setDateTimeFormat(context.getInitParameter("view.date_format"));
}
//classe opzionale che permette di riempire ogni data model con dati generati dinamicamente
//optional class to automatically fill every data model with dynamically generated data
filler = null;
if (context.getInitParameter("view.model_filler") != null) {
try {
filler = (DataModelFiller) Class.forName(context.getInitParameter("view.model_filler")).newInstance();
} catch (ClassNotFoundException ex) {
Logger.getLogger(TemplateResult.class.getName()).log(Level.SEVERE, null, ex);
} catch (InstantiationException ex) {
Logger.getLogger(TemplateResult.class.getName()).log(Level.SEVERE, null, ex);
} catch (IllegalAccessException ex) {
Logger.getLogger(TemplateResult.class.getName()).log(Level.SEVERE, null, ex);
}
}
//impostiamo il gestore degli oggetti - trasformerà in hash i Java beans
//set the object handler that allows us to "view" Java beans as hashes
DefaultObjectWrapperBuilder owb = new DefaultObjectWrapperBuilder(Configuration.VERSION_2_3_26);
owb.setForceLegacyNonListCollections(false);
owb.setDefaultDateType(TemplateDateModel.DATETIME);
cfg.setObjectWrapper(owb.build());
}
//questo metodo restituisce un data model (hash) di base,
//(qui inizializzato anche con informazioni di base utili alla gestione dell'outline)
//this method returns a base data model (hash), initialized with
//some useful information
protected Map getDefaultDataModel() {
//inizializziamo il contenitore per i dati di default
//initialize the container for default data
Map default_data_model = new HashMap();
//se è stata specificata una classe filler, facciamole riempire il default data model
//if a filler class has been specified, let it fill the default data model
if (filler != null) {
filler.fillDataModel(default_data_model);
}
//iniettiamo alcuni dati di default nel data model
//inject some default data in the data model
default_data_model.put("compiled_on", Calendar.getInstance().getTime()); //data di compilazione del template
default_data_model.put("outline_tpl", context.getInitParameter("view.outline_template")); //eventuale template di outline
//aggiungiamo altri dati di inizializzazione presi dal web.xml
//add other data taken from web.xml
Map init_tpl_data = new HashMap();
default_data_model.put("defaults", init_tpl_data);
Enumeration parms = context.getInitParameterNames();
while (parms.hasMoreElements()) {
String name = (String) parms.nextElement();
if (name.startsWith("view.data.")) {
init_tpl_data.put(name.substring(10), context.getInitParameter(name));
}
}
return default_data_model;
}
//questo metodo restituisce un data model estratto dagli attributi della request
//this method returns the data model extracted from the request attributes
protected Map getRequestDataModel(HttpServletRequest request) {
Map datamodel = new HashMap();
Enumeration attrs = request.getAttributeNames();
while (attrs.hasMoreElements()) {
String attrname = (String) attrs.nextElement();
datamodel.put(attrname, request.getAttribute(attrname));
}
return datamodel;
}
//questo metodo principale si occupa di chiamare Freemarker e compilare il template
//se è stato specificato un template di outline, quello richiesto viene inserito
//all'interno dell'outline
//this main method calls Freemarker and compiles the template
//if an outline template has been specified, the requested template is
//embedded in the outline
protected void process(String tplname, Map datamodel, Writer out) throws TemplateManagerException {
Template t;
//assicuriamoci di avere sempre un data model da passare al template, che contenga anche tutti i default
//ensure we have a data model, initialized with some default data
Map<String, Object> localdatamodel = getDefaultDataModel();
//nota: in questo modo il data model utente può eventualmente sovrascrivere i dati precaricati da getDefaultDataModel
//ad esempio per disattivare l'outline template basta porre a null la rispettiva chiave
//note: in this way, the user data model can possibly overwrite the defaults generated by getDefaultDataModel
//for example, to disable the outline generation we only need to set null the outline_tpl key
if (datamodel != null) {
localdatamodel.putAll(datamodel);
}
String outline_name = (String) localdatamodel.get("outline_tpl");
try {
if (outline_name == null || outline_name.isEmpty()) {
//se non c'è un outline, carichiamo semplicemente il template specificato
//if an outline has not been set, load the specified template
t = cfg.getTemplate(tplname);
} else {
//un template di outline è stato specificato: il template da caricare è quindi sempre l'outline...
//if an outline template has been specified, load the outline...
t = cfg.getTemplate(outline_name);
//...e il template specifico per questa pagina viene indicato all'outline tramite una variabile content_tpl
//...and pass the requested template name to the outline using the content_tpl variable
localdatamodel.put("content_tpl", tplname);
//si suppone che l'outline includa questo secondo template
//we suppose that the outline template includes this second template somewhere
}
//associamo i dati al template e lo mandiamo in output
//add the data to the template and output the result
t.process(localdatamodel, out);
} catch (IOException e) {
throw new TemplateManagerException("Template error: " + e.getMessage(), e);
} catch (TemplateException e) {
throw new TemplateManagerException("Template error: " + e.getMessage(), e);
}
}
//questa versione di activate accetta un modello dati esplicito
//this activate method gets an explicit data model
public void activate(String tplname, Map datamodel, HttpServletResponse response) throws TemplateManagerException {
//impostiamo il content type, se specificato dall'utente, o usiamo il default
//set the output content type, if user-specified, or use the default
String contentType = (String) datamodel.get("contentType");
if (contentType == null) {
contentType = "text/html";
}
response.setContentType(contentType);
//impostiamo il tipo di output: in questo modo freemarker abiliterà il necessario escaping
//set the output format, so that freemarker will enable the corresponding escaping
switch (contentType) {
case "text/html":
cfg.setOutputFormat(HTMLOutputFormat.INSTANCE);
break;
case "text/xml":
case "application/xml":
cfg.setOutputFormat(XMLOutputFormat.INSTANCE);
break;
case "application/json":
cfg.setOutputFormat(JSONOutputFormat.INSTANCE);
break;
default:
break;
}
//impostiamo l'encoding, se specificato dall'utente, o usiamo il default
//set the output encoding, if user-specified, or use the default
String encoding = (String) datamodel.get("encoding");
if (encoding == null) {
encoding = cfg.getOutputEncoding();
}
response.setCharacterEncoding(encoding);
try {
process(tplname, datamodel, response.getWriter());
} catch (IOException ex) {
throw new TemplateManagerException("Template error: " + ex.getMessage(), ex);
}
}
//questa versione di activate estrae un modello dati dagli attributi della request
//this activate method extracts the data model from the request attributes
public void activate(String tplname, HttpServletRequest request, HttpServletResponse response) throws TemplateManagerException {
Map datamodel = getRequestDataModel(request);
activate(tplname, datamodel, response);
}
//questa versione di activate può essere usata per generare output non diretto verso il browser, ad esempio
//su un file
//this activate method can be used to generate output and save it to a file
public void activate(String tplname, Map datamodel, OutputStream out) throws TemplateManagerException {
//impostiamo l'encoding, se specificato dall'utente, o usiamo il default
String encoding = (String) datamodel.get("encoding");
if (encoding == null) {
encoding = cfg.getOutputEncoding();
}
try {
//notare la gestione dell'encoding, che viene invece eseguita implicitamente tramite il setContentType nel contesto servlet
//note how we set the output encoding, which is usually handled via setContentType when the output is sent to a browser
process(tplname, datamodel, new OutputStreamWriter(out, encoding));
} catch (UnsupportedEncodingException ex) {
throw new TemplateManagerException("Template error: " + ex.getMessage(), ex);
}
}
}
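The outline mechanism implemented in `process` above assumes that the outline template includes the page template through the `content_tpl` variable. A minimal outline sketch follows (hypothetical markup; FreeMarker's `<#include>` accepts a string expression, so `content_tpl` can be passed directly, and `compiled_on` renders as a datetime because of the `setDefaultDateType` call in `init`):

```ftl
<html>
  <body>
    <#-- shared chrome rendered for every page -->
    <div id="header">Site header</div>
    <#-- page-specific template selected by TemplateResult.process -->
    <#include content_tpl>
    <div id="footer">compiled on ${compiled_on}</div>
  </body>
</html>
```

Every data model entry (request attributes, the `defaults` hash from web.xml) is visible to both the outline and the included page template.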
|
Voltage-Gated Channels as Causative Agents for Epilepsies Problem statement: Epilepsy is a common neurological disorder that afflicts 1-2% of the general population worldwide. It encompasses a variety of disorders with seizures. Approach: Idiopathic epilepsies were defined as a heterogeneous group of seizure disorders that show no underlying cause.Voltage-gated ion channels defect were recognized etiology of epilepsy in the central nervous system. The aim of this article was to provide an update on voltage-gated channels and their mutation as causative agents for epilepsies. We described the structures of the voltage-gated channels, discuss their current genetic studies, and then review the effects of voltage-gated channels as causative agents for epilepsies. Results: Channels control the flow of ions in and out of the cell causing depolarization and hyper polarization of the cell. Voltage-gated channels were classified into four types: Sodium, potassium calcium ands chloride. Voltage-gated channels were macromolecular protein complexes within the lipid membrane. They were divided into subunits. Each subunit had a specific function and was encoded by more than one gen. Conclusion: Current genetic studies of idiopathic epilepsies show the importance of genetic influence on Voltage-gated channels. Different genes may regulate a function in a channel; the channel defect was directly responsible for neuronal hyper excitability and seizures. INTRODUCTION Epilepsy is defined as a group of diseases caused by a non-con-trolled discharge of neurons of either the whole cortex (generalized epilepsies) or localized brain areas (partial epilepsies) that show no underlying cause other than a possible inherited predisposition. The genetic basis for two idiopathic epilepsies has now been pinpointed to specific ion channel proteins for two different potassium-channel genes (KCNQ2, MIM 602235 and KCNQ3, MIM 602232) or sodium channel subunits (SCN2A, OMIM 601219). 
Voltages gated Channels are membranous structures formed by aggregated proteins and contain aqueous central pores that allow the passage of ions. Channels control the flow of ions in and out of the cell causing depolarization and hyper polarization of the cell. Neurotoxins selectively inactivate different sites of the ion channel thus allowing both the identification of channel components and the determination of their functions. In this review, we consider reports that focus on structure and function of the voltage-gated channel proteins and their mutation. MATERIALS AND METHODS Voltage-gated sodium channels: The voltage-gated sodium channels are integral membrane proteins essential for the generation and propagation of action potentials in excitable tissues.They are usually made up of. -1 and -2 subunits. The -subunit is the major part of channel pore, containing four homologous domains. Each domain contains six helix transmembrane segment (S1-S6). The S4 segment is rich of positive charge amino acid residues, acting as the channel-activating electrical sensor receptor. -1 and -2 are both auxiliary subunits, which have the regulating function. The gene mutation of channels can cause the structure and function abnormality of corresponding channels' protein, which cause the abnormality of neuron excitability and the epilepsy. There are two GEFS+-associated mutation (R1648H, R1657C) affecting the S4 segment of domain. The R1648H mutation affects a positive residue in the middle of D4/S4 and exhibit similar slope conductance but had an increased probability of late reopening and a sub fraction of channels with prolonged open times. while R1657C affects the innermost positive residue in this voltage-sensing segment and exhibits conductance, open probability, mean open time and latency to first opening similar to WT channels but reduced whole cell current density, suggesting decreased number of functional channels at the plasma membrane. 
The idiopathic epilepsy syndrome to be mapped from a single pedigree, Generalized epilepsy with febrile seizures plus (GEFS+), was reported exactly ten years ago by and was a classic example of genetic heterogeneity with so far at least 4 mutations in different gene loci producing the same phenotype and the long-awaited cloning of this gene eloquently illustrates how genetic analysis of epilepsy is contributing not only to our understanding of the disease but also to the basic molecular neurobiology of the brain. GEFS syndrome is caused by mutation of the sodium channel b1 subunit gene located on chromosome 19q13.1. They elegantly demonstrated in Xenopus laevis oocytes that this mutation interferes with the ability of the channel b-1 subunit to modulate gating kinetics, possibly leading to membrane hyper excitability. Baulac et al. have identified a family with with GEFS1, by linkage analysis, that the affected gene map locus was in the region of2q21-q33. Escayg et al. describe two separate families and found the abnormality of the gene map locus on chromosome 2q24. Another family with GEFS1 was found mapped to chromosome 2q23-24. These genes encode different isoforms of the subunit of the sodium channel. Other families have been studied and other gene loci have been detected. The human b1 mutant subunit prolongs neuronal depolarization under steadystate conditions when co-expressed in vitro with a rat brain sodium channel a subunit RBII; however, there is still little insight into how the sodium channel defect gives rise to the phenotypically diverse seizure patterns seen within a single GEFS+ pedigree. The amino acid exchange resulting from the mutation interferes with the ability of the subunit to modulate the channel-gating kinetics of Na_-channel a subunit, which is consistent with loss of function. 
Seizures in some affected individuals also occurred during a febrile episode, but most of these events were found to persist beyond the age of 6 years, which is the commonly used diagnostic cut-off for the clinical syndrome classified as febrile seizures. In febrile seizures, 90% of cases show seizures in the first 3 months of life and less than 10% develop afebrile seizures at a later age A new candidate sodium channel gene for epilepsy was also proposed after the discovery that it is selectively expressed in the limbic system of the brain, giving new meaning to the term 'positional cloning'. The mRNA for sodium channel SCN5A gene, which is localized to chromosome 3p24, was expressed in the brain piriform cortex and amygdala using in situ hybridization and PCR techniques. These limbic networks have been long known to possess the lowest threshold for epileptogenesis of any brain region. Voltage-gated potassium channels: Voltagedependent potassium channel is one of the most important ionic pores for generation and propagation of the action potential. Since specific genes coding for this channel is expressed in the central nervous system, it could be expected that a mutation in these genes may be at the origin of unbalance between excitation and inhibition and thus could cause epilepsy. Voltage-gated potassium (K+) channels represent the most heterogeneous class of ion channels with respect to kinetic properties, regulation and pharmacology. In neuronal cells, voltage-gated K+channels regulate excitability by controlling action potential duration, subthreshold electrical properties and responsiveness to synaptic inputs. Voltage-gated potassium channels are multi-subunit proteins with the core channel consisting of a tetramer of a-subunits, which surround a K+selective pore. Current understanding of the mechanisms that govern K+ channel assembly is incomplete. So far, two types of domains involved in K+ channel assembly have been described. 
In Shakerrelated K+ channels an aminoterminal domain (T1) was found important for channel assembly. In contrast, carboxy-terminal sequences are required for assembly of functional ether-a`-go-go (eag) and KCNQ(Kv7) channels.A potassium channel mutation that is linked with epilepsy provides a partially penetrant human epilepsy phenotype with variable regional brain excitability alterations. In Leppert et al., a large pedigree displaying a variety of seizure phenotypes, was linked to a point mutation of the potassium channel EBN1 gene located on chromosome20q13.2 and EBN2 gene located on 8q24, leading to identification of the KCNQ2 and KCNQ3 ion channels. These proteins are similar to the 6TM domain KCNQ1 channel that is mutated in one variant of the cardiac long QT syndrome. The KCNQ2 and KCNQ3 are believed to interact as heterodimers expressed diffusely in brain and persist in adulthoodt, although seizures associated with BFNC typically disappear by 6 months of age. The question that why KCNQ2/ KCNQ3 mutation causes epileptic paroxysm only appears in the newborn time is still discussing. One kind of explanations is the brain in the developing period compares to the mature period is easier to have the convulsion. Another kind of possibility explanation is the presentation of KCNQ2/ KCNQ3 has the small difference in the brain growing process. At first several days or weeks after birth, KCNQ is on the dominant position in the central nervous system-----the KCNQ potassium channel possibly in the higher expression level while other voltage-gated potassium channels in the lower. In addition, the human brain possibly only in some specific time expresses KCNQ2 or KCNQ3. Co-expression of KCNQ2 and KCNQ3 leads to a large increase of the potassium current. KCNQ2 and 3 are thought to contribute synergistically to the formation of M-current. M-current regulates the subthreshold electric excitability of neurons. 
This implies that a slight impairment of the M-current converts the firing properties of neurons from phasic to tonic, without affecting any other electrophysiological properties such as the slow after-hyperpolarization. There is a new evidence of members of the Drosophila ether-a-go-go potassium channel gene subfamily for this clinical entity, called 'benign infantile epilepsy syndrome, may contribute to the mammalian Mcurrent, suggesting that there may be additional candidate K+ channel genes involved in epilepsy, one of which, KCNJ1O, exhibits a potentially important polymorphism with regard to fundamental aspects of seizure susceptibility. The KCNJ10 gene is located lq22-q23,found in almost mammalian and code potassium inwardly rectifying channel (Kir). Kir is possible cushioning potassium concentration of cerebrum neuroglia cell. Deletion of KCNJ1O as a seizure susceptibility gene that code for inward rectifier potassium ion channels imparts protection against seizures results in spontaneous seizures and increased seizure susceptibility. Recently, about thirty Q2 and three Q3 mutations have been discovered in families affected by BFNC. Those mutations whose functional consequences have been investigated cause a small (<25%) reduction in the maximal current carried by the Q2/Q3 channels; only two Q2 mutations caused a more dramatic current reduction, consistent with a dominant-negative effect. A large fraction of BFNC-causing mutations in Q2 are represented by insertions or deletions leading to changes in the primary sequence of the long cytosolic C terminus, where relevant sites have been detected for functional regulation. In fact, specific sequences within this region (the so-called "subunit interaction domain" or sid) dictate the specificity of KCNQ subunit assembly and provide sites where other signaling proteins such as calmodulin and protein kinases and kinase-anchoring proteins interact with KCNQ subunits and modulate channel activity. 
Voltage-gated calcium channels: Voltage-gated calcium channels are key mediators of calcium entry into neurons in response to membrane depolarization. Calcium influx via these channels mediates a number of essential neuronal responses, such as the activation of calcium-dependent enzymes, gene expression, the release of neurotransmitters from presynaptic sites and the regulation of neuronal excitability.Voltagegated calcium channels are heteromultimers composed of an a 1 subunit and three auxiliary subunits, a 2 -d, b and g. The a 1 subunit forms the ion pore and possesses gating functions and, in some cases, drug binding sites. Ten a 1 subunits have been identified, which, in turn, are associated with the activities of the six classes of calcium channels. L-type channels have a 1C (cardiac), a 1D (neuronal/endocrine), a 1S (skeletal muscle) and a 1F (retinal) subunits,and typically found on cell bodies where they participate, among other functions, in the activation of calcium-dependent enzymes and in calcium-dependent gene transcription events, Ntype channels have a 1B subunits,and produce inactivating currents that are selectively and potently inhibited by conotoxins GVIA and MVIIA; P-and Qtype channels concentrated at presynaptic nerve terminals where they are linked to the release of neurotransmitters and have a 1A subunits and T-type channels have a 1G, a 1H and a 1I subunits. In the context of neurotransmitter release, N-type and P/Q-type channels do not appear to be created equally, as N-type channels tend to support inhibitory neurotransmission, whereas the P/Q-type channels have more frequently been linked to the release of excitatory neurotransmitters but can also support inhibitory release. R-type channels rapidly inactivating and activate at somewhat more hyperpolarized potentials compared with the other HVA calcium channel subtypes. The a 1 subunits each have four homologous domains (I-IV) that are composed of six transmembrane helices. 
The fourth transmembrane helix of each domain contains the voltage-sensing function. The four α1 domains cluster in the membrane to form the ion pore. The principal pore-forming α1 subunits of calcium channels fall into three major classes: Cav1, Cav2 and Cav3. Among the Cav2 family, alternate splice isoforms of Cav2.1 encode P- and Q-type channels; the Cav3 family comprises three different types of T-type channels (Cav3.1, Cav3.2 and Cav3.3) with distinct kinetic properties. The β-subunit is localized intracellularly and is involved in the membrane trafficking of α1 subunits. The γ-subunit is a glycoprotein with four transmembrane segments. The α2 subunit is a highly glycosylated extracellular protein that is attached to the membrane-spanning δ-subunit by means of disulfide bonds. The α2 domain provides the structural support required for channel stimulation, while the δ domain modulates the voltage-dependent activation and steady-state inactivation of the channel. A growing body of evidence firmly establishes P/Q-type and T-type channels as important contributors to seizure genesis through modulation of neuronal properties that act to shape network function, whereas other types of calcium channels do not appear to contribute to the development of seizure activity. T-type channels: T-type channels have always been likely candidates because of their prominent presence in cortical and thalamic structures and their established physiological role in modulating neuronal firing. In the Genetic Absence Epilepsy Rat from Strasbourg (GAERS) model, an increase in T-type currents in reticular neurons has been reported after the second postnatal week. This increase is putatively mediated by elevated Cav3.2 expression in reticular neurons and, as the animal develops to exhibit absence-like seizures, is also accompanied by elevated expression of Cav3.1 in relay neurons.
The causal connection between increased T-type channel activity and the subsequent development of seizures remains to be established. It is conceivable that developmental causes involving modulation of the channels, their redistribution with other isoforms or alternate splicing may be contributing factors. Recent studies involving T-type (Cav3.1) knockout (KO) mice have provided additional evidence for the role of T-type channels in absence-like seizure episodes. Ablation of the Cav3.1 gene abolishes rebound spiking in dissociated adult thalamocortical neurons but does not alter their ability to fire tonically. A direct link between T-type channels and the generalized spike-wave epilepsies in humans was recently established. A number of missense mutations have been identified in the Cav3.2 calcium channel gene in patients diagnosed with childhood absence epilepsy and other forms of idiopathic generalized epilepsy. Several of these mutations were found to result in small changes in the gating characteristics of both rat and human Cav3.2 channels in a manner consistent with a gain of function. Specifically, some of the mutations resulted in a hyperpolarizing shift in the voltage dependence of activation, while others resulted in increased channel availability due to decreased steady-state inactivation. However, the majority of the mutations did not significantly affect the biophysical characteristics of the channel. A recent study has demonstrated that the CACNA1H (Cav3.2) T-type calcium channel gene can undergo extensive alternative splicing, a possible mechanism by which these mutations could affect seizure threshold in certain neurons. P/Q-type calcium channels: Evidence from rat and mouse models of absence epilepsy, knockout mice and characterization of the functional effects of mutations found in patients all indicates that inhibition of P/Q-type channel activity somehow alters neurons and neuronal networks to produce seizure activity.
This may also apply to variants of ancillary subunits that can act to reduce P/Q-type channel function. There are several potential avenues by which reduced P/Q-type channel activity could affect network properties so as to give rise to seizure activity. First, P/Q-type channel defects preferentially affect excitatory synaptic transmission. Second, P/Q-type channel activity is directly linked to calcium-dependent gene transcription via the CREB (cAMP-response element binding protein) pathway; therefore, reduced P/Q-type channel activity may compromise appropriate gene regulation and expression. Finally, T-type calcium channel activity appears to be increased in at least four different mouse models of absence epilepsy (Cav2.1 KO, tg, lh, stg); it is therefore conceivable that the epileptic phenotypes associated with compromised P/Q-type channel function arise indirectly from enhanced neuronal excitability mediated by T-types. Voltage-gated Cl− channels: Voltage-gated Cl− channels are implicated in GABA(A) transmission, and mutations in these channels have been described in some families with juvenile myoclonic epilepsy, epilepsy with grand mal seizures on awakening or juvenile absence epilepsy. Hyperpolarization-activated cation channels have been implicated in spike-wave seizures and in hippocampal epileptiform discharges. The Cl− ionophore of the GABA(A) receptor is responsible for the rapid post-PDS hyperpolarization and has been implicated in epileptogenesis in both animals and humans; mutations in these receptors have been found in families with juvenile myoclonic epilepsy or generalised epilepsy with febrile seizures plus. Enhancement of GABA(A) inhibitory transmission is the primary mechanism of benzodiazepines and phenobarbital and is a mechanistic approach to the development of novel AEDs such as tiagabine or vigabatrin.
Though more and more ion channel mutations are being identified in genetic studies, their small incidence is still indicative of the presence of undiscovered mutations or other causative mechanisms. Understanding wild-type channel function during epileptic activity and analyzing the biophysical properties of mutant channels may also provide vital insights into the remaining epilepsies and reveal targets for future anti-epileptic drugs. RESULTS AND DISCUSSION Epilepsy is a disorder of recurrent spontaneous seizures affecting about 4% of individuals over their lifetimes, and it encompasses a variety of disorders with electroencephalographic paroxysms. In spite of a genetic component in the pathogenesis of epilepsy, the molecular mechanisms of this syndrome remain poorly understood. Recently, several paroxysmal neurological disorders were shown to be caused by abnormal ion channels. Focusing on idiopathic epilepsy, long-lasting changes in the expression levels of voltage-gated channels have been found to promote pathological brain activity. The discovery of the underlying channel defects in these idiopathic epilepsies has several practical implications for the practicing neurologist. First, the identification of specific ion channel defects has provided insights into the pathogenesis of seizures, has identified molecular targets for the rational design of therapeutic interventions, and can be used to classify patients more accurately into specific categories. This segregation is important for providing patients with a more accurate diagnosis. Recent research on epilepsy supports the view that at least some of the idiopathic epilepsies must be considered channelopathies. The channelopathy concept seems to explain key issues in epilepsy, i.e., neuronal hyperexcitability and dominant inheritance with variable penetrance. The imbalance between inhibitory and excitatory neurotransmission precipitates abnormally frequent electrical discharges in neurons.
Voltage-gated channels consist of several subunits and function as hetero-multimers. Coexistence of deficient subunits in such multimers induces a well-known genetic phenomenon, the dominant-negative effect. Voltage-gated sodium and potassium channels are the most common ionic pores for the generation and propagation of the action potential; it is less clear how mutations of calcium and chloride channels and nicotinic receptors cause epilepsy. Mutational analyses of voltage-gated sodium channel (SCN1A, SCN1B) and potassium channel (KCNQ2, KCNQ3) subunits in GEFS+ and BFNC suggest that allelic and non-allelic genetic heterogeneity is important in these two epileptic syndromes. As suggested by recent results, the causal gene remains unknown in most GEFS+ pedigrees. In the same way, other genes are probably involved in BFNC, since only six KCNQ2 mutations were observed in 23 BFNC probands. The major candidate genes for both of these benign familial epilepsies are the other sodium and potassium channel genes. Recently, a sodium channel polymorphism was shown to associate with anti-epileptic drug dosage. Interestingly, it affects splicing of sodium channels and alters their biophysical properties, which provides a possible mechanistic explanation for differences in anti-epileptic drug responsiveness or tolerability. CONCLUSION In conclusion, the elucidation of the Na+, K+, Cl− and Ca2+ channel defects of the epilepsies leads to a better understanding of their pathophysiology. The importance of ion channels as a cause of epilepsy was demonstrated by the identification of associations between epilepsy and mutations in genes coding for ion channel subunits. ACKNOWLEDGEMENT Special thanks to Hanan Ghazi Salem for her insightful comments and advice in the preparation of this review article.
Examining the morphological processes in the formation of Tupuri nominals The current work examines the morphological processes observable in the formation of nominals in Tupuri, a Niger-Congo language spoken in the south-west of Chad and in the north of the Republic of Cameroon. Structuralism was adopted as the theoretical approach in this paper. Forty native speakers of the l dialect, i.e. eight from each of the towns/villages Sere, Dawa, Mindaore, Lale, and Guwe, were randomly selected, and data were collected based on a Swadesh word list. The data revealed that the formation of nominals in the Tupuri language is characterized by prefixation, suffixation, total reduplication, partial reduplication, total modification, and partial modification, which include subtraction and neutralization. Furthermore, compounding is another process that characterizes the formation of nouns in the Tupuri language.
# Defined here so this module is self-contained; in the wider project this
# exception may live in a shared errors module.
class FunctionNotImplementedError(NotImplementedError):
    """Raised when a ControllerFunction is called before its internal
    function has been assigned."""


class ControllerFunction:
    """Controller functions are callable function proxy objects.

    Any calls are forwarded to the internal function, which may be
    undefined or dynamically changed. If a call is made when the
    internal function is undefined, a FunctionNotImplementedError is
    raised.
    """

    def __init__(self, name, func=None):
        # The name is needed to provide more helpful information upon
        # a FunctionNotImplementedError exception.
        self.name = name
        self.func = func

    def __call__(self, *args, **kwargs):
        if self.func is None:
            raise FunctionNotImplementedError(self.name)
        return self.func(*args, **kwargs)
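A short usage sketch of the proxy behaviour. The class and a minimal `FunctionNotImplementedError` are repeated inside the block so it runs standalone; the `start` name and its lambda are illustrative, not part of the original module.

```python
# Repeated here so the sketch runs standalone; in the real module these
# names already exist.
class FunctionNotImplementedError(Exception):
    pass

class ControllerFunction:
    def __init__(self, name, func=None):
        self.name = name
        self.func = func

    def __call__(self, *args, **kwargs):
        if self.func is None:
            raise FunctionNotImplementedError(self.name)
        return self.func(*args, **kwargs)

# The proxy can be handed out before any implementation exists...
start = ControllerFunction('start')
try:
    start()
except FunctionNotImplementedError as exc:
    print('not implemented yet:', exc)

# ...and rebinding .func later changes behaviour for every holder of the proxy.
start.func = lambda speed: 'started at {}'.format(speed)
print(start(10))  # prints: started at 10
```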
#!venv/bin/python
import logging
from functools import wraps
import werkzeug.routing
from flask import abort
from flask import redirect
from flask import Flask
from flask import g
from flask import jsonify
from flask import make_response
from flask_httpauth import HTTPBasicAuth
from pbkdf2 import crypt
from playhouse.flask_utils import FlaskDB
from adapter import Adapter
from model import ALL_MODELS
from model import Config
from model import Device
from model import Group
from model import Message
from model import Publication
from model import Subscription
from model import User
from schema import ConfigSchema
from schema import DeviceSchema
from schema import GroupSchema
from schema import MessageSchema
from schema import PublicationSchema
from schema import SubscriptionSchema
from schema import UserSchema
from view import View
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s.%(msecs)d %(levelname)s %(threadName)s(%(thread)d) %(module)s.%(funcName)s#%(lineno)d %(message)s', datefmt='%d.%m.%Y %H:%M:%S')
app = Flask(__name__, static_url_path = '')
database = FlaskDB(app, 'sqlite:///peewee.db')
auth = HTTPBasicAuth()
class AuthExt:
    @classmethod
    def save(cls, user, **kwargs):
        g.current_user = user if user else None
        admin_group = Group.select().where(Group.name == 'admin').get()
        g.is_admin = admin_group.is_member(g.current_user)

    @classmethod
    def is_admin(cls, *args, **kwargs):
        logging.debug('args={}, kwargs={}, g={}'.format(args, kwargs, g))
        try:
            return g.is_admin
        except:
            logging.exception('is_admin missing')
        return False

    @classmethod
    def is_parent_user(cls, *args, **kwargs):
        logging.debug('args={}, kwargs={}, g={}'.format(args, kwargs, g))
        try:
            if int(kwargs['parent']) == g.current_user.id:
                return True
        except KeyError:
            logging.exception('parent not specified')
        except TypeError:
            logging.exception('invalid parent type')
        except:
            logging.exception('parent not user')
        return False

    @staticmethod
    def admin_required(f):
        @wraps(f)
        def decorated(*args, **kwargs):
            logging.debug('args={}, kwargs={}'.format(args, kwargs))
            if AuthExt.is_admin(*args, **kwargs):
                return f(*args, **kwargs)
            return unauthorized()
        return decorated

    @staticmethod
    def admin_or_parent(f):
        @wraps(f)
        def decorated(*args, **kwargs):
            logging.debug('args={}, kwargs={}'.format(args, kwargs))
            if AuthExt.is_admin(*args, **kwargs):
                return f(*args, **kwargs)
            if AuthExt.is_parent_user(*args, **kwargs):
                return f(*args, **kwargs)
            return unauthorized()
        return decorated
@auth.verify_password
def verify_password(username, alleged_password):
    try:
        user = User.select().where(User.username == username).get()
        verification = (crypt(alleged_password, user.password) == user.password)
        logging.debug('verify_password: username={}, encrypted_password={}, alleged_password={}, verification={}'.format(username, user.password, alleged_password, verification))
        if verification:
            AuthExt.save(user=user)
        return verification
    except Exception:
        logging.exception('verify_password')
    return False
@auth.error_handler
def unauthorized():
    # return 403 instead of 401 to prevent browsers from displaying the default auth dialog
    return make_response(jsonify({'error': 'Unauthorized access'}), 403)


@app.errorhandler(400)
def bad_request(error):
    return make_response(jsonify({'error': 'Bad request'}), 400)


@app.errorhandler(404)
def not_found(error):
    return make_response(jsonify({'error': 'Not found'}), 404)
@app.route('/')
@auth.login_required
def index():
    return redirect('/index.html')
def prepare_routes(base_url='/api/v1.0/'):
    View.decorators = [AuthExt.admin_or_parent, auth.login_required]
    # Admin-only.
    View.add(app, base_url=[base_url + 'configs'], endpoint='configs', adapter=Adapter(model_cls=Config), schema_cls=ConfigSchema)
    View.add(app, base_url=[base_url + 'groups'], endpoint='groups', adapter=Adapter(model_cls=Group), schema_cls=GroupSchema)
    View.add(app, base_url=[base_url + 'users'], endpoint='users', adapter=Adapter(model_cls=User), schema_cls=UserSchema)
    View.add(app, base_url=[base_url + 'users/<string:parent>/devices', base_url + 'devices'], endpoint='devices', adapter=Adapter(model_cls=Device, parent_cls=User), schema_cls=DeviceSchema)
    View.add(app, base_url=base_url + 'users/<string:user_id>/devices/<string:parent>/messages', endpoint='devices.messages', adapter=Adapter(model_cls=Message, parent_cls=Device), schema_cls=MessageSchema)
    View.add(app, base_url=[base_url + 'users/<string:parent>/publications', base_url + 'publications'], endpoint='publications', adapter=Adapter(model_cls=Publication, parent_cls=User), schema_cls=PublicationSchema)
    View.add(app, base_url=base_url + 'users/<string:user_id>/publications/<string:parent>/subscriptions', endpoint='publication.subscriptions', adapter=Adapter(model_cls=Subscription, parent_cls=Publication), schema_cls=SubscriptionSchema)
    View.add(app, base_url=base_url + 'users/<string:user_id>/publications/<string:parent>/messages', endpoint='publication.messages', adapter=Adapter(model_cls=Message, parent_cls=Publication), schema_cls=MessageSchema)
    View.add(app, base_url=[base_url + 'users/<string:parent>/subscriptions', base_url + 'subscriptions'], endpoint='subscriptions', adapter=Adapter(model_cls=Subscription, parent_cls=User), schema_cls=SubscriptionSchema)
    #View.add(app, base_url=base_url + 'users/<string:user_id>/subscriptions/<string:parent>/messages', endpoint='subscription.messages', adapter=Adapter(model_cls=Message, parent_cls=Subscription), schema_cls=MessageSchema)
    View.add(app, base_url=[base_url + 'users/<string:parent>/messages', base_url + 'messages'], endpoint='messages', adapter=Adapter(model_cls=Message, parent_cls=User), schema_cls=MessageSchema)
if __name__ == '__main__':
    prepare_routes()
    app.run(debug=True)
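The check in `verify_password` above re-derives the hash from the stored string and compares; the same pattern can be sketched with the standard library alone. The `make_hash`/`verify` helpers and their parameters (SHA-256, 100,000 iterations, `$`-separated storage format) are illustrative assumptions, not the format `pbkdf2.crypt` actually produces.

```python
import binascii
import hashlib
import hmac
import os

def make_hash(password, salt=None, iterations=100_000):
    # Store salt and iteration count alongside the derived key so that
    # verification can re-derive the key with identical parameters.
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return '{}${}${}'.format(iterations,
                             binascii.hexlify(salt).decode(),
                             binascii.hexlify(dk).decode())

def verify(password, stored):
    iterations, salt_hex, dk_hex = stored.split('$')
    dk = hashlib.pbkdf2_hmac('sha256', password.encode(),
                             binascii.unhexlify(salt_hex), int(iterations))
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(binascii.hexlify(dk).decode(), dk_hex)
```

As in the Flask handler, the stored string is both the salt source and the comparison target, so no separate salt column is needed.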
|
In vitro antiproliferative effects of sulforaphane on the human colon cancer cell line SW620. The isothiocyanate sulforaphane (SF) has been reported to possess chemopreventive efficacy towards various malignancies including colon cancer. Here, we investigated the antiproliferative and pro-apoptotic effects of SF on the colon cancer cell line SW620. We found that SF at concentrations of 10-50 microM inhibits viability and proliferation of SW620 cells in a time- and dose-dependent manner, with IC50 values of 26 microM (24 h), 24.4 microM (48 h) and 18 microM (72 h). In the same cells, SF also caused DNA damage and chromatin condensation after 24 h and 48 h, as revealed by phospho-H2A.X western blot analysis and DAPI staining of nuclei. These changes were accompanied by elevated activity of caspase 3, although only at SF concentrations of 20 microM and above. Together, these results indicate that SF suppresses the growth of human metastatic colonocytes and induces apoptotic cell death.
#pragma once

#include <string>

#include "../pe_resources.h"
#include "borland_types.h"

class BorlandVersion
{
    public:
        BorlandVersion(PackageInfoHeader* packageinfo, const PEResources::ResourceItem& resourceitem, u64 size);
        bool isDelphi() const;
        bool isCpp() const;
        std::string getSignature() const;

    private:
        bool contains(const std::string& s) const;

    private:
        PackageInfoHeader* m_packageinfo;
        PEResources::ResourceItem m_resourceitem;
        u64 m_size;
};
|
Research on Professional Skills Associations Helping to Cultivate Applied Talents in Local Colleges and Universities Taking the training of applied talents as its starting point, this paper points out that the target orientation of applied talent training in local colleges and universities should be constructed from the aspects of knowledge, ability and quality. Taking the School of Mathematics and Statistics of Zhaoqing University as an example, it discusses a training mode for applied talents that relies on the platform of professional associations, expounds the specific implementation schemes and measures, summarizes the results of this work, and provides ideas and a reference for the training of applied talents in local colleges and universities.
Nairobi, Kenya - At least three people have been killed in violence gripping Kenya's contentious presidential election rerun, which has been boycotted by the country's opposition leader.
Clashes between police and protesters began quickly after polls opened at 6am local time (03:00 GMT). More than 19 million voters are registered to cast their vote in the election.
Voting officially closed at 5pm (14:00 GMT), but in some areas voters who were lining up before the closing time were allowed to cast their ballots. Voting in some areas has reportedly been postponed to Saturday.
Al Jazeera's Catherine Soi confirmed a man had been shot in Kisumu, the stronghold of opposition leader Raila Odinga, as police and protesters clashed. The victim was a 19-year-old male who had been shot in the thigh and bled profusely, Soi said.
Our correspondent reported that a second fatality had been brought to the mortuary by rescue workers. She also reported that a third person was confirmed dead, although the circumstances leading to that person's death are still unknown.
Several others were injured, Soi said, confirming that police had fired tear gas.
The rerun comes after the Supreme Court nullified the August 8 presidential poll results because of "irregularities and illegalities" in the voting process.
At least 67 people were killed in the post-election violence following the August vote, according to Amnesty and Human Rights Watch.
President Uhuru Kenyatta, 55, is seeking a second and final five-year term in office. He won 54 percent of the votes in the nullified poll.
His main challenger, Odinga, who received almost 45 percent of the votes in August's election, is boycotting Thursday's vote.
Demonstrations in the opposition stronghold of Kibera in the capital also turned violent, with Odinga supporters burning tyres and barricading roads as police fired tear gas and live ammunition at protesters.
'No point in voting'
At least one polling station in the neighbourhood was closed.
"There is no point in voting. It is illegal what they are doing. They will steal again. This is sham. I'm not going to waste my time in voting," Alfred Otieno, a young stall owner in Kibera, told Al Jazeera.
In Kisumu, Odinga's stronghold, some polling stations were open, but there was no sign of voters.
"Do not participate in any way in the sham election. Convince your friends, neighbours and everyone else to not participate," Odinga told his supporters at a rally in Nairobi on Wednesday.
"We advise Kenyans who value democracy and justice to hold vigilance prayers or stay at home."
Odinga, 72, said opposition demands to reform the electoral body following the court ruling had not been met.
After successfully challenging the results of the August poll, Odinga has called on his supporters to stay away from Thursday's vote.
Although opposition supporters heeded Odinga's call, queues had already formed at some polling stations in the Kenyan capital on Thursday before sunrise.
Amid tight security, Elastus Maina, a 40-year-old businessman, told Al Jazeera: "I queued up from 5am. It took me less than five minutes to vote. Very smooth. I'm happy with how it is going. I'm voting again because it is my democratic right. I never boycotted an election and will not do that now."
In the government stronghold of Banana, Kiambu county, people formed lines at a polling station to cast their vote, but turnout was lower than the August election.
None of the six other minor candidates competing against Kenyatta received more than one percent in the previous poll.
Esther Muhindi, a 43-year-old HR manager, said she was pleased with the process on Thursday.
"I voted because the court told us to. I hope there is no more repeats. This time the queues were moving fast," she told Al Jazeera.
Thabo Mbeki, former South African president and the head of the African Union's monitoring mission to Kenya, said that observers were keeping a close watch on developments.
"We have been to two polling stations so far. In the two places we have been, there are lots of people who have turned up to vote," he said in Nairobi. "We need to have a look at more stations to see how the population is responding."
Kenyatta is the son of the country's founding father while Odinga, a former prime minister, is the son of the country's first vice president.
On Wednesday, the Supreme Court was unable to raise a quorum of judges to decide on whether the poll should go ahead.
The petition was brought by three human rights activists who claimed the electoral commission is not ready to hold a credible poll.
The East African country has witnessed almost daily street protests following the announcement of August's presidential election result. |
package de.tub.dima.babelfish.typesytem.valueTypes.number.integer;

import com.oracle.truffle.api.library.ExportLibrary;
import com.oracle.truffle.api.library.ExportMessage;
import de.tub.dima.babelfish.storage.UnsafeUtils;

@ExportLibrary(value = IntLibrary.class)
public class Lazy_Int_64 extends Int_64 {

    private final long address;

    public Lazy_Int_64(long address) {
        this.address = address;
    }

    public long asLong() {
        return UnsafeUtils.getLong(address);
    }

    @ExportMessage
    public long asLongValue() {
        return UnsafeUtils.getLong(address);
    }
}
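The lazy pattern above (store only an address, decode the 64-bit value on every access) can be illustrated with a standard-library sketch. `LazyLong`, its `ByteBuffer` backing, and the offsets used are hypothetical stand-ins for `UnsafeUtils` and raw memory addresses.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the Lazy_Int_64 idea: keep an offset into
// backing storage and re-read the value on demand instead of caching it.
public class LazyLong {
    private final ByteBuffer storage;
    private final int offset;

    public LazyLong(ByteBuffer storage, int offset) {
        this.storage = storage;
        this.offset = offset;
    }

    public long asLong() {
        // No caching: every call re-reads the underlying storage,
        // mirroring Lazy_Int_64.asLong() above.
        return storage.getLong(offset);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(8, 42L);
        LazyLong v = new LazyLong(buf, 8);
        System.out.println(v.asLong()); // prints 42
    }
}
```

Because the value is re-read on each call, mutations of the backing storage are visible through the wrapper, which is exactly what a lazy address-based view provides.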
|
def _person_name_annotator(self, user_utterance, slots=None):
    tokens = user_utterance.get_tokens()
    if not slots:
        slots = [Slots.ACTORS.value, Slots.DIRECTORS.value]
        person_names = self.person_names
    else:
        slots = [slots]
        # Index with the single slot name, not the wrapping list.
        person_names = self.slot_values[slots[0]]
    params = []
    for ngram_size in range(self.ngram_size['person'], 0, -1):
        for gram_list in ngrams(tokens, ngram_size):
            gram = sum(gram_list).lemma
            for _, lem_value in person_names.items():
                if f' {gram} ' in f' {lem_value} ' and \
                        gram not in self.stop_words:
                    for slot in slots:
                        if gram in self.slot_values[slot].values():
                            annotation = SemanticAnnotation.from_span(
                                sum(gram_list), AnnotationType.NAMED_ENTITY,
                                EntityType.PERSON)
                            params.append(
                                ItemConstraint(slot, Operator.EQ, gram,
                                               annotation))
                            break
    if len(params) > 0:
        return params
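The matching strategy above (try the longest n-grams first, and test each n-gram as a space-padded substring of the lemmatized name values) can be sketched standalone. Here `ngrams` is a plain sliding window and `match_names` is an illustrative simplification, not the project's annotator.

```python
def ngrams(tokens, n):
    # Plain sliding window over the token list.
    return [tokens[i:i + n] for i in range(len(tokens) - n + 1)]

def match_names(tokens, names, stop_words=(), max_n=3):
    """Return (name_value, matched_phrase) pairs, longest n-grams first."""
    matches = []
    for n in range(max_n, 0, -1):
        for gram_list in ngrams(tokens, n):
            gram = ' '.join(gram_list)
            if gram in stop_words:
                continue
            for value in names:
                # Pad with spaces so 'ann' does not match inside 'hannah'.
                if f' {gram} ' in f' {value} ':
                    matches.append((value, gram))
                    break
    return matches

print(match_names('movies with tom hanks please'.split(),
                  ['tom hanks', 'tom cruise'],
                  stop_words={'with'}))
```

The space-padding trick is the same one the annotator uses: it turns a substring test into a whole-word test without tokenizing the name values again.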
#include "datastorage.hpp"
#include "application.hpp"
#include "toolbox.hpp"
DataStorage::DataStorage()
{
}

/**
 * Loads the data that is required by the game for
 * displaying the game before it officially starts.
 * This includes loading screens, loading music
 * and similar.
 * Returns 0 if successful
 */
int DataStorage::loadInitialData()
{
    loadTextureAndStoreSprite("logo", "data/2D/engine_logo.png");
    loadSound("biisi", "data/audio/biisi.ogg");
    return 0;
}

/**
 * Load all the game's content
 * Returns 0 if successful
 */
int DataStorage::loadAllData()
{
    return 0;
}

/**
 * Generates sf::Sprite, sf::Image and sf::Texture of size s
 * All three are stored in datastorage containers
 * Returns SpritePtr to the newly generated sprite
 */
SpritePtr DataStorage::generateSpriteTriplet(std::string name, sf::Vector2i s)
{
    SpritePtr sprite = SpritePtr(new sf::Sprite());
    storeSprite(name, sprite);
    TexturePtr texture = TexturePtr(new sf::Texture());
    texture->create(s.x, s.y);
    storeTexture(name, texture);
    ImagePtr img = ImagePtr(new sf::Image());
    img->create(s.x, s.y);
    storeImage(name, img);
    return sprite;
}
/**
 * Loads a new texture from the given path and stores the produced Texture in the textureContainer as a shared_ptr
 * The texture is stored based on the given name
 * Returns -1 if error, 0 if success
 */
int DataStorage::loadTexture(std::string name, std::string path)
{
    std::shared_ptr<sf::Texture> texture = std::shared_ptr<sf::Texture>(new sf::Texture());
    if (!texture->loadFromFile(path))
    {
        std::cout << "-DataStorage: Error loading file " << path << std::endl;
        return -1;
    }
    textureContainer[name] = texture;
    std::cout << "+DataStorage: Successfully loaded " << path << " as " << name << std::endl;
    return 0;
}

/**
 * Loads a new texture from the given path and stores the produced Texture in the textureContainer as a shared_ptr
 * Also creates a new sprite with the original dimensions from this texture
 * Returns -1 if error, 0 if success
 */
int DataStorage::loadTextureAndStoreSprite(std::string name, std::string path)
{
    TexturePtr texture(new sf::Texture());
    if (!texture->loadFromFile(path))
    {
        std::cout << "-DataStorage: Error loading file: " << path << std::endl;
        return -1;
    }
    textureContainer[name] = texture;
    SpritePtr sp(new sf::Sprite(*texture));
    storeSprite(name, sp);
    std::cout << "+DataStorage: Successfully loaded " << path << " as " << name << std::endl;
    return 0;
}
/**
 * Stores an existing sprite into the spriteContainer
 */
int DataStorage::storeSprite(std::string name, std::shared_ptr<sf::Sprite> s)
{
    spriteContainer[name] = s;
    return 0;
}

/**
 * Stores an existing image in the imageContainer
 */
int DataStorage::storeImage(std::string name, std::shared_ptr<sf::Image> i)
{
    imageContainer[name] = i;
    return 0;
}

/**
 * Stores an existing texture in the textureContainer
 */
int DataStorage::storeTexture(std::string name, std::shared_ptr<sf::Texture> t)
{
    textureContainer[name] = t;
    return 0;
}

/**
 * Loads a new texture from the filepath, and stores it in the texture map by name
 * Returns a SpritePtr containing the entire texture
 */
SpritePtr DataStorage::loadAndGiveSprite(std::string name, std::string filepath)
{
    loadTextureAndStoreSprite(name, filepath);
    return getSprite(name);
}

/**
 * Writes an image to the disk if the image exists
 */
void DataStorage::writeImageToDisk(std::string name)
{
    auto img = getImage(name);
    if (img == nullptr)
    {
        std::cout << "!DataStorage: Unable to write file to disk: not found: " << name << std::endl;
        return;
    }
    if (img->saveToFile(app.getToolbox()->combineStringAndString(name, ".png")))
    {
        std::cout << "+DataStorage: Wrote " << name << " to disk" << std::endl;
    }
    else
    {
        std::cout << "!DataStorage: Error writing file to disk: " << name << std::endl;
    }
}
/**
 * Creates a new sprite and then returns a shared_ptr to it
 * params:
 * name: The name that will be given to the created sprite. Also stored in spriteContainer by this name
 * textureName: The texture that will be used for this, searched from textureContainer
 * sizeX, sizeY: The area of the rectangle that will be used out of the texture. You might want to use 100x100 of a 256x256 texture
 * coordX, coordY: The offset in the texture. You may want to take a 100x100 region out of a 256x256 texture, but not from the top left corner
 * Returns a pointer to the newly created sprite, or nullptr if failed
 */
SpritePtr DataStorage::createAndGiveSprite(std::string name, std::string textureName, int sizeX, int sizeY, int coordX, int coordY)
{
    sf::IntRect subRect;
    subRect.left = coordX;
    subRect.top = coordY;
    subRect.width = sizeX;
    subRect.height = sizeY;
    TexturePtr texturePointer = getTexture(textureName);
    if (texturePointer == nullptr)
    {
        std::cout << "-DataStorage: Cannot create sprite. Desired texture not found in memory: " << textureName << std::endl;
        return nullptr;
    }
    SpritePtr sprite(new sf::Sprite((*texturePointer), subRect));
    storeSprite(name, sprite);
    return sprite;
}

/**
 * Loads a new sound by filename. Also creates a soundbuffer object and stores it
 * Returns -1 if failure
 *
 * Does not support mp3. ogg is preferred over other formats
 */
int DataStorage::loadSound(std::string name, std::string path)
{
    SoundBufferPtr buffer(new sf::SoundBuffer());
    if (!(*buffer).loadFromFile(path))
    {
        std::cout << "-DataStorage: Error loading audio file " << path << std::endl;
        return -1;
    }
    soundBufferContainer[name] = buffer;
    std::shared_ptr<sf::Sound> sound = std::shared_ptr<sf::Sound>(new sf::Sound());
    sound->setBuffer((*buffer));
    soundContainer[name] = sound;
    std::cout << "+DataStorage: Successfully loaded " << path << " as " << name << std::endl;
    return 0;
}
/**
 * Returns a shared_ptr<sf::Sprite> to a loaded sprite
 * or nullptr if not found
 */
SpritePtr DataStorage::getSprite(std::string name)
{
    std::map<std::string, SpritePtr>::iterator iter;
    iter = spriteContainer.find(name);
    if (iter == spriteContainer.end())
    {
        std::cout << "-DataStorage: Can't find requested sprite in map: " << name << std::endl;
        return nullptr;
    }
    return iter->second;
}

/**
 * Returns a shared_ptr<sf::Texture> to a loaded texture
 * or nullptr if not found
 */
TexturePtr DataStorage::getTexture(std::string name)
{
    std::map<std::string, TexturePtr>::iterator iter;
    iter = textureContainer.find(name);
    if (iter == textureContainer.end())
    {
        std::cout << "-DataStorage: Can't find requested texture in map: " << name << std::endl;
        return nullptr;
    }
    return iter->second;
}

/**
 * Returns a shared_ptr<sf::Sound> to a loaded sound
 * or nullptr if not found
 */
SoundPtr DataStorage::getSound(std::string name)
{
    std::map<std::string, SoundPtr>::iterator iter;
    iter = soundContainer.find(name);
    if (iter == soundContainer.end())
    {
        std::cout << "-DataStorage: Can't find requested soundfile in map: " << name << std::endl;
        return nullptr;
    }
    return iter->second;
}

/**
 * Returns a shared_ptr<sf::Image> to a loaded image
 * or nullptr if not found
 */
ImagePtr DataStorage::getImage(std::string name)
{
    auto iter = imageContainer.find(name);
    if (iter == imageContainer.end())
    {
        std::cout << "-DataStorage: Can't find image in map: " << name << std::endl;
        return nullptr;
    }
    return iter->second;
}

void DataStorage::deleteImage(std::string name)
{
    auto iter = imageContainer.find(name);
    if (iter != imageContainer.end())
        imageContainer.erase(iter);
}

void DataStorage::deleteTexture(std::string name)
{
    auto iter = textureContainer.find(name);
    if (iter != textureContainer.end())
        textureContainer.erase(iter);
}

void DataStorage::deleteSprite(std::string name)
{
    auto iter = spriteContainer.find(name);
    if (iter != spriteContainer.end())
        spriteContainer.erase(iter);
}
|
/*
* Copyright (c) 2016 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file in the root of the source tree. An additional
* intellectual property rights grant can be found in the file PATENTS.
* All contributing project authors may be found in the AUTHORS file in
* the root of the source tree.
*/
/*
* \file vp9_alt_ref_aq.h
*
* This file contains public interface for setting up adaptive segmentation
 * for altref frames. Go to alt_ref_aq_private.h for implementation details.
*/
#ifndef VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
#define VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
#include "vpx/vpx_integer.h"
// Where to disable segmentation
#define ALT_REF_AQ_LOW_BITRATE_BOUNDARY 150
// Last frame always has overall quality = 0,
// so it is questionable if I can process it
#define ALT_REF_AQ_APPLY_TO_LAST_FRAME 1
// If I should try to compare gain
// against segmentation overhead
#define ALT_REF_AQ_PROTECT_GAIN 0
// Threshold to disable segmentation
#define ALT_REF_AQ_PROTECT_GAIN_THRESH 0.5
#ifdef __cplusplus
extern "C" {
#endif
// Simple structure for storing images
struct MATX_8U {
int rows;
int cols;
int stride;
uint8_t *data;
};
struct VP9_COMP;
struct ALT_REF_AQ;
/*!\brief Constructor
*
* \return Instance of the class
*/
struct ALT_REF_AQ *vp9_alt_ref_aq_create(void);
/*!\brief Upload segmentation_map to self object
*
* \param self Instance of the class
* \param segmentation_map Segmentation map to upload
*/
void vp9_alt_ref_aq_upload_map(struct ALT_REF_AQ *const self,
const struct MATX_8U *segmentation_map);
/*!\brief Return pointer to the altref segmentation map
*
* \param self Instance of the class
* \param segmentation_overhead Segmentation overhead in bytes
* \param bandwidth Current frame bandwidth in bytes
*
* \return Boolean value to disable segmentation
*/
int vp9_alt_ref_aq_disable_if(const struct ALT_REF_AQ *self,
int segmentation_overhead, int bandwidth);
/*!\brief Set number of segments
*
* It is used for delta quantizer computations
* and thus it can be larger than
* maximum value of the segmentation map
*
* \param self Instance of the class
* \param nsegments Maximum number of segments
*/
void vp9_alt_ref_aq_set_nsegments(struct ALT_REF_AQ *const self, int nsegments);
/*!\brief Set up LOOKAHEAD_AQ segmentation mode
*
* Set up segmentation mode to LOOKAHEAD_AQ
* (expected future frames prediction
 * quality referring to the current frame).
*
* \param self Instance of the class
* \param cpi Encoder context
*/
void vp9_alt_ref_aq_setup_mode(struct ALT_REF_AQ *const self,
struct VP9_COMP *const cpi);
/*!\brief Set up LOOKAHEAD_AQ segmentation map and delta quantizers
*
* \param self Instance of the class
* \param cpi Encoder context
*/
void vp9_alt_ref_aq_setup_map(struct ALT_REF_AQ *const self,
struct VP9_COMP *const cpi);
/*!\brief Restore main segmentation map mode and reset the class variables
*
* \param self Instance of the class
* \param cpi Encoder context
*/
void vp9_alt_ref_aq_unset_all(struct ALT_REF_AQ *const self,
struct VP9_COMP *const cpi);
/*!\brief Destructor
*
* \param self Instance of the class
*/
void vp9_alt_ref_aq_destroy(struct ALT_REF_AQ *const self);
#ifdef __cplusplus
} // extern "C"
#endif
#endif // VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
|
Modern tundra landscapes of the Kolyma Lowland and their evolution in the Holocene The different tundra landscapes of the Kolyma Lowland, northern Siberia are described, focusing on landscape typology, digital mapping and the calculation of areas occupied by different landscape complexes. The genesis, age and evolution of yedoma sediments and associated alas (thermokarst basins) topography are inferred from the calculation of lake-basin and lake-area changes over three decades. Connections between the present landscape and its evolution during the Holocene are described. Copyright © 2009 John Wiley & Sons, Ltd.
Distributed Data Analytics Framework for Smart Transportation As the amount of data from IoT devices on transportation systems increases, developing a robust pipeline to stream, store and process data becomes critical. In this study, we explore the prediction accuracy and computational performance of various supervised and unsupervised algorithms on distributed systems for developing a smart transportation data pipeline. Using a subset of New York City Taxi & Limousine Commission data, we evaluate Logistic Regression, Random Forest Regressors and Classifiers, Principal Component Analysis, and Gradient Boosted Regression and Classification Tree machine learning techniques on a commodity computer as well as on a distributed system. Employing Amazon S3, EC2 and EMR, MongoDB, and Spark, we identify the conditions (data size and algorithm) under which the performance of distributed systems excels.
# aemetOD_TUI.py
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 3 18:57:07 2019
@author: solis
Interfaz de texto de usuario
Contiene también las funciones que leen del fichero XML de parámetros
Por rapidez, desde este módulo se pasan elementos XML al módulo
aemetOD_change_format, que depende por tanto del fichero XML y de algunas
funciones que leen sus contenidos
"""
from datetime import date
import littleLogging as logging
from xml.etree.ElementTree import Element as xmlElement
def menuMain():
"""
Selección desde el menú principal
"""
from aemetOD_constants import MSGREVISARPARAM
    def clearScreen():
        from os import name, system
        system('cls' if name == 'nt' else 'clear')

    options = ('Descargar datos desde AEMET OpenData',
               'Cambiar formato de datos ya descargados')
    headers = ('\nAEMET OpenData. Menú principal',
               MSGREVISARPARAM, 'Opciones')
    clearScreen()
selectedOption = selectOption(options, headers)
if selectedOption == 0:
return
if selectedOption == 1:
menuDataToDownLoad()
elif selectedOption == 2:
menuChangeFormat()
else:
print('Tu selección no está implementada todavía')
def menuDataToDownLoad():
"""
Menu para seleccionar los datos a descargar
"""
import xml.etree.ElementTree as ET
from aemetOpenData import meteoroEstacionesGet, meteoroDiaGet, \
meteoroMesGet
from aemetOD_constants import MSGREVISARPARAM, FILEPARAM
headers = ('\nDescarga de datos desde AEMET OpenData',
'AEMET limita el núm. de conexiones por min y API Key ' +\
'y el núm. global de conexiones',
'El tiempo de descarga puede ser muy largo',
MSGREVISARPARAM,
'Opciones')
options = ('Características de las estaciones',
'Meteoros diarios en un rango de fechas',
'Meteoros mensuales en un rango de años')
selectedOption = selectOption(options, headers)
    if selectedOption == 0:
        menuMain()
        return
tree = ET.parse(FILEPARAM)
root = tree.getroot()
if selectedOption == 1:
nameTask = root.find('todasEstaciones/name')
pathOut = pathFromXML(root, 'todasEstaciones/pathOut')
check_dir(pathOut)
meteoroEstacionesGet(pathOut)
elif selectedOption == 2:
element = root.find('datosClimaticosDiarios')
nameTask = element.find('name').text.strip()
pathOut = pathFromXML(element, 'pathOut')
check_dir(pathOut)
fechaInicial, fechaFinal = datesFromXml(element)
estaciones = estacionesInFile(element)
meteoroDiaGet(pathOut, fechaInicial, fechaFinal, estaciones, nameTask)
elif selectedOption == 3:
element = root.find('datosClimaticosMensuales')
nameTask = element.find('name').text.strip()
pathOut = pathFromXML(element, 'pathOut')
check_dir(pathOut)
year1 = int(element.find('anyoInicial').text.strip())
year2 = int(element.find('anyoFinal').text.strip())
year1, year2 = check_years_range(year1, year2)
estaciones = estacionesInFile(element)
meteoroMesGet(pathOut, year1, year2, estaciones, name=nameTask)
else:
print('Tu selección no está implementada')
menuMain()
def menuChangeFormat(op: bool = True):
"""
menú para seleccionar las opciones disponibles de cambio de formato
de los ficheros json generados descargados
op: si True cuando termina enlaza con el menú general de la aplicación
"""
import xml.etree.ElementTree as ET
from aemetOD_constants import MSGREVISARPARAM, FILEPARAM
from aemetOD_change_format import estacionesToSqlite, meteoToSqlite
options = ('Estaciones en AEMET OpenData a Sqlite3',
'Meteoros diarios a Sqlite3',
'Meteoros mensuales a Sqlite3')
headers = ('\nCambiar el formato de datos ya descargados',
MSGREVISARPARAM)
selectedOption = selectOption(options, headers)
if selectedOption == 0:
return
tree = ET.parse(FILEPARAM)
root = tree.getroot()
element = root.find('changeFormatSqlite3')
db = element.find('dbFile').text.strip()
    # bool() of a non-empty string is always True, so parse the text explicitly
    writeToNewFormat = \
        element.find('writeToNewFormat').text.strip().lower() in ('1', 'true')
if selectedOption == 1:
path_data = pathFromXML(element, 'estaciones/pathData')
pkeys = element.find('estaciones/primaryKey').text.strip()
if len(pkeys) == 0:
raise ValueError('estaciones: No tiene definidas primary keys')
estacionesToSqlite(db, writeToNewFormat, path_data, pkeys)
elif selectedOption in (2, 3):
if selectedOption == 2:
subelement, meteoros_type = 'meteoroDay', 'day'
else:
subelement, meteoros_type = 'meteoroMonth', 'month'
element2 = element.find(subelement)
pathData = pathFromXML(element2, 'pathData')
pKeys = [elementk.text.strip() for elementk in
element2.findall('primaryKey')]
if len(pKeys) == 0:
raise ValueError(f'{subelement}: No tiene definidas primary keys')
pKeys = ','.join(pKeys)
cadena = 'replaceValueInColumn'
to_replace = replace_values_rule(element2, cadena)
dataFilesWildcard = element2.find('dataFilesWildcard').text.strip()
meteoToSqlite(db, writeToNewFormat, pathData, pKeys,
to_replace, dataFilesWildcard, meteoros_type)
else:
print('Has elegido un número inválido')
if op:
menuMain()
def selectOption(options: (list, tuple), headers: (list, tuple)) -> int:
"""
Función genérica para seleccionar una opción
args
options: strings que permiten identicar las opciones
headers: : strings que explican las opciones
"""
from aemetOD_constants import MSG_NO_OPTION, MSGWRITE1OPTION, \
MSG_NO_VALID_OPTION
while True:
for header in headers:
print(header)
extendedOptions = [MSG_NO_OPTION] + list(options)
for i, option in enumerate(extendedOptions):
print('{:d}.- {}'.format(i, option))
selection = input(MSGWRITE1OPTION)
try:
selection = int(selection)
except ValueError:
print(MSG_NO_VALID_OPTION)
continue
        if selection < 0 or selection >= len(extendedOptions):
print(MSG_NO_VALID_OPTION)
continue
else:
return selection
def dateFromXML(element: xmlElement, nameFecha: str):
"""
lee un elemento de tipo fecha en FILEPARAM y devuelve una fecha
args:
element: xml.etree.ElementTree.element
nameFecha: nombre de un elemento en element de nombre nameFecha
returns:
: datetime.date
"""
day = int(element.find(nameFecha).get('day'))
month = int(element.find(nameFecha).get('month'))
year = int(element.find(nameFecha).get('year'))
return date(year, month, day)
def datesFromXml(element: xmlElement) -> list:
"""
Extrae las fechas en los subelementos <fechaInicial> y <fechaFinal>
"""
fechaInicial = dateFromXML(element, 'fechaInicial')
fechaFinal = dateFromXML(element, 'fechaFinal')
if fechaInicial > fechaFinal:
raise ValueError('fechaInicial > fechaFinal')
return fechaInicial, fechaFinal
def pathFromXML(element: xmlElement, subElementName: str) -> str:
"""
devuelve el nombre del directorio en el elemento xml element/text
si el directorio no existe lanza una exception
args:
element: xml.etree.ElementTree.element
sPath: nombre de un subelemento en element con el nombre de un direct.
returns
nombre del directorio
"""
from os.path import isdir
path = element.find(subElementName).text.strip()
if not isdir(path):
raise ValueError('{} no existe'.format(subElementName))
return path
def estacionesFromXML(element: xmlElement, text:str) -> list:
"""
devuelve la lista de estaciones de un elemento xml
"""
estacionesElements = element.findall(text)
estaciones = [estacion.text.strip() for estacion in estacionesElements]
if len(estaciones) == 0:
raise ValueError('no se han definido las estaciones'+
' para la opción seleccionada')
return estaciones
def estacionesInFile(element: xmlElement) -> list:
"""
lee el fichero con las estaciones cuyos datos se quieren descargar
"""
from os.path import join
subElement = element.find('estacionesIndicativo')
path = pathFromXML(subElement, 'path')
nameFile = subElement.find('nameFile').text.strip()
linesToSkip = int(subElement.find('numberLinesToSkip').text.strip())
with open(join(path, nameFile), 'r') as f:
lines = [line.strip() for line in f.readlines()]
if linesToSkip >= len(lines):
raise ValueError('linesToSkip >= líneas en el fichero')
return lines[linesToSkip:]
def check_years_range(year1: int, year2: int) -> (int, int):
from datetime import datetime
if year1 > year2:
year1, year2 = year2, year1
if year1 <= 1900:
year1 = 1900
    if year2 > datetime.today().year:
        year2 = datetime.today().year
return year1, year2
def check_dir(path: str):
from os import access, W_OK
from os.path import isdir
if not isdir(path):
raise ValueError(f'El directorio {path} debe existir')
if not access(path, W_OK):
raise ValueError(f'El directorio {path} debe ser escribible')
def replace_values_rule(element: xmlElement, subElement: str) -> list:
"""
Devuelve el contenido de los elementos <replaceValueInColumn> como una
lista con tantos miembreos como elementos <replaceValueInColumn>
Cada elemento de la lista es una tupla de 3 elementos: el primero
es el campo en el que se va a hacer la sustitución; el segundo
elemento es el contenido a sustituir; el tercero es el contenido
nuevo
"""
to_replace = element.findall(subElement)
if not to_replace:
return []
rows = [(column.text.strip(), column.get('old'), column.get('new'))
for column in to_replace]
return rows
|
1. Field of the Invention
The invention relates to a constant voltage output circuit and, more particularly, to a constant voltage output circuit which can reduce a restriction in a manufacturing process and can obtain a wide voltage set range.
2. Related Background Art
Hitherto, particularly, in an electronic circuit which handles an analog signal, there is a case where in addition to a ground level (ground) and a power voltage, a constant intermediate voltage source which is not susceptible to a variation in power of a power source and temperature is needed.
FIG. 1 is a diagram showing an example of a conventional constant voltage output circuit. In the diagram, reference numeral 1 denotes a bipolar transistor (hereinafter, abbreviated to BJT); 2 indicates a BJT whose size is larger than the BJT 1. The size of BJT 2 is generally just an integer value times as large as the size of BJT 1. Reference numerals 3 and 4 denote resistors having a same resistance value R.sub.0. Terminals 5 and 6 of the resistors 3 and 4 are connected to collector terminals of the BJT 1 and BJT 2, respectively. The other terminals of the resistors 3 and 4 are mutually connected and become a common terminal 7. Reference numeral 8 denotes a resistor of a resistance value R.sub.1 connecting an emitter of the BJT 2 and ground and 9 indicates an operational amplifier (hereinafter, referred to as an op-amplifier) in which a (+) input terminal (non-inverting input terminal) is connected to the terminal 5, a (-) input terminal (inverting input terminal) is connected to the terminal 6, and an output is connected to the common terminal 7. An emitter of the BJT 1 is directly connected to ground. Bases of the BJTs 1 and 2 are mutually connected to the terminal 5.
FIG. 2 shows a constructional example of the BJT 2. Collectors of four BJTs 1' of the same size as that of the BJT 1 are mutually connected, their bases are mutually connected, and their emitters are mutually connected, thereby setting the size of BJT 2 to be just four times as large as that of BJT 1.
In the circuit of FIG. 1, it will now be described how, by setting the resistance values R_0 and R_1 in accordance with the characteristics of the BJTs 1 and 2, a predetermined voltage can be generated at the terminal 7. It is assumed that the size of BJT 2 is four times that of BJT 1 and that the current gain of BJT 2 is large, so that its emitter and collector currents are equal.
In FIG. 1, the current flowing through the resistor 3, namely the collector current of BJT 1, is labeled I_0. Since the potentials of the terminals 5 and 6 are equal due to the operation of the operational amplifier 9, the current flowing through the resistor 4, namely the collector current of BJT 2, is also equal to I_0. Denoting the output voltage of the terminal 7 by V_BG and the base-emitter voltages of the BJTs 1 and 2 by V_BE1 and V_BE2,

V_BG = V_BE1 + I_0·R_0    (1)

V_BE1 = V_BE2 + I_0·R_1    (2)
are satisfied. Since the size of BJT 2 is four times as large as that of BJT 1,

V_BE1 - V_BE2 = (kT/q)·ln 4    (3)
is satisfied.
Where,
k: Boltzmann's constant
T: absolute temperature
q: elementary charge
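For clarity, equation (3) can be derived from the exponential collector-current law of an ideal bipolar transistor. The short derivation below is an editorial addition (it is not part of the original disclosure), assuming ideal, identically fabricated devices at the same temperature:

```latex
% Ideal collector current: I_C = I_S \exp(qV_{BE}/kT), i.e. V_{BE} = (kT/q)\ln(I_C/I_S).
% BJT 2 has four times the emitter area of BJT 1, so its saturation current is 4I_S,
% while both collector currents equal I_0.
\begin{aligned}
V_{BE1} - V_{BE2}
  &= \frac{kT}{q}\ln\frac{I_0}{I_S} - \frac{kT}{q}\ln\frac{I_0}{4I_S}\\
  &= \frac{kT}{q}\ln 4
\end{aligned}
```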
By eliminating V_BE2 and I_0 from the equations (1), (2), and (3), we have

V_BG = V_BE1 + (R_0/R_1)·(kT/q)·ln 4    (4)
By differentiating both sides of the equation (4) by T,

dV_BG/dT = dV_BE1/dT + (R_0/R_1)·(k/q)·ln 4    (5)

is satisfied.
By deciding R_0/R_1 so as to obtain

dV_BE1/dT + (R_0/R_1)·(k/q)·ln 4 = 0

in accordance with the temperature characteristics of the BJT, the temperature dependency of V_BG is eliminated from the equation (5). In the ordinary silicon BJT, since dV_BE1/dT is equal to about -2 mV/K, R_0/R_1 is equal to about 16. Generally, since the values of R_0 and R_1 are determined so that V_BE1 is equal to about 0.6 V, the value of V_BG is equal to about 1.2 V, as will be understood from the equation (4).
As described above, by setting the values of R_0 and R_1 in accordance with the BJT characteristics, a predetermined output voltage is derived from the terminal 7. By using such a voltage as a reference for the electronic circuit, a voltage level can be set accurately.
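As a numerical sanity check of equations (4) and (5), the figures quoted above (dV_BE1/dT of about -2 mV/K and V_BE1 of about 0.6 V) can be substituted directly. The short script below is an illustrative sketch added for this edit, not part of the original text:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K
q = 1.602177e-19   # elementary charge, C
T = 300.0          # absolute temperature, K

# Temperature coefficient of V_BE1 for a typical silicon BJT (from the text)
dVbe_dT = -2e-3    # V/K

# Equation (5) set to zero: dVbe_dT + (R0/R1) * (k/q) * ln(4) = 0
ratio = -dVbe_dT / ((k / q) * math.log(4))
print(f"R0/R1 = {ratio:.1f}")   # prints "R0/R1 = 16.7", i.e. about 16

# Equation (4) with V_BE1 = 0.6 V gives the bandgap output voltage
Vbe1 = 0.6
Vbg = Vbe1 + ratio * (k * T / q) * math.log(4)
print(f"V_BG = {Vbg:.2f} V")    # prints "V_BG = 1.20 V"
```

Note that once the zero-tempco ratio is substituted, equation (4) collapses to V_BG = V_BE1 + |dV_BE1/dT|·T, so the 1.2 V result does not depend on the values of k and q themselves.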
In the above example, however, a BJT in which an emitter, a base, and a collector can be taken out as independent terminals is necessary. Although the constant voltage output circuit is often used in a semiconductor IC, the above example can only be applied in a manufacturing process such that the BJT as mentioned above can be formed. There is a problem such that the above example cannot be applied to an IC using a manufacturing process which cannot form an independent BJT.
An output of the op-amplifier which is used in the above example is also a collector current source of the BJT and it is necessary to use an op-amplifier having a high current supplying ability. There is inevitably a problem such that the size of the op-amplifier has to be enlarged.
Further, as will be understood from the equation (4) of the constant voltage output, V.sub.BG can be changed by selecting the set potentials of V.sub.BE1. However, generally, since a range where normal current-voltage characteristics of the bipolar transistor can be held is a range of about 0.5 to 0.7V as V.sub.BE1, the constant voltage output in only a range of about 1.1 to 1.3V can be also set. In other words, there is a problem such that a selection width of the constant voltage output value is narrow. |
What are private sector physiotherapists perceptions regarding interprofessional and intraprofessional work for managing low back pain? ABSTRACT In the last decades, interactions between health professionals have mostly been discussed in the context of interprofessional teamwork where professionals work closely together and share a team identity. Comparatively, little work has been done to explore interactions that occur between professionals in contexts where traditionally formal structures have been less supporting the implementation of interprofessional teamwork, such as in the private healthcare sector. The objective of this study was to identify private sector physiotherapists perceptions of interprofessional and intraprofessional work regarding interventions for adults with low back pain. This was a cross-sectional survey of 327 randomly-selected physiotherapists. Data were analysed using descriptive statistics. A majority of physiotherapists reported positive effects of interprofessional work for their clients, themselves and their workplaces. Proximity of physiotherapists with other professionals, clinical workloads, and clients financial situation were perceived as important factors influencing the implementation of interprofessional work. Low back pain is a highly prevalent and disabling condition. The results of this study indicate that integrating interprofessional work in the management of low back pain in the private sector is warranted. Furthermore, the implementation of interprofessional work is viewed by practicing physiotherapists as dependent upon certain client-, professional- and organizational-level factors. |
When Apple’s new iPad was finally announced last week we already knew that it would be a big hit. Now we have news that pre-orders for delivery on release day, March 16, have already sold out and this has led us to wonder if many customers simply feel brainwashed into buying the new tablet and whether this has prompted the sell-out.
If you delayed pre-ordering the new iPad you’ll find that if you try to do so now with Apple online the shipping date is already put back to March 19 as the earliest possible date you’ll receive it. It’s the same story for whichever model you wanted so there’s no point trying to order your second choice instead. Apple has issued a statement saying that quantities available for pre-order have now sold out, although you can still place online orders for a later date.
Apple has also said that some of the new iPads will definitely be in stores on March 16, but of course there’s no way of telling how many will be available at each store on release day. There are also unconfirmed reports saying that worldwide shipments could be delayed by anything up to 3 weeks. If you haven’t already pre-ordered, the only chance you still have of getting a new iPad on launch day is to head down to your nearest Apple retail store or reseller and join what will inevitably be a huge line, in the hope that you might get lucky.
The news that the latest iPad is another instant success has now led to the usual debates about whether consumers are simply brainwashed by the Apple name or whether it’s a measure of just how good the products really are. Although our title may seem harsh and we don’t necessarily agree that most customers are brainwashed, it is a reflection of what is being widely said on social networking sites and the like. For example we’ve received emails and also comments on our Phones Review Facebook page and other social networking areas to this effect and of course it has also started off the inevitable Android vs. Apple debate.
We’ve been pretty impressed by the specs and features of the new iPad since its unveiling, and there will be plenty of people with the opinion that the sellout of the new iPad is merely because it is a very notable device. However there will also be many people who will always feel that any Apple product success is simply down to the branding. While it’s true that Apple seems to be on a phenomenal roll at the moment, we doubt the success of the company is all down to its fan-base alone. Surely at some point the actual product that is being produced has to come into the equation?
We’d like to hear your views on this. Do you think the average consumer purchasing the new iPad is doing so in a sheep-like way, following the flock? Maybe you completely disagree with this and feel the new iPad sellout is simply down to the product alone and nothing else? Do send us your comments using the box below.
The first issue is the whole idea of telling customers that the item is sold out just to create more demand. Sony and other companies have done this in the past (just like midnight openings for blockbuster movies) to make people think that everyone is going to rush to get one.
A local news channel interviewed people in a mall to ask if they would be buying the new ipad and many of the people interviewed (mostly in their teens or twenties) said that they already had the ipad 2 but would be getting the new version. Clear example of isheep mentality – just buy something because it has a fruit logo on it, not because of need or necessity.
Same goes for when AT&T first released the iphone 4 – the majority of their early sales were from people who just wanted to upgrade their last generation iphone.
I still use the original iPad & iPhone 4. The only thing that pisses me off ever so slightly is that now, many apps are upgraded to work on the newer Pads, leaving me in the dust even though I always update to the latest OS. So, for one, I’m not a sheep but would, if I could afford it at the moment, go out and order the newest iPad, which seems to have really terrific specs. Hopefully in a few weeks I’ll have a few dockets stored up to be able to both afford the purchase and to avoid the rush. Luv Rick C.
While the media focuses on Facebook's privacy policies, too many people continue to give away their personal data. Check out these evil and helpful tools to ensure your Facebook security.
Social Media and the Longest "Goal"
In the 1980s, the computer industry suffered from product announcements long before product reality. This month, Facebook and Microsoft's Hotmail brought us back to those days of hype over reality.
Between Facebook, Twitter, Location Services, Google and more, it seems like one of the few places you can still have a private online conversation is email. But a new startup, Cc:Everybody, hopes to change even that.
A mobile device can be a great help when you're exploring a city or even just a new place near your home. Apps like MyTours want to change the travel and tourism industry.
While Facebook continues to struggle with concerns over weak privacy policies, they are taking steps to increase security for millions of Facebook Fans. Facebook users would be wise to take advantage of these options.
Google has admitted that it gathered data about personal wireless connections from homes around New Zealand and other countries as part of gathering data for Street View maps.
While most people visit YouTube to hear some music or see a funny video - usually because a friend sent a "check this out" email, YouTube is fast becoming the top source for ways to steal software registration codes.
Admit it - you never read the terms of service (TOS) when you setup accounts on the Internet. Well, you are not alone, but last month 7,500 shoppers legally sold their souls.
Second Life's virtual environment was once hyped as the next big Internet craze. While it has never quite reached those heights, it has certainly attracted some crazed users - folks who are getting divorced and filing lawsuits in the real world.
Cyclostationary-based diversity combining for blind channel equalization using multiple receive antennas At high data rates, radio channels are characterized by severe intersymbol interference (ISI) and deep fades in the received signal levels. This paper develops an integrated approach for the mitigation of these effects using diversity combining and channel equalization in radio systems with multiple receive antennas. To accommodate higher data rates exceeding the channel coherence bandwidth, frequency selective channels are compensated utilizing blind equalization algorithms that exploit the cyclostationary signal structure inherent in communication signals. To mitigate the effects of flat fading, diversity combining is deployed which improves the bit error rate (BER) performance by merging distorted replicas of the transmitted signal in an intelligent fashion. This paper represents an effort in building on the strengths of these two distortion mitigation schemes so as to achieve additional benefit of compensating for channels that could not otherwise be compensated. The proposed combining algorithms are collectively referred to as cyclostationary-based diversity combining (CSDC). Both pre-equalization and post-equalization CSDC schemes are discussed in this paper. Simulation results for the performance of CSDC algorithms are presented. |
Ageing of equine articular cartilage: structure and composition of aggrecan and decorin. In order to identify the pathological processes involved in the destruction of articular cartilage in arthritic diseases, it is first necessary to characterise the normal homeostasis of cartilage in a healthy joint. In particular, normal age-related changes in the biochemistry of cartilage complicate any comparisons that are made between diseased and healthy tissue. There are, however, no reports in the literature detailing the influence of ageing on the biochemistry of proteoglycans in equine articular cartilage. This study addresses the absence of such information by investigating the structure of aggrecan and decorin extracted from a wide age-range of full thickness equine tissue. The total glycosaminoglycan content of articular cartilage from the metacarpophalangeal joint remained relatively constant throughout life. In contrast, specific components such as hyaluronan increased in concentration with advancing age as did the content of a structural epitope present on keratan sulphate chains. There were also significant age-related changes in the sulphation pattern of chondroitin sulphate chains. The structure of the large aggregating proteoglycan (aggrecan) became more heterogeneous in size with increasing age and each of the subspecies of aggrecan identified in the extracts was shown to carry a hyaluronan binding region (G1) domain. All subspecies of aggrecan also expressed specific epitopes to keratan sulphate, chondroitin-4-sulphate and chondroitin-6-sulphate glycosaminoglycan chains. The structure of the small nonaggregating proteoglycan decorin and the aggrecan stabilising molecule link protein were demonstrated to be similar in size and charge to that reported for other species. |
/*
+----------------------------------------------------------------------+
| HipHop for PHP |
+----------------------------------------------------------------------+
| Copyright (c) 2010-present Facebook, Inc. (http://www.facebook.com) |
+----------------------------------------------------------------------+
| This source file is subject to version 3.01 of the PHP license, |
| that is bundled with this package in the file LICENSE, and is |
| available through the world-wide-web at the following url: |
| http://www.php.net/license/3_01.txt |
| If you did not receive a copy of the PHP license and are unable to |
| obtain it through the world-wide-web, please send a note to |
| <EMAIL> so we can mail you a copy immediately. |
+----------------------------------------------------------------------+
*/
#include "hphp/runtime/vm/jit/tc-prologue.h"
#include "hphp/runtime/vm/jit/tc.h"
#include "hphp/runtime/vm/jit/tc-internal.h"
#include "hphp/runtime/vm/jit/tc-record.h"
#include "hphp/runtime/vm/jit/tc-region.h"
#include "hphp/runtime/vm/debug/debug.h"
#include "hphp/runtime/vm/jit/align.h"
#include "hphp/runtime/vm/jit/cg-meta.h"
#include "hphp/runtime/vm/jit/func-order.h"
#include "hphp/runtime/vm/jit/irgen.h"
#include "hphp/runtime/vm/jit/irgen-func-prologue.h"
#include "hphp/runtime/vm/jit/irlower.h"
#include "hphp/runtime/vm/jit/mcgen.h"
#include "hphp/runtime/vm/jit/print.h"
#include "hphp/runtime/vm/jit/prof-data.h"
#include "hphp/runtime/vm/jit/smashable-instr.h"
#include "hphp/runtime/vm/jit/trans-db.h"
#include "hphp/runtime/vm/jit/vasm-emit.h"
#include "hphp/runtime/vm/jit/vm-protect.h"
#include "hphp/runtime/vm/jit/vtune-jit.h"
#include "hphp/util/trace.h"
TRACE_SET_MOD(mcg);
namespace HPHP { namespace jit { namespace tc {
namespace {
/*
* Smash the callers of the ProfPrologue associated with `rec' to call a new
* prologue at `start' address.
*/
void smashFuncCallers(TCA start, ProfTransRec* rec) {
assertOwnsMetadataLock();
assertx(rec->isProflogue());
auto lock = rec->lockCallerList();
for (auto toSmash : rec->mainCallers()) {
smashCall(toSmash, start);
}
rec->clearAllCallers();
}
}
////////////////////////////////////////////////////////////////////////////////
void PrologueTranslator::computeKind() {
// Update the translation kind if it is invalid, or if it may
// have changed (the original kind was a profiling kind).
if (kind == TransKind::Invalid || kind == TransKind::ProfPrologue) {
kind = profileFunc(func) ? TransKind::ProfPrologue
: TransKind::LivePrologue;
}
}
int PrologueTranslator::paramIndex() const {
return paramIndexHelper(func, nPassed);
}
int PrologueTranslator::paramIndexHelper(const Func* f, int passed) {
int const numParams = f->numNonVariadicParams();
return passed <= numParams ? passed : numParams + 1;
}
folly::Optional<TranslationResult> PrologueTranslator::getCached() {
if (UNLIKELY(RuntimeOption::EvalFailJitPrologs)) {
return TranslationResult::failTransiently();
}
auto const paramIdx = paramIndex();
TCA prologue = (TCA)func->getPrologue(paramIdx);
if (prologue == tc::ustubs().fcallHelperNoTranslateThunk) {
return TranslationResult::failForProcess();
}
if (prologue != ustubs().fcallHelperThunk) {
TRACE(1, "cached prologue %s(%d) -> cached %p\n",
func->fullName()->data(), paramIdx, prologue);
assertx(isValidCodeAddress(prologue));
return TranslationResult{prologue};
}
return folly::none;
}
void PrologueTranslator::resetCached() {
func->resetPrologue(paramIndex());
}
void PrologueTranslator::setCachedForProcessFail() {
TRACE(2, "funcPrologue %s(%d) setting prologue %p\n",
func->fullName()->data(), nPassed,
tc::ustubs().fcallHelperNoTranslateThunk);
func->setPrologue(paramIndex(), tc::ustubs().fcallHelperNoTranslateThunk);
}
void PrologueTranslator::smashBackup() {
if (kind == TransKind::OptPrologue) {
assertx(transId != kInvalidTransID);
auto const rec = profData()->transRec(transId);
smashFuncCallers(jit::tc::ustubs().fcallHelperThunk, rec);
}
}
void PrologueTranslator::gen() {
if (transId != kInvalidTransID && isProfiling(kind)) {
profData()->addTransProfPrologue(transId, sk, paramIndex(),
0 /* asmSize: updated below after machine code is generated */);
}
auto const context = TransContext{
transId == kInvalidTransID ? TransIDSet{} : TransIDSet{transId},
0, // optIndex
kind,
sk,
nullptr
};
tracing::Block _b{
kind == TransKind::OptPrologue ? "emit-func-prologue-opt"
: "emit-func-prologue",
[&] {
return traceProps(func)
.add("sk", show(sk))
.add("argc", paramIndex())
.add("trans_kind", show(kind));
}
};
tracing::Pause _p;
unit = std::make_unique<IRUnit>(context, std::make_unique<AnnotationData>());
irgen::IRGS env{*unit, nullptr, 0, nullptr};
irgen::emitFuncPrologue(env, func, nPassed, transId);
irgen::sealUnit(env);
printUnit(2, *unit, "After initial prologue generation");
vunit = irlower::lowerUnit(env.unit, CodeKind::Prologue);
}
void PrologueTranslator::publishMetaImpl() {
// Update the profiling prologue size now that we generated it.
if (transId != kInvalidTransID) {
profData()->transRec(transId)->setAsmSize(
transMeta->range.loc().mainSize());
}
assertOwnsMetadataLock();
assertx(translateSuccess());
assertx(code().isValidCodeAddress(entry()));
transMeta->fixups.process(nullptr);
auto const& loc = transMeta->range.loc();
TransRec tr{sk, transId, kind, loc.mainStart(), loc.mainSize(),
loc.coldStart(), loc.coldSize(), loc.frozenStart(), loc.frozenSize()};
transdb::addTranslation(tr);
FuncOrder::recordTranslation(tr);
if (RuntimeOption::EvalJitUseVtuneAPI) {
reportTraceletToVtune(func->unit(), func, tr);
}
recordGdbTranslation(sk, transMeta->view.main(), loc.mainStart(),
loc.mainEnd());
recordGdbTranslation(sk, transMeta->view.cold(), loc.coldStart(),
loc.coldEnd());
recordBCInstr(OpFuncPrologue, loc.mainStart(), loc.mainEnd(), false);
}
void PrologueTranslator::publishCodeImpl() {
assertOwnsMetadataLock();
assertOwnsCodeLock();
if (RuntimeOption::EvalEnableReusableTC) {
auto const& loc = transMeta->range.loc();
recordFuncPrologue(func, loc);
}
const auto start = entry();
assertx(start);
TRACE(2, "funcPrologue %s(%d) setting prologue %p\n",
func->fullName()->data(), nPassed, start);
func->setPrologue(paramIndex(), start);
// If we are optimizing, smash the callers of the proflogue.
if (kind == TransKind::OptPrologue) {
assertx(transId != kInvalidTransID);
auto const rec = profData()->transRec(transId);
smashFuncCallers(start, rec);
}
}
}}}
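The prologue bucketing implemented by `PrologueTranslator::paramIndexHelper` above is easy to check in isolation. A quick sketch (Python, illustrative only — not part of HHVM): calls that pass more arguments than the function's non-variadic parameter count all share a single overflow slot.

```python
def param_index(num_non_variadic_params, n_passed):
    # Mirrors PrologueTranslator::paramIndexHelper: one prologue per argument
    # count up to numParams, plus one shared overflow slot at numParams + 1.
    if n_passed <= num_non_variadic_params:
        return n_passed
    return num_non_variadic_params + 1

assert param_index(3, 2) == 2
assert param_index(3, 3) == 3
assert param_index(3, 7) == 4  # every over-passed count shares the overflow slot
```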
#[macro_use] extern crate sciter;
use sciter::{HELEMENT, Element};
use std::env;
struct EventHandler {}
impl sciter::EventHandler for EventHandler {
fn document_complete(&mut self, root: HELEMENT, _target: HELEMENT) {
// Call the script function `foo`; discard the Result instead of
// borrowing a temporary Element that is immediately dropped.
let _ = Element::from(root).call_function("foo", &make_args!("Hello World!"));
}
}
fn main() {
let mut frame = sciter::Window::new();
frame.event_handler(EventHandler { });
let dir = env::current_dir().unwrap().as_path().display().to_string();
let filename = format!("{}\\{}", dir, "index.htm");
frame.load_file(&filename);
frame.run_app();
}
import math

class SnaptoGrid:
    '''We are using 10x10 grid squares, so snap
    the object widths and heights to the nearest 10.'''
    @staticmethod
    def snap(n):
        # Round to the nearest multiple of 10 (ties round up).
        floor = (n // 10) * 10
        ceil = math.ceil(n / 10) * 10
        return int(floor) if n - floor < 5 else int(ceil)
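A few sample values make the rounding rule concrete (values exactly at the midpoint round up). This standalone sketch reimplements the same rule as `SnaptoGrid.snap`:

```python
import math

def snap(n):
    # Same rule as SnaptoGrid.snap: round to the nearest multiple of 10.
    floor = (n // 10) * 10
    ceil = math.ceil(n / 10) * 10
    return int(floor) if n - floor < 5 else int(ceil)

assert snap(14) == 10  # 4 above the grid line: rounds down
assert snap(15) == 20  # midpoint: rounds up
assert snap(20) == 20  # already on the grid
```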
/**
Used internally to parse the children of a logger element.
*/
void DOMConfigurator::parseChildrenOfLoggerElement(
XMLDOMElementPtr loggerElement, LoggerPtr logger, bool isRoot)
{
PropertySetter propSetter(logger);
logger->removeAllAppenders();
XMLDOMNodeListPtr children = loggerElement->getChildNodes();
int length = children->getLength();
for (int loop = 0; loop < length; loop++)
{
XMLDOMNodePtr currentNode = children->item(loop);
if (currentNode->getNodeType() == XMLDOMNode::ELEMENT_NODE)
{
XMLDOMElementPtr currentElement = currentNode;
String tagName = currentElement->getTagName();
if (tagName == APPENDER_REF_TAG)
{
AppenderPtr appender = findAppenderByReference(currentElement);
String refName = subst(currentElement->getAttribute(REF_ATTR));
if(appender != 0)
{
LogLog::debug(_T("Adding appender named [")+ refName+
_T("] to logger [")+logger->getName()+_T("]."));
}
else
{
LogLog::debug(_T("Appender named [")+ refName +
_T("] not found."));
}
logger->addAppender(appender);
}
else if(tagName == LEVEL_TAG)
{
parseLevel(currentElement, logger, isRoot);
}
else if(tagName == PRIORITY_TAG)
{
parseLevel(currentElement, logger, isRoot);
}
else if(tagName == PARAM_TAG)
{
setParameter(currentElement, propSetter);
}
}
}
propSetter.activate();
}
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ll;
typedef double ld;
#define pll pair<ll,ll>
#define pb push_back
#define mp(x, y) make_pair((x), (y))
#define F first
#define S second
#define I insert
#define vll vector<ll>
#define vpll vector<pll>
#define all(x) (x).begin(), (x).end()
#define sz(x) (ll)(x).size()
const ll Mod = 1e9 + 7;
// Legendre's formula: the exponent of the prime i in n!
ll findTrailingZeros(ll n , ll i)
{
ll mul = 1;
ll count = 0;
while (mul <= n / i) {mul *= i; count += n / mul;}
return count;
}
ll primeFactors(ll n , ll b)
{
// Trailing zeros of n! in base b: for each prime factor p of b,
// divide the exponent of p in n! by its multiplicity in b; take the minimum
ll mn = Mod*Mod;
for (ll i = 2; i <= b; i++)
{
ll count = 0;
if (1LL * i * i > b) i = b;
// While i divides b, count its multiplicity
while (b%i == 0)
{
count++;
b = b/i;
}
if(count > 0)
mn = min( mn , ll(findTrailingZeros(n, i)/count));
}
// Handle a remaining prime factor of b, if any
if (b > 2)
mn = min(mn , ll(findTrailingZeros(n , b)));
return mn;
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0);
ll n , b; cin >> n >> b;
ll count = primeFactors(n , b);
cout << count << endl;
return 0;
}
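The `findTrailingZeros(n, i)` helper above is Legendre's formula: it computes the exponent of a prime `i` in `n!`. A standalone sketch of the same computation:

```python
def prime_exponent_in_factorial(n, p):
    # Legendre's formula: v_p(n!) = sum over k >= 1 of floor(n / p^k).
    count, power = 0, p
    while power <= n:
        count += n // power
        power *= p
    return count

assert prime_exponent_in_factorial(10, 2) == 8    # 10! = 2^8 * 3^4 * 5^2 * 7
assert prime_exponent_in_factorial(100, 5) == 24  # 100! ends in 24 zeros in base 10
```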
OnePlus 5 (Review) has grabbed all the attention in the smartphone industry over the past week as it joined the ranks of the handful of devices with the Snapdragon 835 processor inside. The smartphone has gone on sale once already, but is out of stock on Amazon India for now, though you can grab one at the pop-up events in Hyderabad, Chennai, and Bengaluru on Saturday. If buying OnePlus 5, whether offline or online, is indeed on your mind, there are a few things you need to know about it as there are a few things that have gotten buried in all the hype surrounding the smartphone.
Much like the OnePlus 3 and OnePlus 3T, the OnePlus 5 comes with a scratch guard installed out of the box. For some, it’s a convenience, as it means you don’t have to spend extra on the scratch guard, but for others there’s an unnecessary step of removing it prior to use. It’s a perplexing addition all the same considering that the OnePlus 5 boasts 2.5D Corning Gorilla Glass 5. In the box you get a Dash Charger, SIM tray ejection pin, and a post card from OnePlus co-founder Carl Pei talking up your new purchase.
Despite most Android flagships having moved on from full-HD to QHD - and some even 4K - a couple of years ago, OnePlus is still making do with 1080x1920 pixels on the OnePlus 5 screen. It doesn’t look as sharp or crisp as, say, the LG G6 (Review) or the Samsung Galaxy S8 (Review), and this is still one area where OnePlus lags behind the competition, as every one of its flagships since the OnePlus One has sported the same resolution - so if you're looking to upgrade from something like the Nexus 6P, for example, then you'll actually be going down in resolution. But for most users, this shouldn’t be too much of a concern.
Another miss is the lack of any form of waterproofing or dust resistance. Don’t expect Samsung Galaxy S8-like IP68 certification either. Perhaps it’s the price to pay for a slimmer profile of 7.25mm, versus the S8's 8mm thickness? Regardless of the reason, it’s a perplexing omission in 2017, as even the iPhone 7 (Review) is now fairly water resistant.
As expected, Dash Charging - the company’s proprietary quick-charging tech - makes a welcome return with the OnePlus 5. And while the OnePlus 5’s charger is bulkier and boxier than that of previous variants - it looks more like an iPad charger - you can use a OnePlus 3 or 3T charger as well, which is smaller and easier to carry. You won’t be able to charge as fast as with the OnePlus 5’s charger, but it still works a fair bit faster than a generic cable and charger.
It might be one of the thinnest phones on the market, but the OnePlus 5 still sports a camera bump, although not without good reason. There’s a 16-megapixel main camera, along with a 20-megapixel telephoto camera, to allow for portrait shots and clearer zoom. What this means is that much like the iPhone 7 Plus, LG G6, and Huawei P10 (among others), the OnePlus 5 is capable of taking those bokeh shots that are all the rage, though the dual camera setup has a few quirks. For example, the second camera can only be used to zoom in when in brightly lit surroundings. Also, the video mode is lacking compared to the OnePlus 3T, with 4K recording having no stabilisation at all. You can read about these concerns at length in our OnePlus 5 review.
from flask import Flask, request

app = Flask(__name__)

# Assumed route and app wiring: a SageMaker-style inference endpoint.
@app.route('/invocations', methods=['POST'])
def invocations():
    # Decode JSON bodies; pass any other content type through as raw bytes.
    if request.content_type == 'application/json':
        data = request.get_json()
    else:
        data = request.get_data()
    result = func(data)  # `func`: the model's inference callable, defined elsewhere
    return result
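The content-type dispatch in `invocations` can be exercised in isolation. A sketch (the helper name is illustrative, not part of the handler above): JSON bodies are decoded, anything else passes through untouched.

```python
import json

def parse_payload(content_type, body):
    # Mirror the branch in `invocations`: decode JSON request bodies,
    # pass any other content type through as the raw payload.
    if content_type == "application/json":
        return json.loads(body)
    return body

assert parse_payload("application/json", '{"x": 1}') == {"x": 1}
assert parse_payload("text/csv", "1,2,3") == "1,2,3"
```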
The second-order problem for $k$-presymplectic Lagrangian field theories. Application to the Einstein--Palatini model In general, the system of $2$nd-order partial differential equations made of the Euler-Lagrange equations of classical field theories are not compatible for singular Lagrangians. This is the so-called second-order problem. The first aim of this work is to develop a fully geometric constraint algorithm which allows us to find a submanifold where the Euler-Lagrange equations have solution, and split the constraints into two kinds depending on their origin. We do so using $k$-symplectic geometry, which is the simplest intrinsic description of classical field theories. As a second aim, the Einstein-Palatini model of General Relativity is studied using this algorithm. Introduction It is an established fact that symplectic geometry is the most suitable geometric framework to describe Lagrangian and Hamiltonian (autonomous) mechanics. As classical field theories appear many times as a generalisation of mechanics, it seems very natural to try to describe classical field theories with a generalization of symplectic geometry. One of the most generic and complete approaches is the multisymplectic description, where jet bundles and bundles of forms are used as the manifolds where the Lagrangian and the Hamiltonian formalisms take place (see, for instance, and the references therein). However, the simplest approach is the k-symplectic description, which is used to describe systems in field theory whose Lagrangian and Hamiltonian functions depend only on the fields and derivatives of them, or their associate momenta in the Hamiltonian formulation. In the Lagrangian formalism, this formulation takes place in the k-tangent bundle of some manifold Q, which is denoted T 1 k Q and, in the Hamiltonian formalism, in the k-cotangent bundle (T 1 k ) * Q. These bundles are the Whitney sum of k copies of the tangent bundle TQ and the cotangent bundle T * Q, respectively. 
The k-symplectic framework is well suited to dealing with regular Lagrangians. Nevertheless, many of the relevant models in physics feature singular Lagrangians and this leads to work in the so-called kpresymplectic framework. Singular systems are important because some of the most important physical theories are singular; for instance, Maxwell's electromagnetism, all the models in General Relativity, string theory and gauge theories in general. The main problem of these singular theories is the failure of the usual theorems for the existence of solutions of the differential equations which describe them. This problem is usually solved by applying suitable constraint algorithms which allow us to find a submanifold of the phase space of the system where the existence of solutions is assured. The first of these constraint algorithms was given by P.G. Bergmann and P.A.M. Dirac, using a local coordinate language, for the Hamiltonian formalism of singular mechanics. Geometric versions of this algorithm were developed later, both for the Hamiltonian and Lagrangian formalisms of autonomous mechanical systems and also for non-autonomous systems. Furthermore, the problem of the compatibility of the Hamiltonian field equations (1st-order PDE's) for singular field theories has already been solved in the (pre)multisymplectic and the k-presymplectic and kprecosymplectic frameworks. One of the characteristic features of the Lagrangian formalism is that physical and variational motivations demand that the equations that describe the behaviour of the system must be ordinary second-order differential equations (SODE), in the case of mechanics, and second-order partial differential equations (SOPDE), in field theories: the Euler-Lagrange equations. In the above mentioned geometric descriptions of mechanics and field theories, solutions to the equations of the system are represented by vector fields and k-vector fields, respectively. 
Then, the physical solutions are the integral curves or the integral sections to these vector and k-vector fields, which verify the Euler-Lagrange equations. But, in order to assure this last fact, the vector and k-vector fields must fulfil that their integral curves and sections must be holonomic; that is, canonical liftings of curves and maps to the bundles where they are defined. This is the so-called second-order condition and the vector fields and k-vector fields fulfilling this condition are called SODE's and SOPDE's, respectivelly. In the regular case the second-order condition holds for every solution to the geometrical equations of the system; whereas in the case of singular Lagrangian systems these solutions, if they exist, do not satisfy this condition, in general, and hence it is an additional problem to study, besides the existence of solutions. One of the first geometric analysis of this problem for singular Lagrangian (autonomous) mechanics was done in, where the authors found a submanifold of the velocity phase space where a SODE solution exists; although, in general, this is not a maximal submanifold and the procedure to obtain it is not algorithmic. A complete and constructive algorithm for finding a maximal submanifold was developed in, but the treatment is local-coordinate. The procedure was intrinsically reformulated (partially) later in, but a complete geometric algorithm is given, for the first time, in where, given a singular Lagrangian system (TQ; L), an algorithm that finds the maximal submanifold of TQ on which one can find SODE solutions was developed. Later, this algorithm was extended to singular non-autonomous Lagrangians. Nevertheless, an equivalent geometric algorithm for solving the second-order problem for singular Lagrangian field theories is not known yet. The first part of this work is an attempt to solve the problem for singular Lagrangians in the k-symplectic formulation. 
Thus, given a singular Lagrangian in T 1 k Q, we provide an algorithm that allows us to find the maximal submanifold of T 1 k Q on which we can find SOPDE solutions to the Lagrangian field equations. In addition, the emergent constraints are classified into two groups, depending on whether they are a consequence of the compatibility of field equations or the requirement that solutions verify the second order condition. This algorithm is the generalization of what is presented in for singular Lagrangian mechanics to k-presymplectic Lagrangian systems, and completes the algorithm presented in, where the second-order problem was not considered. As a very interesting application of this method we study the k-symplectic description of the Einstein-Palatini Lagrangian model of General Relativity. This system, which is also known as the metric-affine model, consists in considering the Hilbert Lagrangian for the Einstein equations, but taking an arbitrary connection instead of the Levi-Civita connection associated with the metric. The resulting Lagrangian is affine and then singular; thus, as a previous step, it is very useful to make a preliminary analysis on the characteristics of the affine Lagrangians in general. In particular, the constraint algorithm for the Einstein-Palatini Lagrangian gives different kinds of constraints, and the results obtained here are discussed and compared with those obtained in the multisymplectic description of this model. The organisation of the paper is the following: Section 2 is devoted to review the main features on the k-symplectic approach to Lagrangian and Hamiltonian field theories. In Section 3, we present the generalisation of the constraint algorithm to k-symplectic singular Lagrangian systems. Finally, Section 4 is devoted to apply this method to affine Lagrangians in general and, in particular, to the Einstein-Palatini model of General Relativity. 
Throughout the text we use the summation convention for repeated crossed tensorial indices. All the manifolds are real, second countable and $C^\infty$. Manifolds and mappings are assumed to be smooth.

k-symplectic field theories

In this Section we review the k-symplectic description of the Lagrangian formalism of classical field theory following the presentation given in.

2.1 k-symplectic geometry, k-tangent bundles and geometric structures

Definition 1. Let $M$ be a manifold of dimension $n(k+1)$, $V$ an $nk$-dimensional integrable distribution on $M$, and $\omega^1,\ldots,\omega^k$ a family of closed differentiable 2-forms on $M$. We say that $(\omega^1,\ldots,\omega^k;V)$ is a k-symplectic structure if $\omega^\alpha|_{V\times V}=0$, for every $\alpha=1,\ldots,k$, and $\bigcap_{\alpha=1}^{k}\ker\omega^\alpha=\{0\}$. Then $(M;\omega^1,\ldots,\omega^k;V)$ is said to be a k-symplectic manifold. If some of the conditions in the above definition are not satisfied then $(M;\omega^1,\ldots,\omega^k;V)$ is called a k-presymplectic manifold and, similarly, $(\omega^1,\ldots,\omega^k;V)$ a k-presymplectic structure.

A simple example of a k-presymplectic manifold is a submanifold of a k-symplectic manifold, where the pull-back of the 2-forms of the k-symplectic structure by the embedding map yields the 2-forms of the k-presymplectic structure in the submanifold. Furthermore, the canonical model for k-symplectic manifolds is the k-cotangent bundle of a manifold $Q$, which is the Whitney sum of $k$ copies of the cotangent bundle $T^*Q$; that is, $(T^1_k)^*Q=T^*Q\oplus\overset{k}{\cdots}\oplus T^*Q$. If $\omega_0$ denotes the canonical 2-form in the cotangent bundle and $\pi^\alpha\colon(T^1_k)^*Q\to T^*Q$ are the canonical projections, we can construct a canonical k-symplectic structure in $(T^1_k)^*Q$ by taking $\omega^\alpha:=(\pi^\alpha)^*\omega_0$, and being $V$ the vertical subbundle of $(T^1_k)^*Q$ for the natural projection $(T^1_k)^*Q\to Q$.

Given an $n$-dimensional manifold $Q$, the k-tangent bundle of $Q$ (which is also called the tangent bundle of $k^1$-velocities of $Q$) is defined as the Whitney sum of $k$ copies of the tangent bundle $TQ$; that is, $T^1_kQ=TQ\oplus\overset{k}{\cdots}\oplus TQ$, with induced natural coordinates $(q^i,v^i_\alpha)$. Given a map $F\colon Q\to Q'$, its canonical prolongation is the map $T^1_kF\colon T^1_kQ\to T^1_kQ'$ defined by $T^1_kF(v_1,\ldots,v_k)=(\mathrm{T}F(v_1),\ldots,\mathrm{T}F(v_k))$. In the same way, the prolongation of a map $\phi\colon\mathbb{R}^k\to Q$ is the map $\phi^{(1)}\colon\mathbb{R}^k\to T^1_kQ$ given by $\phi^{(1)}(t)=\big(\phi(t);\mathrm{T}_t\phi(\frac{\partial}{\partial t^1}\big|_t),\ldots,\mathrm{T}_t\phi(\frac{\partial}{\partial t^k}\big|_t)\big)$, where $t^1,\ldots,t^k$ are the natural coordinates of $\mathbb{R}^k$. Then a map $\psi\colon\mathbb{R}^k\to T^1_kQ$ is said to be holonomic if $\psi=\phi^{(1)}$, for $\phi=\tau\circ\psi$ (where $\tau\colon T^1_kQ\to Q$ is the natural projection).
The vertical endomorphisms of $T^1_kQ$ are the $(1,1)$-tensor fields $J^\alpha$ on $T^1_kQ$ given by the vertical $\alpha$-lifts, $J^\alpha(Z_{v_q}):=\big(\mathrm{T}_{v_q}\tau(Z_{v_q})\big)^{V_\alpha}_{v_q}$. In natural coordinates, they are given by $J^\alpha=\dfrac{\partial}{\partial v^i_\alpha}\otimes\mathrm{d}q^i$. We also define the vector fields $\Delta_\alpha$ by $(\Delta_\alpha)_{v_q}:=(v_\alpha)^{V_\alpha}_{v_q}$, so that $\Delta=\Delta_1+\cdots+\Delta_k$. The Liouville vector field $\Delta$ is the generator of dilatations, i.e., its flow is given by the curves $(q;tv_1,\ldots,tv_k)$. In natural coordinates, we have $\Delta=v^i_\alpha\dfrac{\partial}{\partial v^i_\alpha}$.

Definition 2. A k-vector field in $Q$ is a section $\mathbf{X}\colon Q\to T^1_kQ$ of the canonical projection $\tau\colon T^1_kQ\to Q$. Equivalently, a k-vector field is determined by $k$ vector fields $X_1,\ldots,X_k\in\mathfrak{X}(Q)$ defined by $X_\alpha=\mathrm{pr}_\alpha\circ\mathbf{X}$. Thus we write $\mathbf{X}=(X_1,\ldots,X_k)$. The set of k-vector fields on $Q$ is denoted by $\mathfrak{X}^k(Q)$. An integral section of $\mathbf{X}$ is a map $\phi\colon U\subset\mathbb{R}^k\to Q$ such that $\phi^{(1)}=\mathbf{X}\circ\phi$; in coordinates, it satisfies the system of differential equations $\dfrac{\partial\phi^i}{\partial t^\alpha}=(X_\alpha)^i\circ\phi$. (Notice that, despite their name, in the k-symplectic formulation these maps are not sections of any bundle projection.)

Definition 3. A SOPDE is a k-vector field $\mathbf{X}=(X_1,\ldots,X_k)\in\mathfrak{X}^k(T^1_kQ)$ such that $J^\alpha(X_\alpha)=\Delta_\alpha$, for every $\alpha$ (no sum). In natural coordinates, a SOPDE $\mathbf{X}=(X_1,\ldots,X_k)$ has the expression $X_\alpha=v^i_\alpha\dfrac{\partial}{\partial q^i}+(X_\alpha)^i_\beta\dfrac{\partial}{\partial v^i_\beta}$. Note that $\mathbf{X}$ is a SOPDE if, and only if, the coefficient of $\dfrac{\partial}{\partial q^i}$ in $X_\alpha$ is $v^i_\alpha$. If $\mathbf{X}=(X_1,\ldots,X_k)$ is an integrable SOPDE, then a map $\psi\colon U\subset\mathbb{R}^k\to T^1_kQ$ is an integral section of $\mathbf{X}$ if, and only if, $\psi=\phi^{(1)}$, with $\phi=\tau\circ\psi$, and its components are solution to the system of second order partial differential equations $\dfrac{\partial^2\phi^i}{\partial t^\alpha\partial t^\beta}=(X_\alpha)^i_\beta\circ\phi^{(1)}$; thus $\psi$ is a holonomic section. Observe that if $\mathbf{X}$ is integrable, from this system we deduce that $(X_\alpha)^i_\beta=(X_\beta)^i_\alpha$. This fact justifies the name SOPDE for these kinds of k-vector fields; although the second order equations only really refer to their integrable sections and then to integrable k-vector fields. When these k-vector fields satisfy the condition of Definition 3 but they are not integrable, they are also called semiholonomic k-vector fields, and SOPDE k-vector fields which are integrable are called holonomic k-vector fields.

2.2 k-symplectic Lagrangian field theory

Let $L\in C^\infty(T^1_kQ)$ be a Lagrangian function. From $L$ we construct the Cartan forms $\theta^\alpha_L:=\mathrm{d}L\circ J^\alpha$ and $\omega^\alpha_L:=-\mathrm{d}\theta^\alpha_L$, and the Lagrangian energy $E_L:=\Delta(L)-L$. The function $L$ is regular if, and only if, the Hessian matrix $\Big(\dfrac{\partial^2L}{\partial v^i_\alpha\partial v^j_\beta}\Big)$ is everywhere nondegenerate or, equivalently, $(\omega^1_L,\ldots,\omega^k_L;V)$ is a k-symplectic structure on $T^1_kQ$.

Definition 5. A Lagrangian function is regular if the above equivalent conditions hold. Then $(T^1_kQ;\omega^1_L,\ldots,\omega^k_L;V)$ is a k-symplectic manifold.

The Lagrangian field equations are established as follows: find $\mathbf{X}=(X_1,\ldots,X_k)\in\mathfrak{X}^k(T^1_kQ)$ such that $\sum_{\alpha=1}^{k}i(X_\alpha)\,\omega^\alpha_L=\mathrm{d}E_L$, and $\mathbf{X}$ is called a Lagrangian k-vector field.
In addition, if X is a SOPDE then is the k-symplectic Euler-Lagrange equation and, if X is integrable, it is called an Euler-Lagrange k-vector field. A k-vector field X = (X 1,..., X k ) ∈ X k (T 1 k Q) is locally given by Using the local expression of X, L, and E L, as If L is a regular Lagrangian or X is required to be a SOPDE, then the above equations are equivalent to Theorem 1. Let L ∈ C ∞ (T 1 k Q). Then: (i) If L is regular, then there exist X ∈ X k (T 1 k Q) solutions to to the Lagrangian equations and they are SOPDE. an Euler-Lagrange vector field and is an integral section of X, then is a solution to the Euler-Lagrange field equations Thus, if L is regular, the existence of solutions to the Lagrangian equations is assured, although they are neither unique, nor necessarily integrable. Furthermore, they are SOPDEs and, if X is integrable, their integral sections are holonomic and they are solutions to the Euler-Lagrange equations. If L is a singular Lagrangian, then the SOPDE condition (X ) i = v i is not obtained from the Lagrangian equations and it must be imposed as an additional condition in order to obtain holonomic solutions to the field equations (as it is required by the variational principles). As it is usual in mechanics, we will restrict our study to a particular family of singular Lagrangians. First the Legendre transformation induced by L is the map FL : Observe that FL = and then this is a fiber-preserving map. In natural coordinates we have This assumption assures that certain regularity conditions hold which, in particular, guarantee the existence of a Hamiltonian counterpart for these kinds of systems. In fact; let P 0 = FL(T 1 k Q) with the natural embedding j 0 : which is FL-related with the Lagrangian system (T 1 k Q, L) (see for the details). 
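The displayed formulas around Theorem 1 were lost in extraction. For reference, and following the standard conventions of the k-symplectic literature (a reconstruction, not text recovered from this paper), for a SOPDE solution the Lagrangian equations read in natural coordinates, and the integral sections $\phi$ of an integrable SOPDE solution then satisfy the Euler-Lagrange field equations:

```latex
\frac{\partial^2 L}{\partial q^j\,\partial v^i_\alpha}\,v^j_\alpha
+\frac{\partial^2 L}{\partial v^j_\beta\,\partial v^i_\alpha}\,(X_\alpha)^j_\beta
=\frac{\partial L}{\partial q^i}\,,
\qquad
\sum_{\alpha=1}^{k}\frac{\partial}{\partial t^\alpha}
\Big(\frac{\partial L}{\partial v^i_\alpha}\circ\phi^{(1)}\Big)
=\frac{\partial L}{\partial q^i}\circ\phi^{(1)}\,,
```

where the summation convention applies to the repeated indices $\alpha$, $\beta$ and $j$ in the first system.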
Statement of the problem and previous considerations The problem we want to solve consists in finding a submanifold S of T 1 k Q and a k-vector field X = (X 1,..., X k ) ∈ X k (T 1 k Q) such that, on the points of S, 2. X is a SOPDE, and 3. X is tangent to S. In addition, X should be an integrable k-vector field in C. Nevertheless, the integrability problem will not be considered in depth in this work. Notation: The set of k-vector fields on T 1 k Q that are solutions to the Lagrangian equation on a subset In the following, we consider the map Then, for S ⊆ X k (T 1 k Q) and L, the annihilator of S is defined as The following properties hold: Ker L. Ker L, and then we conclude that Conversely, for every, from the coordinate expression we see that a vector field Z = Compatibility conditions: First generation constraints We start by imposing the compatibility condition. The subset of T 1 k Q where the Lagrangian equation has a solution is and it is assumed to be a closed submanifold of T 1 k Q. Then, using Lemma 1 one can prove that : are called first generation k-presymplectic or dynamical constraints. They define the submanifold P 1. The name of these constraints refers to the fact that their origin has nothing to do with the secondorder condition, but only with the compatibility of Lagrangian equations in general. As it happens in mechanics, the following property characterizes these kinds of constraints: Proof. First notice that, as FL is a submersion (and hence FL 0 is too), for every Z 0 ∈ ∩ Ker 0 ⊂ X((T 1 k ) * Q) there exist Z ∈ X(T 1 k Q) such that FL 0 * Z = Z 0. In addition, from we have that i(Z) L = FL * 0 = 0, for every ; therefore Z ∈ ∩ Ker L and, furthermore, if Z, Z ∈ ∩ Ker L are such that FL 0 * Z = FL 0 * Z = Z 0, then Z − Z ∈ Ker FL *. As Z ∈ ∩ Ker L is FL-projectable then ∈ Ker FL *, for every V ∈ Ker FL *. The necessary and sufficient condition for a function 1 ) = 0, for every V ∈ Ker FL *. 
Therefore, taking a local set of generators of k =1 Ker L made of these FL-projectable vector fields, for the corresponding base of dynamical constraints (Z) 1 we obtain So, we have found a submanifold P 1 → T 1 k Q here we can find solutions to the Lagrangian equation, but they are not SOPDEs necessarily. Our aim now is to find the largest subset of P 1 such that some of the solutions in X k L (T 1 k Q)| P 1 can be chosen to be SOPDEs. Then, if X ∈ X k L (T 1 k Q)| P 1 is a solution on P 1, we define which is assumed to be a closed submanifold of P 1. It is clear by its definition that S 1 is the maximal subset of P 1 where we can find solutions to the Lagrangian equation that are also SOPDEs. In order to prove this theorem, we need the following lemma: Proof Proof of Theorem 2. Let S := x ∈ P 1 i(Z)L(Y) (x) = 0, for every Z ∈ M. It is clear that this definition does not depend on the choice of solution X since any two solutions differ by an element of Ker L. One can indeed prove that it is also independent of the choice of Y: in fact, let be another k-vector field such that X + is a SOPDE; then Y − is a -vertical k-vector field since it is the difference of two SOPDEs, and then, for every Z ∈ M, Ker L such that X + Y is a SOPDE and is a solution to the Lagrangian equation at x. Hence L (Y) x = 0 and, in particular, (i(Z) L (Y))(x) = 0, for every Z ∈ M. So, x ∈ S and S 1 ⊆ S. x and, by Lemma 3, there exists a -vertical k-vector Taking any V ∈ X k (T 1 k Q) such that V x = v we get that X + Y + V solves the Lagrangian equation and is a SOPDE at x. So x ∈ S 1 and S ⊆ S 1. Proof. It is clear that adding an element of Ker V L to a SOPDE solution to the Lagrangian equation we obtain another SOPDE solution. Conversely, if, ∈ X k L,S (T 1 k Q)| S 1 we have so X −X is vertical and belongs to Ker L. We have the following equivalent characterization of the submanifold S 1 : Proposition 5. For every SOPDE we have that is, we recover the dynamical constraints. 
On the other hand, if In the last equality we have used that the second term vanishes in the points where the dynamical constraints vanish, i.e. P 1 ; and we are precisely looking for the set where all the constraints vanish. are called first generation SOPDE or non-dynamical constraints. They define S 1 → P 1. This name is justified because these constraints appear as a consequence of demanding the SOPDE condition. They are characterized by the following condition: Proposition 6. First generation SOPDE constraints are not FL-projectable. Proof. As FL is a submersion, following the same reasoning than at the begining of the proof of Proposition 3, first we prove that, from every X 0 ∈ X k (P 0 ) solution to the Hamiltonian equations, we construct a FL-projectable solution X ∈ X k (T 1 k Q) to the Lagrangian equations. In the same way, we can obtain a set of vector fields Z ∈ M ⊥ which generate (locally) M ⊥ and are FL-projectable too. Therefore, taking this solution X, the part i(Z)L(X) in the expression gives a FL-projectable function. However, for i(Z)L() we have that, for every V ∈ Ker FL *, This means that these constraints remove degrees of freedom on the fibers of the foliation defined in T 1 k Q by the Legendre map FL and, as a consequence, FL(P 1 ) = FL(S 1 ): Tangency conditions: Second and further generation constraints In general, none of the elements of X k L,S (T 1 k Q)| S 1 may be tangent to S 1. Thus, bearing in mind Proposition 4, we have to look for the subset of S 1 where, given any solution ∈ X k L,S (T 1 k Q)| S 1, we can find an element V of Ker V L such that it renders + V tangent to S 1. At this point, the situation for k-presymplectic Lagrangian systems differs from the one in Lagrangian mechanics (k = 1). The main difference is that now, to every Euler-Lagrange k-vector field we can add elements of Ker V L ; but (Ker 1 L,..., Ker k L ) ⊂ Ker L, and this means that, if V = (V ), then V ∈ Ker L necessarily. 
Despite this, the origin and structure of new generation of constraints is similar in mechanics and in field theories, as we will see below. The submanifold S 1 is defined as the zero set of constraint functions, and hence we can easily impose tangency conditions. So, for ∈ X k L,S (T 1 k Q)| S 1, bearing in mind Proposition 4, we define and we assume that S 2 is a closed submanifold of S 1. Lemma 4. For a given Proof. We can take a finite set of non-dynamical constraints { }. As it has been pointed out, these constraints remove degrees of freedom on the leaves of the distribution generated by Ker FL * ; that is, in the vertical leaves of T 1 k Q. Then, in a chart of natural coordinates, the matrix of the linear system ( ) (which consists in partial derivatives of the independent constraints with respect to the coordinates in the vertical fibres) has maximal rank and, hence, the system is compatible, at least locally. Then, from the local solutions we can construct global solutions using partitions of unity. (For the case of mechanics, see also ). From this last result we obtain that, if new constraints appear, out of all the conditions that define S 2 → S 1 the only new constraints arise form the conditions Now, recall the construction made in in which, to any Lagrangian k-vector field X on P 1, we added an element of Ker L so as to render it a SOPDE on S 1. Thus, we can split = X + Y + V, with Y ∈ Ker L and V ∈ Ker V L, and such that X + V is a Lagrangian k-vector field (which can be taken to be FL-projectable). Then, we have two different situations: are called second generation k-presymplectic or dynamical constraints. As can be seen, these new constraints would arise from demanding the tangency condition of the Lagrangian k-vector field X + V on the submanifold P 1 and are independent on the SOPDE condition. 
Thus, together with the first generation dynamical constraints (Z) 1, they define a submanifold P 2 → P 1 where there exist Lagrangian k-vector fields (solutions to the Lagrangian equations, not necessarily SOPDE ), which are tangent to P 1 on the points of P 2, but that are generally not tangent to P 2. Furthermore, as for the first generation dynamical constraints, we have: Proposition 7. Second generation dynamical constraints can be expressed as FL-projectable functions. Proof. It is immediate since, in the expression, the solution X + V and the constraints can be taken FL-projectable. 1 )| S 1 = 0 could determine (partially or totally) V and/or originate new constraints that appear as a consequence of the SOPDE condition and hence: And, as for the second generation non-dynamical constraints we have: is not FL-projectable. Thus we have that ] ⊥, and we can find Euler-Lagrange k-vector fields that are tangent to S 1 on the points of S 2, but that are generally not tangent to S 2. At this point, we are in a situation equivalent to the one before imposing tangency. We therefore keep repeating the last step; that is, imposing tangency, At every step of the algorithm we repeat the same reasoning and we obtain similar results to those of the previous step. In particular, we find new constraints which, in general, split into two groups: the dynamical and the SOPDE constraints which, as above, are also characterized by the fact of being FL-projectable or non-FL-projectable, respectively. Both of them arise from the tangency condition of the previous dynamical constraints, whereas the tangency condition on the previous SOPDE constraints does not give new constraints. The procedure continues until the algorithm stabilizes, i.e. until S i+1 = S i =: S f. The only interesting case from the physical point of view is when S f is a submanifold of T 1 k Q, which is called the final constraint submanifold.
On it, we can find solutions to the second-order problem for the Lagrangian system (T 1 k Q, L). On the FL-projectability and integrability of solutions A remaining problem concerns the FL-projectability of the SOPDE solutions found on the final constraint submanifold S f of the algorithm. In fact, a necessary condition for the existence of FL-projectable SOPDE k-vector fields which are solutions to the Lagrangian equations on the final constraint submanifold S f is that S f only contains one point in every fibre of the foliation defined by Ker FL * (i.e. the fibres of the Legendre map). In fact, suppose that this condition does not hold and there are two points x A, x B ∈ S f in the same fibre of the foliation, whose coordinates are In conclusion, the submanifold of S f where these FL-projectable SOPDE solutions exist must be diffeomorphic to the quotient S f /Ker FL *. (This problem was studied in detail for the case of mechanics, in ). As a final remark and in order to solve the problem completely, we want the SOPDE k-vector fields on S f to be integrable. In general, the existence of such k-vector fields is not assured (even if the Lagrangian is regular). Thus, if X ∈ X k L,S (T 1 k Q)| S f with X = (X 1,..., X k ), the necessary and sufficient condition for X to be integrable is [X α, X β ]| S f = 0. In most cases, these conditions lead to some relations among the remaining arbitrary coefficients of the family of solutions; but, in some cases, they can originate new constraints that define a new submanifold S f → S f. In this case, the tangency of the integrable family of solutions has to be checked and the algorithm restarts once again. The Einstein-Palatini model of General Relativity One of the most interesting singular classical field theories in physics which can be described using the k-(pre)symplectic formulation is the Einstein-Palatini model of General Relativity, also known as the metric-affine model. As we will see, it is described by an affine Lagrangian.
Then, in order to study this model using the constraint algorithm, it is very relevant to apply it first to the generic case of k-presymplectic Lagrangian systems described by affine Lagrangians. Affine Lagrangians An affine Lagrangian is the sum of two functions: a linear function T 1 k Q → R on the fibers of the bundle T 1 k Q → Q and the pullback to T 1 k Q of another arbitrary function in Q. In natural coordinates, (q i, v i ), it has the shape and then we have
D. Adame-Carrillo, J. Gaset, N. Román-Roy, k-presymplectic field theories. Einstein-Palatini model.
The behaviour of the system depends on the rank of the matrix M = (M ij ) with M ij = ∂F i /∂q j − ∂F j /∂q i, which is assumed to be constant (otherwise the analysis must be done on each one of the subsets of T 1 k Q where it is constant). Now, we follow the steps of the algorithm. First-generation dynamical constraints: The algorithm tells us that If the rank of M is maximal; i.e. rank M = n, there are no dynamical constraints and the equations give n of the coefficients (X ) i as functions of the remaining ones. In fact, in this case we have hence, for every Z ∈ ⊥, the function i(Z)dE L vanishes everywhere on T 1 k Q because Z are -vertical vector fields and dE L is a -horizontal form; then there are no dynamical constraints. Otherwise, dynamical constraints may arise: Observe that these constraints are obtained also directly from the equation. First-generation non-dynamical constraints: Following the algorithm, for any Lagrangian k-vector field X and any Y ∈ X k (T 1 k Q) such that X + Y is a SOPDE, we have For affine Lagrangians, as X kV (T 1 k Q) ⊆ Ker L, we have M = X(T 1 k Q). Since all vector fields are in M, the functions defining S 1 in P 1 are obtained simply by imposing L (Y) = 0.
For simplicity we can take Then the constraints read As X satisfies, we have the following set of equations and, depending on the rank of M, these equations give new constraints or not; in particular: For 0 < rank M ≤ n, these equations are not satisfied in the whole P 1 and they give SOPDE first generation constraints that define the submanifold S 1 → P 1. The number of independent constraints is fixed by rank M. For M ij = 0 (rank M = 0), bearing in mind, the equalities hold everywhere in P 1 ; so no SOPDE constraints appear in this case and S 1 = P 1. Notice that these are just the field equations for SOPDE k-vector fields which, when they are integrable (i.e., holonomic), are the Euler-Lagrange field equations. This means that, for affine Lagrangians, the field equations are recovered as constraints of the theory. This is in accordance with a similar result in . Tangency conditions and further generation of constraints: At this point, we have SOPDE k-vector fields = ( ) ∈ X k (T 1 k Q) which are Euler-Lagrange k-vector fields on S 1. Now, we impose tangency and thus, for The tangency conditions on the first-generation dynamical constraints and, if these conditions do not hold on S 1, they give new constraints that define the submanifold S 2 → S 1. When 0 < rank M = r ≤ n there are also first-generation non-dynamical constraints k 1 and the tangency conditions for them lead to the equations As the number of independent first-generation constraints is equal to r = rank M, this system is compatible and thus the tangency conditions determine r of the coefficients ( ) i as functions of the remaining ones, and no new constraints appear. In the case that new constraints have arisen, we continue demanding the tangency of the solutions until the algorithm stabilizes. An academical example Consider a manifold Q with local coordinates (q 1, q 2 ) and its 2-tangent bundle T 1 2 Q with local coordinates (q 1,.
We consider the Lagrangian For this Lagrangian F 2 1 = F 1 2 ≡ 0, F 1 1 = q 2, F 2 2 = −q 1, and G = q 1 q 2. We get We also have Ker L = X kV (T 1 2 Q). From the above general discussion for affine Lagrangians we know that no dynamical constraints appear. Indeed, the Lagrangian equation has solutions X = (X 1, X 2 ) ∈ X 2 (T 1 2 Q) everywhere in T 1 2 Q given by Equivalently, the algorithm tells us that the first generation dynamical constraints come from i( Ker L ; but these equalities hold trivially because Z are -vertical vector fields and dE L is a -horizontal form. However, there are SOPDE constraints. In fact, from, and we get the constraints which define the submanifold S 1. We get no conditions on the coefficients of the solutions, and hence we have that any SOPDE is an Euler-Lagrange 2-vector field on S 1. Let us now impose tangency. According to the general discussion, it only gives conditions on the coefficients. Summing up, we can find Euler-Lagrange 2-vector fields only on , where A, B, C, D are free functions. If we want this family of solutions to be integrable we must impose and this equation leads to a system of partial differential equations on the functions A, B, C, D. The Einstein-Palatini model The Einstein-Palatini model is a first-order singular field theory. A multisymplectic formulation of the model has been developed in several works (see, for instance ). In particular, in , the constraints arising from the application of the constraint algorithm in the (pre)multisymplectic framework have been computed explicitly in coordinates. They have a diverse origin and characteristics, which makes this system an interesting test for the theory developed in this article. Moreover, we provide an intrinsic characterization of the constraints.
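Assuming the usual k-symplectic conventions (the index placement on F is an assumption on our part), the affine Lagrangian, the matrix M, and the Lagrangian of the academical example with the components listed above can be written as:

```latex
L(q^i, v^i_\alpha) = F^\alpha_i(q)\, v^i_\alpha + G(q),
\qquad
M_{ij} = \frac{\partial F_i}{\partial q^j} - \frac{\partial F_j}{\partial q^i},
\qquad
L_{\mathrm{example}} = q^2\, v^1_1 - q^1\, v^2_2 + q^1 q^2 .
```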
The configuration bundle for this system is the bundle : E → M, where M is a connected orientable 4-dimensional manifold representing space-time, whose volume form is denoted ∈ Ω 4 (M ), and E = M C(LM ), where is the manifold of Lorentzian metrics on M, with signature (−, +, +, +), and C(LM ) is the bundle of connections on M ; that is, linear connections in TM. Integrability conditions: For completeness, the integrability conditions for the Einstein-Palatini model are obtained by imposing that = 0 (on S f ), and they are These are new integrability constraints that define a new submanifold S f → S f where there are integral sections which are solutions to the Einstein equations. It can be checked that the tangency condition of holds on S f. (See for the details). Conclusions and outlook The first goal of this work has been to solve the second-order problem for singular field theories, generalizing the constraint algorithm of (for singular Lagrangian mechanical systems) to k-presymplectic Lagrangian systems and completing the results of. In particular, given a k-symplectic Lagrangian system (T 1 k Q; L), we have developed an algorithm that produces the maximal submanifold of T 1 k Q on which the Lagrangian equation has a solution and the solution can be chosen to be a SOPDE. The algorithm works as follows: First, we characterize the submanifold P 1 on which the Lagrangian equation has a solution. The constraints defining P 1 are called k-presymplectic or dynamical constraints, they arise from demanding the compatibility condition, and can be chosen to be FL-projectable functions. Second, the algorithm gives the submanifold S 1 of P 1 on which the Lagrangian k-vector fields can be chosen to be SOPDEs. This produces new constraints that are called SOPDE or non-dynamical constraints. They arise from demanding the SOPDE condition and are not FL-projectable.
Next, the stability or tangency condition is imposed, looking for the submanifold S 2 of S 1 where SOPDE solutions can be chosen to be tangent to S 1. Then, the tangency condition on the non-dynamical constraints gives no new constraints, but on the dynamical constraints it can produce new constraints that can be classified as dynamical or non-dynamical, depending on their nature: if they are related to demanding solutions to be SOPDE k-vector fields then they are non-dynamical, and they are dynamical otherwise. This last step is repeated until the algorithm stabilizes; that is, in the most favourable cases, until we have a submanifold S f on which the SOPDE k-vector fields that are solutions to the k-presymplectic Lagrangian equations are tangent to it. In each step, results similar to the previous ones are obtained. In this way, the behaviour of the constraint algorithm for k-presymplectic Lagrangian systems in field theory is exactly the same as for presymplectic Lagrangian systems in mechanics. Finally, the FL-projectability and the integrability of these SOPDE k-vector fields are additional conditions to be demanded that can produce new constraints. As a very interesting case, we have applied the algorithm to analyze the Einstein-Palatini or metric-affine model of General Relativity, which is very suitable to be studied using the k-symplectic formulation; and we have compared the results achieved with those obtained in the multisymplectic analysis of this model. As it is an affine Lagrangian, we have previously analyzed the general case of affine Lagrangians in field theory and, as a particular case, we have described an academical example. As further research, this constraint analysis can be implemented to analyze other extended models of General Relativity, such as Lovelock, f (R) or f (T ) theories. It would also be interesting to do a similar study of the second-order problem for the multisymplectic formulation of classical field theories in general.
The US Government Accountability Office (GAO) revealed in a report on Thursday that the Pentagon is unaware whether $2 billion worth of military equipment sent to Iraq has ever reached its final destination.
WASHINGTON (Sputnik) — The US Defense Department cannot demonstrate if materials purchased through the $2 billion Iraq Train-and-Equip Fund (ITEF) reached their intended destinations, the Government Accountability Office (GAO) said in a new report after a months-long probe ordered by Congress.
"The United States has provided over $2 billion in equipment to Iraq's security forces through the Iraq Train and Equip Fund," the report stated on Thursday. "However, the Department of Defense does not collect timely and accurate transportation information about the equipment purchased through the fund. As a result, DOD can't demonstrate that this equipment reached its intended destinations in Iraq."
On Wednesday, Amnesty International, citing a 2016 declassified Inspector General report, said the Defense Department failed to track $1 billion in equipment in Iraq and Kuwait. A Pentagon spokesperson told Sputnik earlier on Thursday, however, that Amnesty's allegations were false.
Congress asked the GAO, the report noted, to review the Pentagon’s "accountability of ITEF-funded equipment." The review and assessment of Pentagon systems and procedures was conducted from September 2016 to May 2017, according to the report.
The GAO recommended that the Defense Department improve its systems and procedures such as how it records key transportation data in order to better track this equipment.
Defense Department officials, the report added, attributed the lack of key transportation data in its security cooperation management reporting system to potential interoperability and data reporting issues.
The GAO is the audit and investigative arm of Congress, whose mission is to improve the performance and accountability of the federal government for Americans by examining the use of public funds. |
Npower Championship side Brighton & Hove Albion are reportedly interested in signing Arsenal’s Spanish defender Ignasi Miquel on loan for next season.
The 19 year old, who joined the club from Cornella in 2008, became a familiar figure in and around the first-team squad last season, making nine senior appearances in total, including four in the Premier League, to add to his two FA Cup appearances the previous season.
Miquel has also regularly captained Arsenal’s Reserves under the stewardship of Neil Banfield, but has yet to experience a loan spell. Such a move would be seen as highly beneficial for his development, particularly because, despite his progress in recent years, he remains susceptible to long balls and his decision-making is still haphazard at times.
In addition, with Kyle Bartley back from Rangers and looking to stake a claim for a place in the first-team squad next season, Miquel would be unlikely to receive much game time at Arsenal barring an injury crisis of the proportions of the one that afflicted the club last season. Despite being predominantly a centre-back, Miquel also filled in at left-back for the Gunners last season, and it is understood that Brighton manager Gus Poyet would be interested in using him in that position if he secured the youngster’s services for next season.
############################################################################################
# Author: <NAME> #
# File Name: insta_story_scraper.py #
# Creation Date: January 11, 2021 #
# Source Language: Python #
# Repository: https://github.com/aahouzi/Instagram-Scraper-2021.git #
# --- Code Description --- #
# Implementation of an Instagram story scraper #
############################################################################################
################################################################################
# Packages #
################################################################################
from webdriver_manager.chrome import ChromeDriverManager
from selenium.common.exceptions import NoSuchElementException
from selenium import webdriver
import pandas as pd
from termcolor import colored
import os, time
################################################################################
# Main Code #
################################################################################
# Ask the user for the username they want to scrape stories from
account = input(colored("\n[INFO]: Please type the username you want to scrape stories from: ", "yellow"))
story_link = "https://www.instagram.com/stories/{}/".format(account)
login_page = "https://www.instagram.com/accounts/login/"
project_direc = os.getcwd()[:-8]
# Specify Chrome driver options
options = webdriver.ChromeOptions()
options.add_argument("--window-size=1920,1080")
options.add_argument('--ignore-certificate-errors')
options.add_argument('--allow-running-insecure-content')
options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36")
options.add_argument('--headless')
driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
driver.get(login_page)
time.sleep(4)
# Accept the website cookies
driver.find_element_by_xpath("/html/body/div[4]/div/div/button[1]").click()
# Login with a random account since we can't scrape stories without being logged in
driver.find_element_by_name("username").send_keys("testtest7530")
driver.find_element_by_name("password").send_keys("testtest753")
driver.find_element_by_xpath("//*[@id='loginForm']/div/div[3]/button/div").click()
time.sleep(6)
print(colored("\n[SUCCESS]: Logged into the website. \n", "green"))
# Have access to the story link
driver.get(story_link)
time.sleep(2)
# Check if there are any stories for the last 24h, if so start scraping all stories
if driver.current_url != story_link:
    print(colored("\n[ERROR]: No stories are available for the last 24h. \n", "red"))
else:
    rows = []
    print(colored("\n[SUCCESS]: Got into the story link. \n", "green"))
    driver.find_element_by_xpath("/html/body/div[1]/section/div[1]/div/section/div/div[1]/div/div/div/div[3]/button").click()
    time.sleep(3)

    while driver.current_url != "https://www.instagram.com/":
        # Collect the link to the video content of the story if it exists, otherwise take the image link
        is_video = True
        try:
            content_link = driver.find_element_by_xpath("//*[@id='react-root']/section/div[1]/div/section"
                                                        "/div/div[1]/div/div/video/source").get_attribute("src")
        except NoSuchElementException:
            content_link = driver.find_element_by_xpath("//*[@id='react-root']/section/div[1]/div/"
                                                        "section/div/div[1]/div/div/img").get_attribute("src")
            is_video = False

        # Get the link of the story
        insta_link = driver.current_url

        # Get the date of the story
        date = driver.find_element_by_xpath("//*[@id='react-root']/section/div[1]/div/"
                                            "section/div/header/div[2]/div[1]/div/div/"
                                            "div/time").get_attribute("datetime")

        # Append all collected information into a row
        rows.append(
            {
                'Instagram URL': insta_link,
                'Content URL': content_link,
                'Date': date,
                'is_video': is_video
            }
        )

        # Click on the next button
        driver.find_element_by_xpath("//*[@id='react-root']/section/div[1]/div/section/div/button[2]").click()
        time.sleep(3)

    # Save scraped data into a dataframe
    df = pd.DataFrame(rows)
    df.to_pickle(os.path.join(project_direc, "collected_data/stories_{}.pkl".format(account)))
    print(colored("\n[SUCCESS]: Scraped all stories for the last 24h. \n", "green"))
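Downstream code can reload the pickle the script writes with `pandas.read_pickle`. A minimal round-trip sketch with a made-up row and a temporary path (the real file lands under `collected_data/`):

```python
import os
import tempfile

import pandas as pd

# Round-trip sketch: the scraper persists its rows with DataFrame.to_pickle,
# so a consumer can reload them like this. The row below is illustrative.
rows = [{'Instagram URL': 'https://www.instagram.com/stories/x/1/',
         'Content URL': 'https://example.com/a.mp4',
         'Date': '2021-01-11T10:00:00.000Z',
         'is_video': True}]
path = os.path.join(tempfile.mkdtemp(), "stories_demo.pkl")
pd.DataFrame(rows).to_pickle(path)

df = pd.read_pickle(path)
print(df['is_video'].tolist())  # → [True]
```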
|
import { sToTime } from '@helper/util';
describe.each([
{ seconds: 1, expected: '01s' },
{ seconds: 79, expected: '01m 19s' },
{ seconds: 3600, expected: '01h 00m 00s' },
{ seconds: 6909559, expected: '1919h 19m 19s' },
])('$seconds seconds to time string', ({ seconds, expected }) => {
it(`returns ${expected}`, () => {
expect(sToTime(seconds)).toBe(expected);
});
});
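The table above fully pins down the formatting rules (two-digit padding, larger units shown only when needed). For illustration, a Python re-implementation inferred from those test cases alone — the real `sToTime` lives in `@helper/util` and may differ:

```python
def s_to_time(seconds: int) -> str:
    """Format seconds as '01s', '01m 19s' or '01h 00m 00s' (inferred behaviour)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    if h:
        return f"{h:02d}h {m:02d}m {s:02d}s"
    if m:
        return f"{m:02d}m {s:02d}s"
    return f"{s:02d}s"

print(s_to_time(6909559))  # → 1919h 19m 19s
```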
|
/*
* Copyright 2019-2021 The Polypheny Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.polypheny.db.adapter.cottontail.algebra;
import org.junit.Assert;
import org.junit.Test;
import org.polypheny.db.adapter.cottontail.algebra.CottontailFilter.AtomicPredicate;
import org.polypheny.db.adapter.cottontail.algebra.CottontailFilter.BooleanPredicate;
import org.polypheny.db.adapter.cottontail.algebra.CottontailFilter.CompoundPredicate;
import org.polypheny.db.adapter.cottontail.algebra.CottontailFilter.CompoundPredicate.Op;
public class CottontailFilterTest {

    @Test
    public void simpleCnfTest() {
        BooleanPredicate testPredicate = new CompoundPredicate( Op.ROOT,
                new CompoundPredicate( Op.NOT,
                        new CompoundPredicate(
                                Op.OR,
                                new AtomicPredicate( null, false ),
                                new AtomicPredicate( null, false )
                        ), null ), null );

        while ( testPredicate.simplify() ) {
            // keep simplifying until a fixed point is reached
        }

        CompoundPredicate result = (CompoundPredicate) ((CompoundPredicate) testPredicate).left;
        Assert.assertEquals( "Highest up predicate should be AND.", Op.AND, result.op );
        Assert.assertEquals( "Inner operations should be negation", Op.NOT, ((CompoundPredicate) result.left).op );
        Assert.assertEquals( "Inner operations should be negation", Op.NOT, ((CompoundPredicate) result.right).op );
    }

}
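The assertions above check one De Morgan step, NOT(a OR b) → (NOT a) AND (NOT b). A toy Python sketch of that rewrite (a hypothetical `Node` type for illustration, not Polypheny's predicate classes):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # 'ATOM', 'NOT', 'AND' or 'OR'; children are None for atoms
    op: str
    left: Optional['Node'] = None
    right: Optional['Node'] = None

def push_not_down(n: Node) -> Node:
    """One De Morgan step: NOT(A OR B) -> (NOT A) AND (NOT B)."""
    if n.op == 'NOT' and n.left is not None and n.left.op == 'OR':
        return Node('AND', Node('NOT', n.left.left), Node('NOT', n.left.right))
    return n

root = Node('NOT', Node('OR', Node('ATOM'), Node('ATOM')))
result = push_not_down(root)
print(result.op, result.left.op, result.right.op)  # → AND NOT NOT
```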
|
MiR-146a rs2910164 polymorphism and head and neck carcinoma risk: a meta-analysis based on 10 case-control studies Two recent meta-analyses have been conducted on the relationship between miR-146a polymorphism (rs2910164) and head and neck cancer (HNC) risk. However, they have yielded conflicting results. Hence, the aim of the present study was to conduct a quantitative updated meta-analysis addressing this subject. Eligible studies up to Sep 2016 were retrieved and screened from the bio-databases and then essential data were extracted for data analysis. Next, subgroup analyses on ethnicity, source of controls, sample size, and genotyping method were also carried out. As a result, a total of 9 publications involving 10 independent case-control studies were included. The overall data indicated a significant association between miR-146a rs2910164 polymorphism and HNC risk. Variant alleles of miR-146a rs2910164 may have a correlation with increased HNC risk. Future well-designed studies containing large sample sizes are needed to verify this result. INTRODUCTION Head and neck carcinoma (HNC) ranks as the sixth most frequent malignancy worldwide, comprising a number of epithelial cancers originating from the oral cavity, nasal cavity, pharynx and larynx. The occurrence of HNC shows a decreasing trend, but its mortality is still high in spite of comprehensive treatment involving radiation, chemotherapy, and surgical modalities. The quality of life of patients can be seriously affected by HNC because of its specific sites, which are associated with speaking and breathing. The etiology of HNC remains largely unclear. In recent years, microRNAs (miRNAs) have attracted much attention. MiRNAs are a series of single-stranded short non-coding RNAs that can inhibit gene expression by directly targeting specific mRNAs. They have been suggested to be involved in cellular processes such as cell proliferation, differentiation and apoptosis.
Thus, aberrant expressions of miRNAs have been indicated to have a correlation with the etiology, diagnosis and development of many cancers. MiRNAs function as either tumor suppressors or oncogenes in HNC. A single nucleotide polymorphism is a variation of DNA sequence, which occurs in a proportion of the population. The variation of miRNAs might interfere with the translation of mRNA at the post-transcriptional level and suppress gene expression, thus leading to abnormal biological metabolism and cellular processes. A popular miRNA, miR-146a, has been suggested to have an association with the development of a variety of disorders. It also functions as an oncogene or a tumor suppressor in various cancers. A polymorphism of miR-146a that is located on chromosome 5q34 with a nucleotide mutation from G to C (rs2910164) has been reported to be related to a number of cancers. Previously, a growing body of published literature has been devoted to the relationship between miR-146a rs2910164 polymorphism and HNC risk. Nevertheless, the results were conflicting. Hence, in 2015, two meta-analyses addressing this topic were published. However, interestingly, the results of these two meta-analyses were conflicting. On the basis of this situation, we aimed to conduct an updated meta-analysis including recently published data up to Sep 2016 to derive a more precise estimation of the relationship. All publications were written in English, except for one in Chinese. The relevant information of the selected papers, such as the first author and the number and characteristics of cases and controls for each study, is listed in Table 1. There were 3 groups of Caucasians and 7 groups of Asians in the present analysis. The distribution of the miR-146a rs2910164 genotypes and the genotyping methods of the included studies are presented in Table 2. The genetic distributions of the control groups in all studies were consistent with the HWE.
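The HWE consistency check mentioned above compares observed genotype counts in controls with the counts expected under equilibrium. The paper used Fisher's exact test; this minimal sketch only computes the expected counts, with illustrative numbers:

```python
def hwe_expected(n_hom_major, n_het, n_hom_minor):
    """Expected genotype counts under Hardy-Weinberg equilibrium."""
    n = n_hom_major + n_het + n_hom_minor
    p = (2 * n_hom_major + n_het) / (2 * n)   # major-allele frequency
    q = 1 - p
    return p * p * n, 2 * p * q * n, q * q * n

# A perfectly in-equilibrium toy sample: observed equals expected
print(hwe_expected(25, 50, 25))  # → (25.0, 50.0, 25.0)
```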
Meta-analysis results The main results of the meta-analysis are listed in Table 3. The heterogeneity in the allelic contrast (P=0.000), dominant model (P=0.007) and recessive model (P=0.000) was significant, respectively. Therefore, the random-effect models were used for calculation in these models. For the overall data, a total of ten case-control studies containing 4399 cases and 8777 controls were involved. The pooled ORs for the recessive model (OR=1.16; 95% CI = 0.94-1.44) failed to show an association. Nevertheless, borderline increased cancer risks could be shown in both the allelic contrast (OR=1.14; 95% CI = 1.00-1.31) and dominant model (OR=1.21; 95% CI = 1.02-1.43), implying that the variant C allele of miR-146a rs2910164 may have a correlation with increased risk of HNC (Figure 2). To evaluate the effect of confounding factors on the results, we conducted subgroup analyses according to ethnicity, source of controls, sample size, and genotyping method, respectively. However, no association could be found in any of the subgroups. Sensitivity analysis and bias diagnostics To test the stability of the overall results, we first changed the effect models and found that the results were not statistically altered. Then, we deleted one study from the database each time and repeated the analyses. The data showed that the overall results were not statistically changed in the above analysis, indicating that the overall results of the present study were stable. Publication bias was an unavoidable problem in the meta-analysis. Funnel plots were generated and their symmetries were further evaluated by Egger's linear regression tests. As expected, the data showed that the plots for the three genetic models were symmetrical (C vs. G: t = 0.82, P = 0.437; dominant model: t = 1.41, P = 0.196; recessive model: t = 0.82, P = 0.437), suggesting that the publication bias was not evident (Figure 3). DISCUSSION MiR-146a rs2910164 polymorphism may play different roles in the susceptibility to different cancers.
For HNC, two meta-analyses published in 2015 had concerned the relationship between miR-146a rs2910164 polymorphism and HNC risk. However, the results were inconclusive. The study by Chen et al. failed to show an association between miR-146a rs2910164 polymorphism and HNC risk, while conversely, the study by Niu et al. showed that rs2910164 of miR-146a confers HNC risk among Caucasians. It is worthy of note that several problems might exist in the above two meta-analyses. In the paper by Chen et al., a study that was not case-control designed had been regarded as a case-control study and was thus selected. This erroneous processing might lower the credibility of the overall results. For the meta-analysis by Niu et al., the data of thyroid cancer had been combined as HNC in the database. However, both the pathology and biological behavior of thyroid cancers are obviously different from those of other HNCs. Thus, inclusion of thyroid cancers in HNC series for this type of meta-analysis might reduce the power to evaluate the association. In the present study, we found that miR-146a rs2910164 polymorphism might have a relationship with HNC risk. However, the association could not be observed when the data were stratified by ethnicity, inconsistent with the two published meta-analyses. This might be due to ethnic variation or the limited number of included studies. The mechanisms by which miR-146a rs2910164 polymorphism increases HNC risk are not fully understood. Previous reports have shown that the polymorphism might result in down-regulation of miR-146a expression, which may have a correlation with distant metastasis of HNC. In some cancers, miR-146a acts as a tumor suppressor. For instance, in a study on breast cancer, activated miR-146a may attenuate epidermal growth factor receptor expression, thus influencing disease progression and clinical prognosis. In addition, miR-146a can inhibit epithelial-mesenchymal transition and thus suppress lung cancer progression.
Therefore, inhibition of miR-146a expression may have an association with cancer risk. This might help clarify the reason why miR-146a rs2910164 polymorphism has a relation to increased HNC risk. Several limitations need to be addressed in this meta-analysis. First, only English and Chinese were used in the search strategy. Thus, articles written in other languages were missed in the searching process, possibly leading to selection bias. Second, only Asian and Caucasian populations were involved in the present study. Other ethnicities, such as Africans, were not included. Since gene variations might be different in different ethnicities, future studies on various ethnicities are needed. In conclusion, the overall data reveal a significant association of miR-146a rs2910164 polymorphism with HNC risk; nevertheless, future studies on different ethnicities with large sample sizes are needed to obtain a more convincing result. Ethics statement Ethical approval is not necessary for the present meta-analysis. Literature search strategy A systematic search was conducted in databases such as Medline, EMBASE, Web of Science and Chinese National Knowledge Infrastructure (CNKI). Published papers up to Sep 2016 were covered. The following keywords were used: miRNA, miR-146a, oral, mouth, larynx, pharynx, nasopharynx, head and neck, neoplasm, cancer, variation, and polymorphism. All potential studies were retrieved and the bibliographies were further checked for possible publications whenever necessary. Inclusion criteria For the literature inclusion, the following criteria were used: papers should concern miR-146a rs2910164 polymorphism and HNC risk; studies should be case-control designed; papers should state adequate information for readers to calculate odds ratios (ORs) and their 95% confidence intervals (CIs). Accordingly, the following criteria were used for exclusion: duplicate publication; papers with insufficient information.
Data extraction

The data were extracted by two of the authors independently. If the extracted information conflicted, a discussion was held to reach an agreement; if agreement could not be reached, another author was consulted and a final decision was made by majority vote. When two or more studies shared the same population, only the study with the largest sample size was selected.

Statistical analysis

Hardy-Weinberg equilibrium (HWE) was assessed by Fisher's exact test for the controls in each study. ORs and their 95% CIs were calculated to evaluate the strength of the association between the miR-146a rs2910164 polymorphism and HNC risk. The pooled ORs were determined for an allelic contrast model (C allele vs. G allele), a dominant model (CC+CG vs. GG), and a recessive model (CC vs. CG+GG). A chi-squared-based Q-statistic test was performed to assess between-study heterogeneity. A P value for the Q-test greater than 0.05 indicates absence of heterogeneity, in which case the ORs were pooled by a fixed-effect model (Mantel-Haenszel); otherwise, they were pooled by a random-effect model (DerSimonian and Laird). The significance of the pooled ORs was determined by the Z-test. To evaluate publication bias, funnel plots were created; an asymmetrical plot suggests an evident publication bias. To minimize the subjectivity of visual inspection, Egger's linear regression test was further used to evaluate the symmetry of the funnel plot. All statistical analyses in the present study were performed using Microsoft Excel 2003 and STATA 11.0 software (Stata Corporation, Texas, USA).
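As a small illustration of the fixed-effect pooling step described above, the Mantel-Haenszel pooled OR can be computed directly from the 2x2 counts of each study. The sketch below uses made-up counts, not data from the included studies:

```python
def mantel_haenszel_or(studies):
    """Fixed-effect (Mantel-Haenszel) pooled odds ratio.

    Each study is a 2x2 table (a, b, c, d):
      a = cases with the risk allele,    b = cases without,
      c = controls with the risk allele, d = controls without.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in studies)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in studies)
    return num / den

# Hypothetical counts for three studies (C allele vs. G allele):
studies = [(120, 180, 100, 200), (90, 110, 70, 130), (60, 140, 40, 160)]
print(round(mantel_haenszel_or(studies), 3))  # 1.474
```

For a single study the estimator reduces to the crude OR, (a*d)/(b*c), which is a quick sanity check.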
# rest_framework_nested.routers re-exports SimpleRouter from DRF,
# so a single import covers both plain and nested routers.
from rest_framework_nested import routers
from app_name.views.v1.test_session_controller import BusView, StationView, StationScheduleView, ReserveScheduleView, \
ReservationView
router = routers.SimpleRouter(trailing_slash=False)
# router.register(r'test_session', TestSessionView, base_name='test_session')
# urlpatterns = router.urls
# /bus, /station, /schedule, /reserve
router.register(r'bus', BusView, base_name='bus')
router.register(r'station', StationView, base_name='station')
router.register(r'schedule', ReserveScheduleView, base_name='schedule')
router.register(r'reservation', ReservationView, base_name='reservation')
# /station/pk/info
station_route = routers.NestedSimpleRouter(router, r'station', lookup='station', trailing_slash=False)
station_route.register(r'schedule', StationScheduleView, base_name='station-schedule')
urlpatterns = []
urlpatterns += router.urls
urlpatterns += station_route.urls
|
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.beam.sdk.testing;
import static org.apache.beam.sdk.testing.FileChecksumMatcher.fileContentsHaveChecksum;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsString;
import static org.junit.Assume.assumeFalse;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;
import org.apache.beam.sdk.util.NumberedShardedFile;
import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.io.Files;
import org.apache.commons.lang3.SystemUtils;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import org.junit.rules.TemporaryFolder;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
/** Tests for {@link FileChecksumMatcher}. */
@RunWith(JUnit4.class)
@SuppressWarnings("nullness") // TODO(https://issues.apache.org/jira/browse/BEAM-10402)
public class FileChecksumMatcherTest {
@Rule public TemporaryFolder tmpFolder = new TemporaryFolder();
@Rule public ExpectedException thrown = ExpectedException.none();
@Test
public void testPreconditionChecksumIsNull() throws IOException {
thrown.expect(IllegalArgumentException.class);
thrown.expectMessage(containsString("Expected valid checksum, but received"));
fileContentsHaveChecksum(null);
}
@Test
public void testPreconditionChecksumIsEmpty() throws IOException {
thrown.expect(IllegalArgumentException.class);
thrown.expectMessage(containsString("Expected valid checksum, but received"));
fileContentsHaveChecksum("");
}
@Test
public void testMatcherThatVerifiesSingleFile() throws IOException {
File tmpFile = tmpFolder.newFile("result-000-of-001");
Files.write("Test for file checksum verifier.", tmpFile, StandardCharsets.UTF_8);
assertThat(
new NumberedShardedFile(tmpFile.getPath()),
fileContentsHaveChecksum("a8772322f5d7b851777f820fc79d050f9d302915"));
}
@Test
public void testMatcherThatVerifiesMultipleFiles() throws IOException {
// TODO: Java core test failing on windows, https://issues.apache.org/jira/browse/BEAM-10747
assumeFalse(SystemUtils.IS_OS_WINDOWS);
File tmpFile1 = tmpFolder.newFile("result-000-of-002");
File tmpFile2 = tmpFolder.newFile("result-001-of-002");
File tmpFile3 = tmpFolder.newFile("tmp");
Files.write("To be or not to be, ", tmpFile1, StandardCharsets.UTF_8);
Files.write("it is not a question.", tmpFile2, StandardCharsets.UTF_8);
Files.write("tmp", tmpFile3, StandardCharsets.UTF_8);
assertThat(
new NumberedShardedFile(tmpFolder.getRoot().toPath().resolve("result-*").toString()),
fileContentsHaveChecksum("90552392c28396935fe4f123bd0b5c2d0f6260c8"));
}
@Test
public void testMatcherThatVerifiesFileWithEmptyContent() throws IOException {
// TODO: Java core test failing on windows, https://issues.apache.org/jira/browse/BEAM-10748
assumeFalse(SystemUtils.IS_OS_WINDOWS);
File emptyFile = tmpFolder.newFile("result-000-of-001");
Files.write("", emptyFile, StandardCharsets.UTF_8);
assertThat(
new NumberedShardedFile(tmpFolder.getRoot().toPath().resolve("*").toString()),
fileContentsHaveChecksum("da39a3ee5e6b4b0d3255bfef95601890afd80709"));
}
@Test
public void testMatcherThatUsesCustomizedTemplate() throws Exception {
// Customized template: resultSSS-totalNNN
// TODO: Java core test failing on windows, https://issues.apache.org/jira/browse/BEAM-10749
assumeFalse(SystemUtils.IS_OS_WINDOWS);
File tmpFile1 = tmpFolder.newFile("result0-total2");
File tmpFile2 = tmpFolder.newFile("result1-total2");
Files.write("To be or not to be, ", tmpFile1, StandardCharsets.UTF_8);
Files.write("it is not a question.", tmpFile2, StandardCharsets.UTF_8);
Pattern customizedTemplate =
Pattern.compile("(?x) result (?<shardnum>\\d+) - total (?<numshards>\\d+)");
assertThat(
new NumberedShardedFile(
tmpFolder.getRoot().toPath().resolve("*").toString(), customizedTemplate),
fileContentsHaveChecksum("90552392c28396935fe4f123bd0b5c2d0f6260c8"));
}
}
|
In recent years, a GaN based high electron mobility transistor (HEMT), in which an AlGaN/GaN hetero junction is utilized and GaN is used as a carrier transit layer, has been actively developed. Gallium nitride is a material having a wide band gap, high breakdown field strength, and a large saturation electron velocity and, therefore, is a very promising material capable of realizing a large current, a high voltage, and a low on-resistance operation. Development to apply GaN to a next-generation high efficiency amplifier used in a base station and the like and a high efficiency switching element to control an electric power has been actively performed.
A dielectric breakdown voltage is an important parameter of a semiconductor device used as the high efficiency amplifier or the high efficiency switching element. The dielectric breakdown voltage is a maximum voltage, which can be applied between a source electrode and a drain electrode included in a semiconductor device. If a voltage exceeding the dielectric breakdown voltage is applied, the semiconductor device is broken. In particular, a semiconductor device serving as the high efficiency switching element to control an electric power is required to have a high dielectric breakdown voltage because several hundred volts of voltage is applied.
However, regarding a semiconductor device having the HEMT structure illustrated in FIG. 16, it is difficult to obtain a high dielectric breakdown voltage. In the semiconductor device having the HEMT structure illustrated in FIG. 16, an i-GaN layer 101, an AlGaN layer 102, and an n-GaN layer 103 are disposed sequentially on a substrate 100. Furthermore, in the semiconductor device having the HEMT structure illustrated in FIG. 16, a source electrode 104 and a drain electrode 105 are disposed on the AlGaN layer 102, and a gate electrode 106 is disposed on the n-GaN layer 103.
Regarding the semiconductor device having the HEMT structure illustrated in FIG. 16, several volts of voltage is applied to the gate electrode 106, and several hundred volts of voltage is applied to the drain electrode 105. Therefore, the potential difference between the drain electrode 105 and the gate electrode 106 is large, so that a large electric field is applied to a protective layer 107 disposed on the n-GaN layer 103. As for the protective layer 107, a SiN film is used in general. The dielectric breakdown voltage of the SiN film is low; therefore, in the case where a large electric field is applied to the SiN film, the SiN film is broken. As a result, reduction in dielectric breakdown voltage of the whole semiconductor device occurs. Regarding the SiN film, film formation through thermal nitridation is difficult, and film formation is performed by a CVD method. The SiN film formed by the CVD method has poor film quality, so that the dielectric breakdown voltage of the SiN film is reduced. Like the SiN film, the dielectric breakdown voltage of a SiO2 film serving as an interlayer insulating film is low, so that in the case where a large electric field is applied to the SiO2 film, the SiO2 film is broken.
The potential of a wiring connected to the drain electrode 105 becomes very high. Consequently, the potential differences between the wiring connected to the drain electrode 105 and the wirings connected to the source electrode 104 or the gate electrode 106 become large. As a result, it is necessary to increase the distances between the individual wirings in order to prevent breakdown of the interlayer insulating films, which would otherwise be subjected to very high voltages. Regarding the semiconductor device having the HEMT structure illustrated in FIG. 16, increasing the distances between the individual wirings is necessary; therefore, the flexibility in wiring is reduced, and the chip area increases.
For example, a method in which an improvement of dielectric breakdown voltage is attempted by disposing the gate electrode and the source electrode on the back of a substrate so as to increase the distance between the drain electrode and the source electrode has been known.
Related documents include Japanese Patent Laid-Open No. 2006-269939 and Japanese Patent Laid-Open No. 2007-128994. |
import csv
import datetime
import re
from warnings import warn

import numpy as np

# Note: get_fillvalue is assumed to be provided by the surrounding
# package; it returns the default fill value used for masked data.


def read_disdro(fname):
    """Read a disdrometer CSV file and return the dates, precipitation
    codes, the variable encoded in the file name, and the scattering
    temperature."""
    # The variable name is embedded in the file name, e.g. '..._GHz_Zh_el...'
    try:
        var = re.search('GHz_(.{,7})_el', fname).group(1)
    except AttributeError:
        var = ''
    try:
        with open(fname, 'r', newline='', encoding='utf-8',
                  errors='ignore') as csvfile:
            # First pass: count the data rows (comment lines starting
            # with '#' are skipped) so the arrays can be pre-allocated.
            reader = csv.DictReader(
                (row for row in csvfile if not row.startswith('#')),
                delimiter=',')
            nrows = sum(1 for row in reader)

            variable = np.ma.empty(nrows, dtype='float32')
            scatt_temp = np.ma.empty(nrows, dtype='float32')

            # Second pass: rewind the file and read the actual values.
            csvfile.seek(0)
            reader = csv.DictReader(
                (row for row in csvfile if not row.startswith('#')),
                delimiter=',')
            i = 0
            date = list()
            preciptype = list()
            for row in reader:
                date.append(datetime.datetime.strptime(
                    row['date'], '%Y-%m-%d %H:%M:%S'))
                preciptype.append(row['Precip Code'])
                variable[i] = float(row[var])
                scatt_temp[i] = float(row['Scattering Temp [deg C]'])
                i += 1

            # Mask fill values in the data; the 'with' block closes the
            # file, so no explicit close() is needed.
            variable = np.ma.masked_values(variable, get_fillvalue())
            np.ma.set_fill_value(variable, get_fillvalue())

        return (date, preciptype, variable, scatt_temp)
    except EnvironmentError as ee:
        warn(str(ee))
        warn('Unable to read file '+fname)
        return (None, None, None, None)
Dialogue of arts in the novel The Artist is Unknown by V. Kaverin

Prose about the artist is fairly popular in world and Russian culture, but the novel The Artist is Unknown by V. Kaverin holds a special place within Russian literature, and its very title is precedent-setting for philologists: it is mentioned whenever the genre of the novel about the artist comes up. Kaverin not only creates a vivid image of the artist-painter but also recreates his manner through literary style. The pictorial element prevails in the text; however, an orientation towards other types of art, namely sculpture and theater, is also noticeable, and it is reflected both in the character sphere and in the composition itself. In the novel The Artist is Unknown, theater and painting are deeply intertwined: not only do scenes of the play engage painting, but the authorial painting also involves theatrical aesthetics. Nevertheless, it is the art of painting that stands at the center of Kaverin's attention, and the ekphrasis technique becomes the fundamental principle for arranging the artistic material. Notably, the audience's attention is focused on imaginary ekphrasis, the description of an image that exists only in the author's imagination, which reveals the features of Kaverin's original idiostyle, correlating certain literary techniques with the painter's technique (the author thinks in the categories of color, painting, texture, and perspective). In this way, painting becomes a metalanguage, a way of understanding the laws of art as such and, thus, the laws of literature, including such categories as narrative perspective and composition. The boundary between the genres of the novel about the artist and the novel about the novel is quite lucid in Kaverin's text: the fate of the artist is inseparable from the fate of his creation, and questions of skill, purpose, and the designation of works comprise the very essence of the novel's conflicts.
# -*- coding: utf-8 -*-
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import base64
from aliyunsdkcore.request import RoaRequest
class SearchItemRequest(RoaRequest):
def __init__(self):
RoaRequest.__init__(self, 'ImageSearch', '2018-01-20', 'SearchItem','imagesearch')
self.set_uri_pattern('/item/search')
self.set_method('POST')
self.set_accept_format('JSON')
self.start = 0
self.num = 10
self.cate_id = ''
self.search_picture = ''
def get_instance_name(self):
return self.get_query_params().get('instanceName')
def set_instance_name(self,instanceName):
self.add_query_param('instanceName',instanceName)
def set_start(self, start):
self.start = start
def get_start(self):
return self.start
def set_num(self, num):
self.num = num
def get_num(self):
return self.num
def set_cate_id(self, cate_id):
self.cate_id = cate_id
def get_cate_id(self):
return self.cate_id
def set_search_picture(self, search_picture):
self.search_picture = search_picture
def get_search_picture(self):
return self.search_picture
# Build Post Content
def build_post_content(self):
param_map = {}
# Validate parameters
if not self.search_picture :
return False
# Build the parameter map
param_map['s'] = str(self.start)
param_map['n'] = str(self.num)
if self.cate_id and len(self.cate_id) > 0:
param_map['cat_id'] = self.cate_id
encoded_pic_name = base64.b64encode("searchPic")
encoded_pic_content = base64.b64encode(self.search_picture)
param_map['pic_list'] = encoded_pic_name
param_map[encoded_pic_name] = encoded_pic_content
self.set_content(self.build_content(param_map))
return True
# Build the POST body content
def build_content(self, param_map):
# Variables
meta = ''
body = ''
start = 0
# Iterate over the parameters
for key in param_map:
if len(meta) > 0:
meta += '#'
meta += key + ',' + str(start) + ',' + str(start + len(param_map[key]))
body += param_map[key]
start += len(param_map[key])
return meta + '^' + body |
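The build_content method above serializes the parameters into a `meta^body` wire format: a `#`-separated list of `key,start,end` index triples, then a `^` separator, then the concatenated values. A minimal sketch of a decoder for that format follows; the function name `parse_content` and the sample payload are illustrative, not part of the SDK:

```python
def parse_content(payload):
    """Decode the 'meta^body' format produced by build_content:
    meta is '#'-separated 'key,start,end' triples indexing into body."""
    meta, body = payload.split('^', 1)
    params = {}
    for entry in meta.split('#'):
        key, start, end = entry.split(',')
        params[key] = body[int(start):int(end)]
    return params

# A payload as build_content would emit for {'s': '0', 'n': '10'}:
payload = 's,0,1#n,1,3^010'
print(parse_content(payload))  # {'s': '0', 'n': '10'}
```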
The U.K.'s economic story for the past five years has been bleak.
Next week we'll find out if the U.K. has entered its first ever triple-dip recession.
Economists project the economy will post a tiny 0.1% - 0.2% growth in the first quarter.
Even if the country does manage to grow a little bit, it's clear that the economy is underperforming.
The IMF lowered its growth forecasts for the U.K. by 0.3% for 2013 and 2014. It now expects the economy to expand just 0.7% in 2013, and 1.5% next year.
This week again we saw the labor situation deteriorate.
As the U.K. economy continues to skate on the edge between growth and recession, economists have increasingly blamed the current government's commitment to austerity for the nation's economic woes, which began with a run on Northern Rock bank in 2007. This was the first run on a U.K. bank in 140 years.
As the subprime crisis crossed over to the U.K. and liquidity concerns started to hit banks in Europe, Northern Rock was nationalized. This was a time when household savings began to rise, spending declined, the government rolled out austerity, and the financial sector hemorrhaged.
In late 2008, the Bank of England cut interest rates, and announced a £50 billion plan to partially take over major U.K. banks. The U.K. entered a technical recession. Growth declined for six straight quarters tumbling as much as 2.6% in Q1 2009, before emerging from the recession in Q3 2009 with a modest 0.2%.
The trajectory was very similar to what happened in the U.S. For the year, the economy had contracted 5%.
In Q4 2009, GDP increased 0.7%, but the spending boost that led this growth didn't reflect an improved household balance sheet or increased consumer confidence. It is believed to have been boosted by consumers picking up spending to avoid the impending hike in value added tax (VAT).
In Q2 2010, GDP increased 1.1%, the fastest pace since 2007. But economists warned (accurately in hindsight) that this was the peak. By Q4 2010 GDP had again contracted 0.5%. And it was at this time that Finance Minister George Osborne was about to reveal £81 billion in spending cuts.
The austerity was so hard to stomach that over two million public sector workers went on strike in November 2011. Public sector pensions were at the heart of the dispute.
By Q1 2012, the U.K. had entered its double dip recession, driven by a decline in construction.
The biggest criticism leveled against Chancellor George Osborne and British prime minister David Cameron has been their almost undivided focus on driving down debt and deficit, at the cost of economic growth.
In 2010, when David Cameron came to power he was praised by pundits who said that Cameron was making bold decisions to reduce debt and restore confidence.
Whereas traditional Keynesian economics encourages counter-cyclical spending to address a slump, the Cameron experiment was referred to as an endeavor in "expansionary austerity."
"The market turmoil, the flight of investors, the dismay of business, the loss of confidence, the credit downgrade, the sharp rise in real interest rates, the extra debt interest, the lost jobs, the cancelled investments, the businesses destroyed, the recovery halted, the return of crippling economic instability, Britain back on the brink. We are not going to allow that to happen to our country again."
But in last year's December Autumn Statement (AS), Osborne was forced to admit that the government's austerity had not worked. At the time, he said the economy was not on course to meet its target for when debt would start to fall. Yet, he followed this up by extending austerity until 2018, a year longer than previously forecast.
"We've never been passionate about austerity. From the beginning we have always emphasized that fiscal consolidation should be slow and steady. …We still believe that. You have a budget coming in March and we think that would be a good time to take stock and make some adjustments."
Then in February, the U.K. lost its AAA rating.
Moody's attributed the downgrade to, "the increasing clarity that ... the U.K.'s economic growth will remain sluggish over the next few years due to the anticipated slow growth of the global economy and the drag on the U.K. economy from the ongoing domestic public and private-sector deleveraging process."
Moody's also said the weaker growth has limited projected tax revenue increases and impacted the pace of debt and deficit reduction.
"[George Osborne] said the number one benchmark by which this government should be judged is its ability to keep its AAA credit rating,"
David Blanchflower, economist and former member of the Monetary Policy Committee (MOC), told Business Insider, "Not that I thought it was a sensible thing to focus on that AAA rating, but given that's what the government said I'm perfectly entitled to judge them according to their own words. And I judge them harshly."
Blanchflower said that despite the government's efforts to lower deficit and debt, their approach has been all wrong.
"The way to deal with debt is to deal with growth, if you deal with growth the deficit takes care of itself. This government believes if you take care of the deficit, growth takes care of itself and that's laughable nonsense that's proved to be untrue."
This chart from Lombard Street Research compares U.S. and U.K. GDP growth and then looks at how the U.K.'s budget deficit compares with America's. As you can see, growth in the U.K. has been substantially worse, whereas deficits are not looking any better.
"Immediately reinstate £20 billion in infrastructure spending, that they cut" and put it towards building homes, schools and so on.
"Completely reverse austerity" - cut the value added tax (VAT) by five percent, reduce payroll taxes.
Give companies incentives to hire and invest.
Set up a small business bank to get money to small and medium businesses.
The latest budget however brought with it more austerity.
Most government departments were asked to cut their budgets by an additional 1% in 2013 and 2014, thereby freeing up £2.5 billion for capital spending. This followed a December announcement of 1% cuts in 2013-2014, and 2% in 2014-2015, to the budgets of various departments.
The new budget planned cuts to daily government spending in order to allow for spending in longer-term projects.
Osborne's budget included a review of the Bank of England's remit that gives the central bank more policy tools to boost economic growth without drastically changing monetary policy.
While the new remit maintains the 2% inflation target, it gets around the problem of sticking to the target by "splitting 'shocks and disturbances into those that are temporary, and those that 'may persist for an extended period of time,'" according to Lombard Street Research's Dario Perkins.
"This is official government endorsement for what the Bank has already been doing. In return, the Bank must now make more effort to explain its approach - in particular, by identifying any 'trade-offs' it sees with GDP growth (e.g. 'it's not worth tightening more to bring inflation down because that would hit the economy too hard')."
Moreover, the remit now gives the central bank more freedom in using credit easing, forward policy guidance like the Federal Reserve, and it makes U.K. monetary policy more "microprudential," according to Perkins. "The MPC can now - if it chooses - use interest rates to target financial-sector risks."
Some like David Tinsley, chief U.K. economist at BNP Paribas, think it's too simplistic to characterize this as a tight fiscal/loose monetary policy problem.
Tinsley told Business Insider that there have been two big impediments to the economic recovery in the U.K.
The first has to do with external factors, like a slowdown in the global economy, which in turn has been a drag on exports. The second, has been the deleveraging in the banking sector.
What's more, the Financial Policy Committee (FPC) has asked banks to add another £25 billion in capital reserves by the end of 2013. The FPC still thinks that 25% of the U.K. banking sector's assets are in institutions with leverage that is 40 times their equity.
While this helps strengthen the banking system in the long run, "the push towards larger capital buffers and tighter regulation more generally, almost certainly has negative effects on the supply of credit, wider monetary conditions and the pace of recovery,' writes Jamie Dannhauser at Lombard Street Research.
"The MPC and FPC both complain that the dysfunctional banking system is holding back the rebalancing of the economy and productivity growth. Yet both committees are strongly supportive of measures that at least partly are making banks less willing to lend and finance the small, risky firms that play a key role in the creative destruction process.
But it is not just the asset side of banks' balance sheets that will be affected; banks' capital raising, whether this is in the form of new equity issuance, reduced employee compensation or higher profit retention, reduces the quantity of broad money directly, ceteris paribus. Almost uniformly ignored in this debate, this is a key negative side-effect of higher capital requirements for banks and operates in a similar way to asset sales by the central bank, i.e. quantitative tightening.
…But the bigger problem with the FPC's actions is this - regulators keep moving the goalposts and changing the rules which banks have to play by. Just when banks with large legacy portfolios thought they had returned their balance sheets to health, they are told that more has to be done. What the MPC giveth, the FPC taketh away."
Despite these other drags on the U.K. economy, consensus seems to be that austerity has failed. IMF's Blanchard argues that Osborne is "playing with fire" and that he should "decrease the speed of fiscal consolidation."
Blanchflower is a bit more explicit. "Almost everything this government says; do the opposite ... I would just say, anything they ever say, my default position would be do the opposite." |
David and Louise Turpin of Perris, who were arrested in January 2018, abused their children for years in their home east of Los Angeles.
The Southern California couple accused of torturing and abusing their children for years in a home 70 miles east of Los Angeles pleaded guilty to 14 felony counts, including torture, false imprisonment and child endangerment, in a Riverside courtroom on Friday morning, ensuring they will spend decades behind bars.
The plea deal for David Turpin, 57, and Louise Turpin, 50, means they will not stand trial. After initially pleading not guilty to all charges last year, the couple will be sentenced on April 19.
"Those pleas will result in life sentences," Riverside County District Attorney Mike Hestrin said during a news conference. "They're going to serve an indeterminate sentence of 25 years to life ... Unless the parole board, at some point, affirmatively decides they should be released, they will serve the rest of their lives in prison."
The plea deal involved one count of torture, four counts of false imprisonment, six counts of cruelty to a dependent adult and three counts of willful child cruelty. The couple initially faced nearly 50 counts each related to the abuse of some of their 13 children.
"I think it's just and fair that the sentence was equivalent to first-degree murder," Hestrin said.
Inside the courtroom, David Turpin sat hunched over and wore a pale yellow shirt, black dress shoes with white athletic socks. Louise Turpin wore the black jacket that she's worn in previous court appearances. Both were soft-spoken when entering their guilty pleas.
David Macher, an attorney representing David Turpin, declined to comment after Friday's deal.
In a crowded room with more than 30 journalists after the court appearance, Hestrin further explained that California's Elder Parole law guarantees defendants older than 60 automatically receive parole hearings after serving 25 years in custody.
"When I spoke to you when the case was filed last year, I commented that this office was fully prepared to seek justice in this case ... but that we were going to seek justice in a way that did not bring further harm to these victims. This is among the worst most aggravated child abuse cases I have ever seen or been involved in my career as a prosecutor. Part of what went into the decision-making process was that the victims in this case would not ultimately have to testify," Hestrin said.
Hestrin repeatedly refused to comment on the state of mind of either the parents or the children and did not speculate on a possible motive for the abuse. He said he didn't want to preempt the children from speaking at the sentencing hearing, which they "may or may not" decide to do.
However, he did say that the children "are uniformly pleased that they would not have to testify."
"The attention that they would have had to go through, the scrutiny they would have had to go through I think made this case much different than other child abuse cases," he said.
The plea deal means that the Turpins admitted to at least one crime for each of the 12 victims in the case, according to the District Attorney's Office.
The case came to light when the couple's 17-year-old daughter called police in January 2018 after escaping through the window of a tract home in Perris.
On the 911 call, the girl said she and her siblings had been chained to their beds. She added that she never finished first grade and wasn't permitted to take baths.
“Sometimes I wake up and I can’t breathe because of how dirty the house is," she told authorities.
At a preliminary hearing in June, authorities said the children were an average of 32 pounds underweight. They often starved and were deprived of water, but when they were fed, their diets mostly consisted of frozen burritos, baloney sandwiches and peanut butter and jelly sandwiches.
Investigators testified that the children lacked education and said a 22-year-old son had told them he only completed the third grade.
Photos show some of the Turpin children wearing feces-stained clothing, which investigators described as putrid.
The abuse against the couple's children, as cited in criminal charges, stretched back more than a decade to when the Turpins lived in Texas. In interviews with investigators, the Turpin children said that if they were disobedient they were frequently subjected to punishment included beatings with a paddle, an oar and a metal-tipped tent pole. They were padlocked inside feces-stained cages or dog kennels for days, weeks or even months.
In photos, the children appear bruised on their wrists and torsos from shackles and other forms of punishment. One of the children told an investigator her neck was left bruised for days after her mother choked her while asking if she wanted to die. When the girl responded that she didn't, her mother is said to have responded, "Yes, you do. You wanna die and go to hell."
In addition to abuse charges, David Turpin also faces eight perjury charges stemming from documents authorities claim were falsified to state his kids were being home-schooled.
The family home, which sits in the middle of a residential street in Perris, has remained vacant since the couple were arrested. In February 2018, the family’s cars disappeared.
About nine months later, the home went into foreclosure and hit the auction block in December 2018. After the initial auction closed with a bid of more than $310,000, the home was put up for online auction a second time in January before the listing was removed.
It’s unclear whether the home has a new owner.
Previous reporting by Colin Atagi, Christopher Damien and other Desert Sun reporters was used in this story.
Shane Newell covers breaking news and the western Coachella Valley cities of Palm Springs, Cathedral City and Desert Hot Springs. He can be reached at [email protected], (760) 778-4649 or on Twitter at @journoshane. Sam Metz covers politics. Reach him at [email protected] or on Twitter @metzsam. |
Robust Design of a Single Tuned Mass Damper for Controlling Torsional Response of Asymmetric-Plan Systems A new robust design methodology to control the seismic performance of asymmetric structures equipped with a Single Tuned Mass Damper (STMD) is presented in this article. This design approach aims to control the seismic response of such systems by reducing both flexible- and stiff-edge maximum displacements. The dynamic problem has been investigated in the state-space representation, showing that the TMD works as a closed-loop feedback control action. A synthetic index to estimate the seismic performance of the main system has been defined by using the H∞ norm. Wide-ranging parametric numerical experimentation has been carried out to obtain design formulae for the STMD that minimize this performance index. These formulae allow for a simple design of STMD position and stiffness to optimally control both the translational and rotational motion components, whereas two mass devices are generally considered necessary to improve the seismic performance of asymmetric structural systems. The effectiveness and efficiency of the obtained design formulae have been tested by investigating the dynamic behavior of the asymmetric structure subjected to different recorded seismic inputs. |
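The tuning idea the abstract describes can be illustrated with a small numerical sketch: assemble the mass, damping, and stiffness matrices of a main structure plus TMD, then estimate the H∞ norm of the displacement frequency-response function by a frequency sweep. All parameter values below are illustrative assumptions and do not come from the article's design formulae.

```python
# Sketch: H-infinity norm of a 2-DOF "main structure + TMD" system,
# estimated as the peak magnitude of the displacement FRF over a
# frequency sweep. Parameter values are illustrative assumptions.
import numpy as np

def hinf_norm(M, C, K, freqs):
    """Peak |H(jw)| of the first DOF's displacement under a unit
    force applied to the first DOF (frequency-sweep estimate)."""
    n = M.shape[0]
    f = np.zeros(n)
    f[0] = 1.0  # unit force on the main mass
    peak = 0.0
    for w in freqs:
        # Dynamic stiffness: (-w^2 M + j w C + K) H = f
        H = np.linalg.solve(-w**2 * M + 1j * w * C + K, f)
        peak = max(peak, abs(H[0]))
    return peak

# Main structure (assumed values) and a TMD with mass ratio mu = 5%.
m1, c1, k1 = 1.0, 0.5, 100.0      # main mass, damping, stiffness
mu = 0.05
m2 = mu * m1
k2 = 4.5                          # candidate TMD stiffness (near-optimal tuning)
c2 = 0.2                          # candidate TMD damping

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

freqs = np.linspace(0.1, 20.0, 2000)
print(hinf_norm(M, C, K, freqs))
```

A crude optimizer would simply loop this over candidate (k2, c2) pairs and keep the pair with the smallest peak, which is the brute-force analogue of the closed-form design formulae the article derives.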
Pulmonary accumulation of polymorphonuclear leukocytes in the adult respiratory distress syndrome The polymorphonuclear leukocyte (PMN) plays an integral role in the development of permeability pulmonary edema associated with the adult respiratory distress syndrome (ARDS). This report describes 3 patients with ARDS secondary to systemic sepsis who demonstrated an abnormal diffuse accumulation of Indium (111In)-labeled PMNs in their lungs, without concomitant clinical or laboratory evidence of a primary chest infection. In one patient, the accumulation of the pulmonary activity during an initial pass suggested that this observation was related to diffuse leukoaggregation within the pulmonary microvasculature. A 4th patient with ARDS was on high-dose corticosteroids at the time of a similar study, and showed no pulmonary accumulation of PMNs, suggesting a possible reason for the reported beneficial effect of corticosteroids in human ARDS. |
import { isFunction } from '@antv/util';
import { Params } from '../../core/adaptor';
import { flow } from '../../utils';
import { RingProgressOptions } from './types';
/**
 * Data field mapping
* @param params
*/
function field(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart, options } = params;
const { percent, color } = options;
const data = [
{
type: 'current',
percent: percent,
},
{
type: 'target',
percent: 1 - percent,
},
];
chart.data(data);
const geometry = chart.interval().position('1*percent').adjust('stack');
const values = isFunction(color) ? color(percent) : color || ['#FAAD14', '#E8EDF3'];
geometry.color('type', values);
return params;
}
/**
 * meta configuration
* @param params
*/
function meta(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart, options } = params;
const { meta } = options;
chart.scale(meta);
return params;
}
/**
 * axis configuration
* @param params
*/
function axis(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart } = params;
chart.axis(false);
return params;
}
/**
 * legend configuration
* @param params
*/
function legend(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart } = params;
chart.legend(false);
return params;
}
/**
 * tooltip configuration
* @param params
*/
function tooltip(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart } = params;
chart.tooltip(false);
return params;
}
/**
 * Style
* @param params
*/
function style(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart, options } = params;
const { progressStyle } = options;
const geometry = chart.geometries[0];
if (progressStyle && geometry) {
if (isFunction(progressStyle)) {
geometry.style('1*percent*type', progressStyle);
} else {
geometry.style(progressStyle);
}
}
return params;
}
/**
 * coordinate configuration
* @param params
*/
function coordinate(params: Params<RingProgressOptions>): Params<RingProgressOptions> {
const { chart, options } = params;
const { innerRadius = 0.8, radius = 1 } = options;
chart.coordinate('theta', {
innerRadius,
radius,
});
return params;
}
/**
 * Ring progress chart adaptor
* @param chart
* @param options
*/
export function adaptor(params: Params<RingProgressOptions>) {
flow(field, meta, axis, legend, tooltip, style, coordinate)(params);
}
|
import {useLocalStorage} from '@mantine/hooks'
import {createContext, useContext} from 'react'
import {WeatherContextProps} from '~/lib/types'
import useWeather from '~/lib/useWeather'
// Create the WeatherContext.
const WeatherContext = createContext({} as WeatherContextProps)
// Create useWeatherContext hook.
export const useWeatherContext = () => useContext(WeatherContext)
export default function WeatherProvider({
children
}: {
children: React.ReactNode
}) {
const [location, setLocation] = useLocalStorage({
key: 'location',
defaultValue: 'Enterprise, AL'
})
const [tempUnit, setTempUnit] = useLocalStorage({
key: 'tempUnit',
defaultValue: 'f'
})
const {weather, isLoading} = useWeather(location)
return (
<WeatherContext.Provider
value={{
isLoading,
location,
setLocation,
weather,
tempUnit,
setTempUnit
}}
>
{children}
</WeatherContext.Provider>
)
}
|
package main;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Stack;
import project.Downloader;
import project.MerkleTree;
public class Main {
public static void main(String[] args) {
MerkleTree m0 = new MerkleTree("data/1_bad.txt");
String hash = m0.getRoot().getLeft().getRight().getData();
System.out.println(hash);
boolean valid = m0.checkAuthenticity("data/0meta.txt");
System.out.println(valid);
        // The following is just an example to illustrate the usage.
// Although there is none in reality, assume that there are two corrupt
// chunks in this example.
ArrayList<Stack<String>> corrupts = m0.findCorruptChunks("data/1meta.txt");
System.out.println("Corrupt hash of first corrupt chunk is: " + corrupts.get(0).pop());
try {
System.out.println("Corrupt hash of second corrupt chunk is: " + corrupts.get(1).pop());
        } catch (IndexOutOfBoundsException e) {
            // Fewer than two corrupt chunks were found; nothing more to print.
        }
download("secondaryPart/data/download_from_trusted.txt");
}
    public static void download(String path) {
        Downloader dw;
        try {
            dw = new Downloader(path);
        } catch (IOException e) {
            e.printStackTrace();
            return; // Cannot proceed without a valid Downloader instance.
        }
        dw.createInternalFiles();
        dw.createSplitFiles();
    }
}
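The `checkAuthenticity` and `findCorruptChunks` calls above rely on the usual Merkle construction: leaf (chunk) hashes are combined pairwise up to a single root, so any tampered chunk changes the root hash. A minimal sketch of that idea follows — in Python, with SHA-256 and a promote-the-odd-leaf pairing rule as assumptions; the actual project's digest and pairing conventions may differ.

```python
# Sketch of Merkle-root computation over chunk hashes.
# SHA-256 and the odd-leaf promotion rule are assumptions here.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(chunk_hashes):
    """Combine leaf hashes pairwise until one root remains.
    If a level has an odd count, the last hash is promoted as-is."""
    level = list(chunk_hashes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                # Concatenate the pair's hex digests and hash again.
                nxt.append(sha256_hex((level[i] + level[i + 1]).encode()))
            else:
                nxt.append(level[i])  # odd leaf promoted unchanged
        level = nxt
    return level[0]

leaves = [sha256_hex(c) for c in [b"chunk0", b"chunk1", b"chunk2"]]
print(merkle_root(leaves))
```

Verification against a trusted meta file is then just a comparison of the recomputed root with the stored one; locating corrupt chunks is a top-down descent into whichever subtree's hash disagrees.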
|
import os

from flask import Flask

# `logger` and `config` are assumed to be module-level objects defined
# elsewhere in this project (a logging.Logger and a config namespace
# exposing a `flask` mapping).


def make_app() -> Flask:
    logger.info('creating flask application')
    app = Flask(
        'pasta',
        static_url_path='/static',
        static_folder='./static',
        template_folder='./views')
    # A fresh random secret key is generated on every start; note this
    # invalidates existing sessions whenever the process restarts.
    config.flask.SECRET_KEY = os.urandom(32)
    config.flask.SERVER_NAME = None
    app.config.from_mapping(config.flask)
    return app |
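Because the factory above depends on module-level `logger` and `config` objects that are not shown, a fully self-contained variant is useful for seeing the application-factory pattern in action. The route and names below are assumptions for illustration only, not part of the original project.

```python
# Self-contained sketch of the application-factory pattern, with the
# project's `logger`/`config` objects stubbed out. The /ping route is
# an assumption added purely so the factory can be exercised.
import os

from flask import Flask


def make_app() -> Flask:
    app = Flask('pasta')
    # Fresh random key per process: fine for development, but sessions
    # will not survive restarts.
    app.config['SECRET_KEY'] = os.urandom(32)
    app.config['SERVER_NAME'] = None

    @app.route('/ping')
    def ping():
        return 'pong'

    return app


app = make_app()
with app.test_client() as client:
    print(client.get('/ping').data.decode())  # prints: pong
```

The factory style makes it trivial to create isolated app instances per test, which is why Flask's own documentation recommends it over a module-level `app` object.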