Early Detection of Serum Levels of HER-2 in Patients with Head and Neck Squamous Cell Carcinoma. INTRODUCTION The presence of HER-2 has been shown to be a prognostic factor in many kinds of cancers, but its role in head and neck squamous cell carcinoma (HNSCC) is still not defined. The purpose of the current study is to investigate the role of HER-2 in HNSCC and its correlation with various clinicopathologic parameters. MATERIALS AND METHODS Peripheral blood samples were obtained from 17 healthy volunteers and 69 patients with HNSCC before curative surgery. The HER-2 level in each sample was determined by sandwich ELISA. Statistical analysis was performed using an independent t-test, one-way ANOVA, and the Duncan procedure. RESULTS Mean HER-2 serum levels were higher in patients with HNSCC compared with healthy controls, although the difference was not statistically significant (3.85 ng/ml vs. 3.75 ng/ml; P>0.05). The mean serum level of HER-2 was also higher in patients with lymph node involvement, metastasis, invasion, tumor size ≥2 cm, and stage >1, although these differences were not statistically significant (P>0.05). DISCUSSION Mean HER-2 serum levels in patients with tumor size T3 or higher were greater than those of patients with T1 and T2 tumors. Overexpression of this receptor is thought to translate into disease progression, growth, and invasiveness, and the increased serum HER-2 levels in such patients offer some support for this theory. CONCLUSION In this study the mean HER-2 serum level in patients with HNSCC was found to be greater than that of the healthy control group, although the difference was statistically insignificant. From the analysis of the results of the current study, we conclude that with a larger sample size the elevation of serum HER-2 levels in patients with HNSCC may reach statistical significance. Apart from this, the role of HER-2 as a tumor marker in patients with HNSCC is still controversial, and further studies are needed to clarify the significance of this biomarker for early detection or screening of HNSCC. |
/**
* \brief The class that describes properties of a monitor.
 */
class Monitor
{
public:
    /// Iterator over the video modes supported by the monitor.
    typedef const VideoMode* const_iterator;

    virtual ~Monitor() {}

    /// Returns an iterator to the first supported video mode.
    virtual const_iterator begin() const = 0;
    /// Returns an iterator past the last supported video mode.
    virtual const_iterator end() const = 0;
    /// Returns the video mode the monitor is currently using.
    virtual const VideoMode& getCurrentVideoMode() const = 0;
}; |
DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications The rapid emergence of head-mounted devices such as the Microsoft HoloLens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNNs and RNNs to extract rich contextual information from first-person-view video streams. Despite their high accuracy, the use of deep learning algorithms on mobile devices raises critical challenges, namely high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system that runs a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. To this end, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate the processing; note that the convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image with the VGG-VeryDeep-16 deep learning model in 644 ms on a Samsung Galaxy S7, taking an important step towards continuous vision without imposing any privacy concerns or networking cost. |
59% Say U.S. Government Encourages Illegal Immigration
Most voters nationwide continue to believe government policies encourage illegal immigration and support using the military along the U.S.-Mexican border. But they remain divided as to whether the federal government or individual states should enforce immigration laws.
The latest Rasmussen Reports national telephone survey of Likely Voters shows that 59% believe the policies and practices of the federal government encourage illegal immigration. Just 23% disagree while another 18% are not sure.
The number of voters who believe the federal government's policies encourage people to enter the country illegally is virtually unchanged from May and is in line with findings since October 2009.
Majorities of Republicans (70%) and voters not affiliated with either party (61%) believe the government’s policies encourage illegal immigration, a view shared by 46% of Democrats.
Most Mainstream voters believe the government encourages illegal immigration, while 48% of Political Class voters say that is not the case.
Forty-seven percent (47%) believe the better approach to dealing with illegal immigration is relying on the federal government to enforce the law, but nearly as many (46%) say it is better to allow individual states to act on their own to enforce it.
Voters have been divided on the question for the past several surveys, but the number who think allowing states to enforce the law is the better approach is down nine points from last September, when Arizona's crackdown on illegal immigration was in the news.
The survey of 1,000 Likely U.S. Voters was conducted on September 12-13, 2011 by Rasmussen Reports. The margin of sampling error is +/- 3 percentage points with a 95% level of confidence. Field work for all Rasmussen Reports surveys is conducted by Pulse Opinion Research, LLC. See methodology.
|
Three men who were involved in an elaborate cannabis factory hidden at a farm near Bexhill have been sentenced.
Keith Fieldwick, 54, of Denbigh Road in Hooe, and Thomas O’Brien, 56, of Shrub Lane in Burwash were each found guilty of producing a controlled drug after a trial in December last year. Michael Hill, 74, of High Street in Burwash pleaded guilty to the same charge before the trial began.
All three men appeared at Lewes Crown Court this afternoon for sentencing.
At a previous hearing prosecutor Gareth Burrows described how police officers caught the three men ‘red handed’ surrounded by cannabis plants.
Officers then proceeded to search the house and found more evidence of cannabis production throughout, with six rooms being used in the production of cannabis, the court heard.
In the grounds police found huge polytunnels filled with cannabis plants. Overall 601 plants were found with an estimated street value of between £240,000 and £721,200.
Speaking at the sentencing hearing this afternoon, O’Brien’s defence counsel Jonathan Ray said: “Mr O’Brien is industrious and this is totally out of character.”
He told the court that the evidence puts O’Brien at the farm on the day the police raided it and the day before, but at no other point.
Jay Shah, defending Fieldwick, argued that he played a ‘limited’ role in the cannabis factory, rather than a ‘leading’ or ‘significant role’.
He described Fieldwick as someone who is ‘deeply caring, hardworking and someone who has had significant troubles in his life’.
Mr Shah said his client was a ‘devoted father’ and the sentence would heavily impact his family.
Dale Beeson, representing Hill, said the evidence, which was accepted by the judge, only put his client at the farm on the day of the police raid.
Mr Beeson asked that Hill be given credit for his guilty plea and asked that any jail term be suspended.
Sentencing the three men, Recorder Stephen Lennard described the enterprise as a ‘very significant cannabis growing operation’.
He continued: “Mr O’Brien and Mr Fieldwick, you tried to persuade the police that you knew nothing of this operation before arriving at the property on October 2, 2014.”
Addressing Hill, Mr Recorder Lennard said he was satisfied that he had played a lesser role in the drugs production.
Fieldwick was jailed for four-and-a-half years.
O’Brien was jailed for three years.
Hill was given a 22-week prison sentence, suspended for 18 months. |
In recent years, identification technology in which an ID (identification number) is assigned to each object so as to reveal information thereon, such as its history, which is utilized for production management and the like, has attracted attention. Above all, semiconductor devices capable of wireless data transmission/reception have been developed. As such a semiconductor device, in particular, an RFID (radio frequency identification) tag (also referred to as an ID tag, an IC tag, an IC chip, an RF tag, a wireless tag, an electronic tag, a wireless chip, or a transponder) and the like have begun to be introduced into companies, markets, and the like.
The background art will be described using, as an example, a communication system conforming to ISO/IEC 15693, one of the RFID standards. This communication system encodes data by a pulse position modulation method, which modulates a carrier wave with a frequency of 13.56 MHz at 100% or 10% and changes the position of modulation to distinguish data. An example of the case where the carrier wave is modulated at 100% is shown in FIG. 3A, and an example of the case where the carrier wave is modulated at 10% is shown in FIG. 3B. A carrier wave with a modulation degree of 100% includes a state having no amplitude, while a carrier wave with a modulation degree of 10% includes a state where the amplitude is reduced by 10%.
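The two modulation depths can be pictured with a short, purely illustrative C++ snippet; the pause position and the sampling step below are arbitrary choices made for the example and are not taken from the standard or from any patent document. The carrier amplitude is scaled by (1 - m) during the pause, so m = 1.0 gives the 100% case of FIG. 3A (no amplitude) and m = 0.1 the 10% case of FIG. 3B.

#include <cmath>
#include <cstdio>
#include <vector>

// Illustrative model only: the amplitude of the 13.56 MHz carrier is reduced
// by the modulation index m during a pause.
std::vector<double> modulatedCarrier(double m, double pauseStartUs,
                                     double pauseEndUs, double durationUs) {
    const double kPi = 3.14159265358979323846;
    const double fc = 13.56e6;   // carrier frequency
    const double dtUs = 0.001;   // sample step: 1 ns
    std::vector<double> samples;
    for (double t = 0.0; t < durationUs; t += dtUs) {
        double amplitude = (t >= pauseStartUs && t < pauseEndUs) ? (1.0 - m) : 1.0;
        samples.push_back(amplitude * std::sin(2.0 * kPi * fc * t * 1e-6));
    }
    return samples;
}

int main() {
    // A 9.44 us pause placed inside a 75.52 us window (positions are
    // arbitrary here), generated at both modulation depths.
    auto fullDepth = modulatedCarrier(1.0, 18.88, 28.32, 75.52);
    auto lowDepth  = modulatedCarrier(0.1, 18.88, 28.32, 75.52);
    std::printf("%zu samples per 75.52 us window\n", fullDepth.size());
    return 0;
}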
The method called 4PPM (pulse position modulation), which is one of the pulse position modulation methods conforming to ISO/IEC 15693, is described with reference to FIG. 4A.
In FIG. 4A, a rectangular portion represents a carrier wave with a frequency of 13.56 MHz and a line between rectangles represents a modulated portion. The two-bit values “00”, “01”, “10”, and “11”, and the frame codes “SOF” and “EOF”, are distinguished by the respective positions of the 9.44 μs modulated portion within a duration of 75.52 μs. Note that the duration of EOF is 37.76 μs.
In FIG. 4A, “SOF” is a signal representing the start of a frame and is sent before data is sent while “EOF” is a signal representing the end of a frame and is sent after data is sent.
A transmission-side reader/writer encodes a flag signal and data such as a command by a pulse position modulation method, modulates a carrier wave with the encoded data, and sends the modulated carrier wave to an RFID tag. A reception-side RFID tag demodulates the modulated carrier wave and reads out a pulse position to obtain data.
A common method for obtaining data on the RFID tag side is described below with reference to FIG. 4B. Note that data is sent with a carrier wave modulated at 100% by the pulse position modulation method. In the example of FIG. 4B, the two-bit values “00”, “01”, “10”, and “11” are sent as data after “SOF”, which is sent as a starting signal.
Note that a reference clock signal is synchronized with the portions of the carrier wave that are modulated at 100%. Further, a half period of the clock signal has the same length as the width of the pulse modulated at 100%. A counter which performs a two-bit count with the clock signal is provided, as shown in FIG. 4B (count 1 and count 2). The counter counts repeatedly from “00” to “11”, where “00” is aligned with the first position of modulation at 100% in “SOF”. The timing at which each piece of data is modulated at 100% corresponds to a counter value. Data can therefore be obtained from a signal modulated with the pulse position modulation method according to the counter value at the moment the carrier wave is modulated at 100%.
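The decoding idea described above can be sketched as follows. This is only an illustration under the stated assumptions: the reference clock period is twice the 9.44 μs pulse width, the two-bit counter is aligned to the first pause of “SOF”, and the pause timestamps used in main() are invented for the example rather than taken from the standard.

#include <cstdio>
#include <vector>

// Illustrative sketch of the counter-based decoding: a free-running two-bit
// counter is sampled at each 100%-modulated pause to recover the data value.
std::vector<int> decodePulsePositions(const std::vector<double>& pauseTimesUs) {
    const double clockPeriodUs = 2.0 * 9.44;   // half period = pulse width
    std::vector<int> values;
    for (double t : pauseTimesUs) {
        int counter = static_cast<int>(t / clockPeriodUs) % 4;  // "00".."11"
        values.push_back(counter);
    }
    return values;
}

int main() {
    // Hypothetical pause times (in us, measured from the first pause of
    // "SOF"), one pause per 75.52 us symbol, encoding "00", "01", "10", "11".
    std::vector<double> pauses = {
        9.44,
        75.52 + 18.88 + 9.44,
        2 * 75.52 + 2 * 18.88 + 9.44,
        3 * 75.52 + 3 * 18.88 + 9.44
    };
    for (int v : decodePulsePositions(pauses))
        std::printf("%d%d ", (v >> 1) & 1, v & 1);
    std::printf("\n");
    return 0;
}

With these example timestamps the sketch prints "00 01 10 11", matching the data sequence of FIG. 4B.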
An RFID tag needs a reference clock signal to extract data from a carrier wave. However, the only signals the RFID tag can receive from its antenna are the carrier wave and the demodulated signal obtained by demodulating the carrier wave. Therefore, a reference clock signal for detecting the timing at which the carrier wave is modulated (hereinafter the timing is also referred to as a pulse position) needs to be generated in the RFID tag.
A PLL (phase locked loop) circuit can be used to obtain the reference clock signal. A PLL circuit detects a phase difference between an input signal and an output signal and controls a VCO (voltage controlled oscillator) from which the output signal is generated, so that the output signal with a frequency precisely synchronized with the input signal can be obtained.
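The feedback principle can be illustrated with a minimal software model; this is a numerical sketch, not the mixed-signal PLL of an actual tag, and the loop gains and initial frequency offset below are arbitrary example values.

#include <cmath>
#include <cstdio>

// Minimal software model of the PLL feedback principle: a phase detector
// compares the input signal with the locally generated output, and the error
// steers the oscillator frequency until the two are synchronized.
int main() {
    const double kPi = 3.14159265358979323846;
    const double dt  = 1e-9;      // simulation step: 1 ns
    const double fIn = 13.56e6;   // input carrier frequency
    const double f0  = 13.0e6;    // free-running oscillator frequency
    const double kp  = 1.5e5;     // proportional gain (Hz per unit error)
    const double ki  = 6.0e10;    // integral gain (Hz per second per unit error)

    double phaseIn = 0.0, phaseOut = 0.0, integrator = 0.0;
    double fOut = f0;

    for (long i = 0; i < 300000; ++i) {            // simulate 300 us
        phaseIn  += 2.0 * kPi * fIn  * dt;
        phaseOut += 2.0 * kPi * fOut * dt;
        double err = std::sin(phaseIn - phaseOut); // phase detector output
        integrator += ki * err * dt;               // loop filter, integral part
        fOut = f0 + kp * err + integrator;         // oscillator frequency control
    }
    std::printf("output frequency after locking: %.0f Hz\n", fOut);
    return 0;
}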
The clock signal which is used for the internal operation of the RFID tag can be generated by using a PLL circuit to obtain a waveform synchronized with the carrier wave or with the demodulated signal. An RFID tag which generates the clock signal using a PLL circuit is disclosed, for example, in FIG. 9 of Patent Document 1 (Japanese Published Patent Application No. 2008-010849). |
What can we learn from consumption-based carbon footprints at different spatial scales? Review of policy implications Background: Current climate change mitigation policies, including the Paris Agreement, are based on territorial greenhouse gas (GHG) accounting. This neglects the understanding of GHG emissions embodied in trade. As a solution, consumption-based accounting (CBA) that reveals the lifecycle emissions, including transboundary flows, is gaining support as a complementary information tool. CBA is particularly relevant in cities that tend to outsource a large part of their production-based emissions to their hinterlands. While CBA has so far been used relatively little in practical policymaking, it has been used widely by scientists. Methods and design: The purpose of this systematic review, which covers more than 100 studies, is to reflect the policy implications of consumption-based carbon footprint (CBCF) studies at different spatial scales. The review was conducted by reading through the discussion sections of the reviewed studies and systematically collecting the given policy suggestions for different spatial scales. We used both numerical and qualitative methods to organize and interpret the findings of the review. Review results and discussion: The motivation for the review was to investigate whether the unique consumption perspective of CBA leads to similarly unique policy features. We found that various carbon pricing policies are the most widely supported policy instrument in the relevant literature. However, overall, there is a shortage of discussion on policy instruments, since the policy discussions focus on policy outcomes, such as behavioral change or technological solutions. In addition, some policy recommendations are conflicting. Particularly, urban density and compact city policies are supported by some studies and questioned by others. To clarify the issue, we examined how the results regarding the relationship between urban development and the CBCF vary. The review provides a concise starting point for policymakers and future research by summarizing the timely policy implications. Introduction Current climate change mitigation policies are mainly based on territorial or production-based greenhouse gas (GHG) accounting, which allocates emissions according to the place of origin. Most importantly, the United Nations Framework Convention on Climate Change (UNFCCC), the Kyoto Protocol, and the Paris Agreement are based on territorial accounting that allocates GHG emissions according to national territories and excludes international aviation and shipping. Although the UNFCCC, the Kyoto Protocol, and now the Paris Agreement, have the principle of 'common but differentiated responsibilities' and an aim to place a heavier burden on developed countries, based on their historical emissions, they have been criticized for overlooking consumption-based emissions and the responsibility for transboundary flows (Peters 2008, Steininger et al 2014). Consumption-based accounting (CBA) allocates the GHG emissions caused by the whole supply chain of goods and services to the consumer, irrespective of where the emissions occur. Production-based accounting (PBA) is similar to territorial accounting except that it includes the GHG emissions caused by international transportation.
Several studies have revealed that while the production-based emissions of some developed countries have decreased under the Kyoto Protocol, the consumption-based carbon footprints (CBCFs) of the same countries may have increased during the same period (Peters and Hertwich 2008, Clement et al 2017, Isaksen and Narbel 2017). Thus, although we can detect the decoupling of production-based emissions from economic growth at country level, it does not mean that there is decoupling between total GHG emissions and economic growth at the global level. One of the main benefits of CBA is that it captures carbon leakage, including the so-called weak carbon leakage, which means the outsourcing of GHG emissions outside the territorial boundaries (Peters and Hertwich 2008, Davis and Caldeira 2010, Andrew et al 2013, Xie et al 2015). While the Paris Agreement tries to tackle the issue by involving all the countries of the world, it still relies on territorial accounting, which limits the understanding of the impact of trade on global emissions (Afionis et al 2017, Isaksen and Narbel 2017). CBA has the potential to prevent carbon leakage and share the responsibility for the emissions more fairly (Steininger et al 2014), but its political feasibility has been problematic (Afionis et al 2017). Yet Grasso concludes in his policy analysis that, in principle, official CBA is feasible at the national level if democratic and institutional frameworks are in place to support its implementation. CBA is not only relevant at the national and international policy level. It has been argued that it is particularly relevant for cities, which often outsource their emissions to their hinterlands (Paloheimo and Salmi 2013, Feng et al 2014, Chen et al 2016a, Mi et al 2016, Wiedmann 2016, Fry et al 2018, Moran et al 2018; see also Ramaswami et al 2016). Recently, there has been increasing interest among cities in adopting CBA as a complement to PBA. The C40 Cities Climate Leadership Group has estimated the CBCF for 79 of its member cities in order to broaden the mitigation targets and actions beyond the city boundaries (C40 cities 2018). They argue that by addressing the consumption-based emissions, in addition to production-based emissions, cities could potentially have a much greater impact on reducing global GHG emissions. However, CBA includes uncertainties due to the underlying assumptions inherent in the methodology, which restricts its usability for policymaking, particularly at detailed spatial scales (Afionis et al 2017, Owen 2017, C40 cities 2018). Thus, for example, Fry and co-authors call for investment in the development of CBCF models and underlying databases in order to increase the effectiveness of the consumption-based mitigation policies of cities. It has also been argued that consumption-based GHG emissions are difficult for cities to address, and that cities should rather focus on the emissions that they can directly affect (Lazarus et al 2013, Lin et al 2015, Erickson and Morgenstern 2016, Ramaswami et al 2017). Although the implementation of CBA as an official information and reporting tool is in its infancy, it has been used widely in the relevant scientific literature. The CBCF literature, meaning studies that use CBA to assess GHG emissions, provides policy recommendations ranging from international policies to city and local policies. However, it is currently unknown how well the recommendations are in line with each other.
Since the CBCF literature provides a unique perspective on GHG emissions, the policy implications may have unique features as well (Wiedmann and Barrett 2013). In other words, our hypothesis is that the policy implications of CBCF studies are similar to each other but differ in their focus and emphasis from the implications of the broader literature on climate change. This was the motivation for our systematic review on the policy implications of the CBCF literature. The review covers 103 studies that were published before July 2018. The number of CBCF studies has increased steeply since around 2008 (Heinonen et al 2019), making this a good moment to pause and reflect upon the results and policy implications. In this review, we analyze and summarize the policy implications of the studies. While Afionis et al provide a valuable and comprehensive policy analysis on the issue of whether CBA should be implemented as an official accounting method, particularly at national level, they do not discuss the other policy implications of the CBCF literature. In addition, we add the spatial dimension to the policy analysis. The discussed policy levels include international, national, and city levels. The focus of this review is on sub-national studies, since these provide the most relevant policy implications regarding the spatial dimension. What we find is that the policy discussions of CBCF studies focus on wanted policy outcomes rather than on practical policy instruments. In other words, the majority of the reviewed studies provide suggestions for what should be done, but do not provide guidance on how. Shifting the emphasis of policy implications towards possible policy instruments, which could be used to achieve the wanted policy outcomes, would be helpful from the policymakers' perspective. Furthermore, policy recommendations are sometimes conflicting, even within the CBCF literature. Particularly in the case of urban density policies and urban development more generally, the policy recommendations split. Urban density and compact city policies are supported by some studies and questioned by others. The missing consensus may hinder decision-making (Zborel et al 2012). Thus, we review the actual results regarding the relationship between urban development and the CBCF in order to clarify this policy topic. The research questions of the review are: RQ1: What sort of policy implications does the CBCF literature give for different spatial scales? RQ2: What do different studies find in terms of the relationship between urban structure and CBCFs? This paper is outlined as follows: section 2 presents the review process, section 3 the policy analysis, section 4 the review of the relationship between urban development and CBCFs, and section 5 the conclusions. Section 2.1 presents the selection procedure of the reviewed studies and the used review framework, and the following subsections describe how the analysis of the policy recommendations (RQ1) and the results of interest (RQ2) were done. Sections 3 and 4 provide the main results of the review and relevant discussion. The policy suggestions at each policy level are summarized in tables 1-3, in subsections 3.3-3.5. Although the review focuses on analyzing the policy recommendations given by the authors of the reviewed literature, we have taken a step further and provide suggestions for practical policy instruments, even if this is not done in the original sources. In the conclusions (section 5) we give guidelines for future research. 2. Review process 2.1.
Selection and organization of the reviewed studies The purpose of the review was to analyze and summarize the policy implications of the CBCF literature from a spatial point of view. Thus, the reviewed studies were selected based on the following criteria: 1. The study presents a full CBCF (not only selected consumption categories) of a certain geographic area, showing the division of emissions into different consumption categories (instead of industrial sectors). 2. The study reports original research. Reviews and discussion papers were excluded. 3. The study is peer-reviewed and published in English in an academic journal or as a book chapter. The main interest of the review was consumer carbon footprints. Thus, we excluded studies focusing on industry linkages or trade flows, which may assess consumption-based emissions but do not look at the results from a consumer's angle (Criterion 1). In addition, we excluded partial assessments, which focus on certain consumption categories instead of the full CBCF. We included only original research papers in order to organize and analyze the first-hand policy implications of the CBCF literature (Criterion 2). However, previous review and policy papers were used as additional references. We included all studies published until June 2018. We used a systematic procedure to collect the studies for our review. We started with a snowball method, collecting all publications based on our knowledge, and adding new publications to the collection based on the references of the initial set of publications. This was followed by a systematic literature search with the Scopus database using the following string: We screened all papers to exclude those not fulfilling the above three criteria. The snowball method yielded 108 studies, out of which 30 were excluded after screening. The Scopus search resulted in 2074 studies. The majority of these were excluded after screening the titles and abstracts, leaving 119 studies for closer reading. Of these, 25 were accepted to the final review collection. Thus, the total review collection was composed of 103 studies (78 from the snowball collection and 25 from Scopus). The reviewed studies and some key information are presented in the supplementary information (SI) (table S1), which is available online at stacks.iop.org/ERL/14/093001/mmedia. We used the same collection of papers in our separate review on the comparability of CBCF studies, which focuses on conceptual and technical issues (Heinonen et al 2019). We created a review framework to organize the reviewed studies (figure 1). We used the framework throughout the review to position the papers according to their spatial scale and policy implications. The generalizability of the results and policy implications increases with the increasing spatial scale. However, when the spatial scale is narrowed down, the level of detail of the analyses increases. This allows more practical and individual policy implications. The spatial scale affects the research topics as well. A detailed spatial scale allows more detailed analyses of urbanization and urban structure. The funnel of spatial scale narrows down to household-level and product-level carbon footprint studies. However, these were excluded from the review, which focuses on geographic spatial scales (dashed line in figure 1). Policy analysis The policy analysis was conducted by reading through the discussion sections of the 103 reviewed studies and systematically collecting the given policy suggestions for different spatial scales.
In order to collect numerical information on how many times specific types of policies have been recommended, we selected upper-level policy categories that emerged from the whole review collection. Later we divided these into policy instruments and policy outcomes. While policy outcomes include suggestions related to the wanted outcomes, for example changing consumption behavior or technological solutions, policy instruments are the actual policy tools or incentives to achieve the wanted outcomes. The selected policy categories for the numerical analysis were: Policy instruments: 1. CBA should be an official accounting method (in addition to PBA). 2. Carbon pricing policies (a carbon cap and trade, emission trading schemes, carbon taxes, subsidies to renewables, etc). Policy outcomes: 3. Behavioral change (changing consumption behavior). 4. Technological solutions (energy efficiency, production technologies, etc). 5. Tailored policies for different groups or areas, context sensitivity. 6. The compact city, urban density policies. In general, policy instruments include carbon pricing, command and control (CAC), meaning regulation, and voluntary incentives (Requate 2005). [Content spilled here from one of the summary tables: a suggestion to direct the demand for specific goods and raw materials to countries where the environmental pressure caused by the production is known to be low; its benefit for countries with sustainable production and raw material extraction practices; and its challenges, namely what would be the incentive or political instrument, possible conflict with WTO rules, and, e.g., companies selling carbon credits that do not correspond to real emission reductions.] However, only carbon pricing policies and CBA as an official accounting method were frequently explicitly mentioned in the reviewed literature, and thus included in the numerical analysis. Some studies highlight specific voluntary and regulatory tools, such as green labels. We discuss these in more detail in the qualitative policy analysis (subsections 3.2-3.5). We used keywords to search for the relevant policy discussions from the papers. The used keywords were: 1. Policy discussions: 'poli*'. The main focus of the policy analysis was on reading and qualitatively evaluating the policy implications of the discussion sections. We only noted down whether the authors supported or questioned a specific policy. If a policy was mentioned but not commented upon by the authors, we did not note it down for the numerical analysis. For example, many authors mentioned some of the climate change mitigation policies of the case country, but neither supported nor criticized them. Nonetheless, policy analysis is vulnerable to subjective interpretations, which should be taken into account in the interpretation of the results. In order to analyze the impact of the spatial scale on the policy recommendations, we classified the spatial scales of the studies into seven categories: multi-national, national, sub-national (regional), city, sub-city (neighborhood or similar), urban zone, and settlement type (urban-rural). Multi-national indicates studies that include several countries, for example, those of the EU or the whole world. However, studies that include case cities from several countries are classified as city-scale studies. National studies focus on one country. Sub-national indicates sub-national regions other than cities, for example provinces. Sub-city indicates neighborhoods or postal code areas, that is to say, areas that are generally smaller than cities. Urban zone indicates travel zones or similar zones within a city.
Settlement type indicates an urban-rural comparison based on the population and/or density of the studied settlements. Review of the results on the relationship between urban structure and CBCFs The review of the results on the relationship between urban structure and CBCFs was conducted by reading through the results sections of the 103 reviewed studies and selecting those that included sub-national comparative analyses of the level of urbanization. Thus, out of the larger number of studies that had a sub-national or more detailed spatial scale, only those that used clearly defined variables (such as area type or density) to describe the urban structure differences were used in this section. We found 35 such papers. A rather substantial share of sub-national papers approach the urbanization issue by calculating average CBCFs for a wide range of different-sized spatial units, ranging from small super-output areas in London (Minx et al 2009), through individual cities in Finland (e.g. Heinonen and Junnila 2011a, 2011b), all the way to Chinese provinces (Yan and Minjun 2009), or their combinations (Xie et al 2015). Unfortunately, these papers rarely include a rigorous description or analysis of the characteristics of each spatial unit, for example, the city in question. Thus, it is difficult to use them in the comparative analysis of the relationship between urban development and CBCFs, even though they are useful (for example, in visualizing the spatial distribution of emissions and highlighting the differences between the production- and consumption-based approaches). Numerical policy analysis The spatial scale of the study affects the policy recommendations (figure 2). The spatial scale of the study affects the discussed policy levels as well (figure 2). As can be expected, city-scale and more detailed scale studies emphasize city policies, whereas national and sub-national regional studies focus on national-level policies and multi-national studies emphasize international policies. However, many studies provide a policy discussion that goes beyond the spatial scale of the study. In addition, it is common in the CBCF literature to give guidelines for households and consumers, and sometimes companies, directly. Most of the studies give no priority order for the policy level. In general, the need for international cooperation is highlighted in the literature (Davis and Caldeira 2010, Levitt et al 2017). However, it is acknowledged that international cooperation is often slow, whereas cities, companies, and individual consumers can take more immediate action. The time dimension reveals some interesting patterns as well (figure 2). Particularly, the call for official CBA has emerged quite recently in the empirical CBCF literature, although the benefits of CBA have been discussed more generally in some early studies as well (Peters 2009, Davis and Caldeira 2010). In addition, city policies increase their role in the literature after 2010. This is probably directly connected to the spatial scales of the studies. The number of studies with a sub-national or more detailed spatial scale started to increase steeply around 2010 (Heinonen et al 2019). It should be noted that figure 2 only illustrates how much emphasis is given to each policy recommendation and policy level in the CBCF literature. Since one paper can discuss several policy aspects, the percentages in figure 2 illustrate the 'hits' in the whole literature instead of giving the share of studies that support or question each policy.
The latter are given in tables S3-S6 (in the SI). Also, some papers do not give any policy recommendations. The policy aspects included in the numerical policy analysis are not exhaustive, although the majority of the found policy recommendations fell under the chosen categories. In the following qualitative policy analysis, we discuss various policy aspects more broadly. Figure 3 provides an overview of the policy recommendations of the reviewed literature. We classify the policy recommendations into policy outcomes and policy instruments. Policy outcomes include policy recommendations that instruct what should be done, but suggest no incentives. Policy instruments are policy tools or incentives that can be used to achieve the wanted policy outcomes. We found that the emphasis of the policy recommendations in the reviewed literature is clearly on policy outcomes rather than policy instruments. A simple example of this is that the majority of the papers suggest changing consumer behavior towards more sustainable consumption patterns, but few are concerned about how consumers are to be persuaded to make this change. Discussion of the possible policy instruments would target this question. Qualitative policy analysis We categorize the policy outcomes into behavioral change, technological solutions, tailored policies, and sustainable urban planning (figure 3). The last category is different from the one we used in the numerical analysis. In the numerical analysis, we were interested in the conflict of the recommendations related to urban density, and thus selected urban density policies as one of the examined policy categories. However, in the qualitative analysis we found that the recommendations related to urban planning do not focus only on urban density, but on planning more generally. Thus we use a more general 'sustainable urban planning' category in figure 3. It should be noted that the policy outcome categories are overlapping. For example, many authors discuss sustainable consumption, which encompasses both behavioral change and technological solutions (for example, using green product labels to guide consumers). Similarly, the suggested tailored policies often include these two aspects. Tailored policies mean suggestions to target different population segments or geographic areas with different policies. In general, tailored policies can be seen as a subcategory or an overarching category for other policy outcomes. Many of the reviewed studies, 26% in total (table S3 in the SI), recommend tailored policies. We divide the policy instruments into three aggregated categories: carbon pricing, command and control (CAC), and voluntary actions (following Requate 2005, Holden and Linnerud 2011). In addition, CBCF reporting and targets themselves form an information tool that can be used either voluntarily or as a mandatory steering tool (i.e. official CBA for GHG emissions). In general, any policy instrument can be used to realize any policy outcome. Implicitly, the reviewed literature seems to encourage voluntary action, since command and control policies and regulation in general are rarely discussed. Some exceptions to this are presented in the following subsections. Carbon pricing, including carbon taxes, emission trading schemes, and subsidies to renewables, is the most often discussed policy instrument: it is explicitly mentioned in 33% of the reviewed papers (table S3 in the SI). However, some authors raise it only to discuss some concerns related to it.
In the numerical analysis, we only report calls for official CBA and carbon pricing, since regulation and voluntary-based policy instruments are rarely mentioned explicitly. However, some papers do promote specific voluntary or regulatory tools, which are discussed in the following subsections. The relationship between economic growth and GHG emissions underlies the policy discussions. It is explicitly mentioned in 48% of the reviewed papers (table S3 in the SI), and implicitly present in many of the rest. The need to reduce consumption is a direct policy implication of the CBCF literature, which is often lightly discussed among other policy implications. However, reduced consumption is difficult to reconcile with the aspiration of continuous economic growth. The issue is particularly evident in the case of developing economies. Climate change mitigation policies should not jeopardize reducing the inequalities between countries and income groups (Murthy et al 1997, Hubacek et al 2017a, 2017b, Serio 2017, Wiedenhofer et al). In the following subsections, we present and analyze the policy implications of the reviewed literature at three policy levels: international, national, and city levels. A summary of the policy recommendations at each policy level is given at the end of each subsection (tables 1-3). In the summary tables, we aim to suggest the practical policy tools and policy instruments that are required to realize the policy recommendations, even when they are not directly suggested in the original sources. In addition, we list some of the benefits and challenges of each policy suggestion. Despite some policy discussion, these papers with a broad geographic scope often lack concrete advice for policies. Perhaps it is difficult to provide policy implications that would cover various countries. For example, Hertwich and Peters highlight that policy priorities depend on the country. Steen-Olsen et al present an interesting policy idea, though: they note that different regions have different advantages from an environmental perspective and that international trade could actually serve to optimize the environmental impacts globally. For example, companies could direct their demand for specific goods and raw materials to countries where the environmental pressure caused by their production is known to be low (see also Chen et al 2016b, for a similar discussion on cities). However, they do not discuss what could be the policy instruments or incentives to achieve this. Hubacek et al (2017a, 2017b) raise another important international policy issue: they discuss the global inequality of carbon footprints. They examine whether the goals of the United Nations (UN) to mitigate climate change and to end poverty are in contradiction with each other. They call for policies addressing the unfair global income distribution and the carbon intensity of lifestyles in developed countries. International policies Studies on a more detailed spatial scale provide global and international policy implications as well. Clarke et al suggest that developed countries should invest in the decarbonization of their supply chains in developing countries. Dolter and Victor draw similar conclusions. Both papers include the suggestion of substituting local low-carbon production for GHG-intensive imports as well. These sorts of policies could be put into practice by border tax adjustments (BTAs). However, BTAs related to embodied emissions have been widely debated (Sato, 2014).
For example, Andrew and co-authors highlight that BTA may encourage supplying countries to regulate their GHG emissions as well. In contrast, Jakob and Marschinski and Sakai and Barrett discuss the uncertainties of BTA in reducing global GHG emissions. National policies National-level policies are a popular topic in the reviewed CBCF literature (figure 2). In particular, the national and sub-national regional scale studies focus on national-level policy implications. Similar to the multi-national studies, carbon leakage is a recurring topic. Mach et al suggest policies that would target the segments of society responsible for the highest carbon footprints, which often means the highest income groups. More subtle policy suggestions touching upon the issue of income differences are provided as well. In a study on China, Wiedenhofer et al highlight that social and redistributive policies interact with climate and energy policy. They call for efforts enabling sustainable lifestyles for all and promote the coordination of social and environmental policies. Ottelin et al (2018b) give support for such a policy strategy by revealing how the redistributive policies of welfare states improve carbon equity between the different income groups. Regional equality is discussed in the literature as well. Sub-national regional studies highlight the need to take regional characteristics into account in local and national decision-making. However, Markaki and co-authors also discuss possible rebound effects related to energy efficiency measures. Rebound effects occur when energy efficiency decreases the price of the energy service, for example, the price of heating. Due to the lower price, the consumption of the energy service (or other goods and services) may actually increase, which counteracts the original energy-saving purpose. Thomas and Azevedo specifically study the rebound effects of residential energy efficiency investments. Based on their findings, they promote carbon pricing: enacting pollution taxes or auctioned permits that internalize the externalities of energy use. Carbon pricing is supported by many other authors as well. Maraseni et al (2016b) remind us that the subsidies for coal and oil must be cut. However, some concerns about carbon pricing policies are raised in the literature as well. Weber and Matthews discuss the problematics of carbon taxes. If carbon taxes are implemented at the national level, they will not cover imported goods, which is particularly problematic in low-carbon economies. BTAs could solve the problem, but as discussed above, they have their own downsides. Similarly, Wood and Dey discuss the possible negative impacts of emission trading schemes on Australian industries, although they do not oppose carbon pricing directly. In addition, some authors remind us that carbon pricing affects lower-income groups more than others, since many basic needs (such as heating and daily transportation) have a relatively high GHG intensity (Gill and Moeller 2018). As a solution, Ottelin et al (2018b) suggest combining carbon pricing with additional income transfers to lower-income groups. In addition to various carbon pricing policies, information dissemination programs are suggested as a policy instrument, particularly in order to change consumer behavior (Bin and Dowlatabadi 2005, Nässén et al 2015, zba et al 2017).
Curiously, Nässén and co-authors highlight that promoting pro-environmental attitudes may actually be more important for support for climate policy than for consumer behavior, since the impact of the latter is limited. Sustainable consumption choices may have rebound effects as well. For example, giving up car ownership and other actions that save money in addition to emissions may lead to shifts in consumption that counteract the intended emission savings. City policies Around half of the reviewed papers (53%) have a more detailed spatial scale than the national or sub-national regional level. For the purpose of the review, we further divided these studies into four classes according to the scale: city, sub-city, urban zone, and settlement type. These studies often focus on city policies in their policy discussion (figure 2). The city-scale studies highlight the benefits of CBA for cities. In order to implement effective mitigation strategies, it is important to have accurate, comparable, and comprehensive GHG accounting (Wiedmann 2016, Fry et al 2018, see also the review by Lombardi et al 2017). Several authors state clearly that CBA should be adopted routinely in cities (Paloheimo and Salmi 2013, Feng et al 2014, Chen et al 2016a). Wiedmann proposes the concept of a 'city carbon map,' which is a coherent, matrix-like, simultaneous representation of CBCFs and production-based GHG emissions. In addition, several authors discuss more generally the importance of including the indirect global environmental pressure of cities in policy discussions (Schulz 2010, Athanassiadis et al 2016, Millward-Hopkins et al 2017). As in the national-scale studies, carbon pricing is a popular topic in the more detailed scale studies as well (table S3 in the SI). Particularly interesting for cities is the promotion of trans-local carbon trading schemes, meaning carbon trading among cities (Chen et al 2016a, Mi et al 2016). Net importer cities could require importing companies to purchase carbon credits from net exporter cities, which would use the funds to decarbonize the production. This would lower the CBCF of the net importer cities. From the perspective of the net importer cities, carbon offsets invested in cities and countries where the imported emissions originate can, in many cases, be more efficient and economical than focusing on the territorial emissions alone (Chen et al 2016a). The policy recommendations of the CBCF literature related to urban planning and urbanization are missing a consensus. Several authors discuss urban density policies or the possible environmental benefits of urban density, and some of these authors support urban density policies. However, whether a specific paper supports or questions urban density policies is not explained by the geographic location or spatial scale of the study alone (figure 2 in subsection 3.1). Perhaps because of the missing consensus on urban density policies, several authors suggest tailored policies for urban, suburban, and rural areas (Baiocchi et al 2010, Ala-Mantila et al 2013, Jones and Kammen 2014, Ottelin et al 2015). In order to clarify the reasons for the conflicting policy recommendations, we review the actual results on the relationship between urban development and CBCFs in the following section. We find that the results vary as well. In addition, the impact of urban variables on CBCFs is often low or statistically insignificant (see subsection 4.4 and table S2 in the SI).
Thus, the given policy recommendations appear to reflect the empirical findings. The existing literature allows the justification of various policy recommendations. While carbon pricing and urban density policies are often discussed separately, Gill and Moeller and Ottelin et al (2017, 2018a) point out that these policies are interrelated. If the emissions of car ownership and use are targeted with other policies, such as motor fuel taxes, it diminishes the potential impact of urban density policies due to the increasing rebound effects (Ottelin et al 2018a). High motor fuel taxes bring the GHG intensity of car ownership and use close to the GHG intensity of other forms of consumption per monetary unit, and thus it makes no difference whether consumers spend their money on car ownership and use or on something else. The conclusion is that with adequate carbon pricing policies, separate urban density policies are ineffective, but the demand for transit-oriented and car-free residential areas may increase due to the increasing expenses related to car ownership and use. Sustainable urban planning covers other aspects aside from urban density and transport planning. Several recent papers highlight the potential of cities to facilitate the sharing economy (Underwood and Zahran 2015, Ala-Mantila et al 2016, Fremstad et al 2018, Jones et al 2018). Decreasing household size in cities is a global trend. This increases the CBCF per capita due to decreased sharing of spaces, goods, and services between household members (Ala-Mantila et al 2016). However, cities and densely populated areas in general can facilitate sharing between households. Public spaces and public transport are traditional infrastructures for sharing, while online platforms have created new forms of peer-to-peer sharing, such as car pooling and hospitality services. Fremstad and colleagues suggest that cities could increase the reliability and credibility, and thus the volume, of peer-to-peer sharing, for example by regulation, licensing, and insurance policies. Ala-Mantila and co-authors remind us that sharing should focus on GHG-intensive goods and services in order to avoid the high rebound effects caused by economic savings. Carbon sinks and carbon stocks are discussed in the CBCF literature as well (Shigeto et al). Paloheimo and Salmi suggest investing in large-scale carbon sinks, such as forests, inside or outside city boundaries. Ottelin et al (2018a) remind us that it is important to start thinking beyond 'low carbon' and promote negative emission technologies (NETs), such as carbon capture and storage. Minx et al provide a comprehensive review of NETs. In addition, Ottelin et al highlight that design and planning in general have a low GHG intensity per monetary unit and suggest that planning should aim at creating value for the immaterial characteristics of the built environment rather than heavy construction. In addition to carbon pricing and urban planning, there are behavioral and technological policy suggestions for cities in the literature. In particular, local renewable energy production is promoted. 4. Relationship between urban structure and CBCFs 4.1. Scope of the review During the policy analysis, we found that the policy recommendations on urban density, and on urban development more generally, are missing a consensus. To clarify this important topic, we review here the actual results regarding the relationship between urban development and CBCFs.
Out of the 103 reviewed studies, 35 include a clearly defined variable to describe urban structure and a sub-national comparative analysis. The following review of the results focuses on these studies. The main findings of the 35 studies, with some key information, are presented in table S2 (in the SI). Most of the 35 studies compare absolute emissions between units with different degrees of urbanization, usually ranging from urban to rural with a differing number of categories between these extremes. A smaller share (13 of the 35) focus on the effect of sub-urbanization or urban sprawl. Most of the papers use some kind of classification of municipalities or other administrative units that divides up the areas based on their degree of urbanization, but a dichotomous urban-rural variable, without finer-grained resolution of different types of areas, is used rather often as well (9/35). Absolute CBCF comparisons When comparing the averages of the absolute CBCF without controlling for any background variables, the more urban areas tend to have higher footprints (Serio 2017, Fremstad et al 2018; table S2 in the SI). The majority of the reviewed papers for this section (19 papers) conclude that, generally, the higher the level of urbanization, the higher the consumption-based emissions. This result seems to hold regardless of the level of development of the country, as it is replicated in countries like China. However, a couple of contradicting results have been reported that state that the CBCF decreases with the increasing level of urbanization. For the US, Jones and Kammen reported that the rural dweller's footprint was 0.7 tons higher than that of a resident of the urban core. For the UK, Minx et al concluded that, in their assessment, three urban settlement types had slightly lower carbon footprints than the two rural settlement types, but also highlighted the large variation within each of these groups. Also, for Finland, Heinonen et al (2013b) found that the average emissions of a middle-income rural dweller are a bit higher than those of a middle-income dweller residing in the country's capital. The conclusions about urban sprawl, meaning comparisons of absolute average emissions between urban and suburban areas, are somewhat more controversial. In Finland, several studies have found inner urban residents to have the highest CBCF. Areal or personal carbon footprint In our separate review on the comparability of CBCF studies, which focuses on conceptual and technical issues (Heinonen et al 2019), we discussed that there are actually two types of CBCF that differ significantly in their scope, but are reported as the same. We named these the areal carbon footprint (ACF) and the personal carbon footprint (PCF). The ACF covers all consumption-based emissions caused by economic activities within the borders of the studied area, irrespective of who causes them, whereas the PCF covers all consumption-based emissions caused by the residents of the studied area, irrespective of where the emissions are caused. There are also hybrids of the ACF and PCF in the literature, and the scope is generally not stated clearly. From the perspective of the impact of urban development on CBCFs, the most important difference between the two CBCF types is that the ACF typically includes governmental consumption and gross fixed capital formation (GFCF), which hybrids may or may not include, and which the pure PCF, by definition, does not (see Heinonen et al 2019, for details).
In particular, the investments in new construction and infrastructure that are included in the GFCF are important when studying the impacts of urbanization on the CBCF. Thus, this may explain the difference in the absolute results described above, since all the studies that find a lower CBCF in urban areas than in suburban or rural areas study the PCF. Another important issue is that when the level of detail of the spatial scale is increased, it becomes impossible to assess the ACF, which requires data on economic activities within the area, such as national or regional accounts. Although some studies have added the GFCF to the PCF, this has been done by giving equal shares to all residents of the area, since there is no data available for more personalized allocation (Ottelin et al 2018b). Thus, it should be noted that currently practically all CBCF studies with a more detailed spatial scale than the city use the PCF, which is typically based on household expenditure surveys and lacks insights into the consumption of public goods and services, and the GFCF. Statistical analyses explaining the CBCF The usual criticism of CBCF studies that only compare the averages of dwellers living in areas with different degrees of urbanization is that they often remain purely descriptive (Baiocchi et al 2010) and do not allow analyzing and separating the relationships between urban development and several other variables connected with the variable of interest: the CBCF. Aiming to correct these shortcomings, 21 out of the 35 studies on urbanization in our sample include some sort of statistical analysis, usually a single or multivariable regression analysis. Of course, the definitions and ranges of control variables, be they spatial or socioeconomic, vary between studies, and thus straightforwardly comparing the results has its caveats. By relatively common consent, these articles highlight the role of wealth as the main explanatory variable behind the CBCF, with urban structure playing a smaller role in determining emissions. However, as follows from the methodology (environmentally extended input-output analysis), the role of expenditures or income in determining emissions is rather as expected, and thus, even though the impact of urban variables is often quantitatively small, it is still interesting. In most papers, when controlling for relevant socioeconomic background variables, such as income and household size, the relationship between urban development and CBCFs is negative: the more urban the area is, the smaller the per capita emissions are (Nässén 2014, Ala-Mantila et al 2014, Fremstad et al 2018). However, some exceptions can be found: for example, in China, Liu et al concluded that urbanization and population density increase per capita household CO2 emissions, and Li et al concluded that the direct and indirect CO2 emissions of households increase by 2.9% and 1.1% respectively for every increase of one percent in urbanization. However, Li et al did not control for household size, which may have partly affected the result. Also, Serio and Klasen and Serio found similar relationships for the Philippines. Thus, many have concluded that the process of urbanization, when happening alongside overall rising affluence levels and changing lifestyles in the less developed world, poses a problem for climate change mitigation (Heinonen et al 2013a, 2013b). Of course, the magnitude of the reported urbanization relationships differs.
The highest reported difference between the most urban and most rural area type, other factors controlled for, is around 20% (Fremstad et al 2018). Quantitatively smaller differences are also found: for example, in Finland, Ala-Mantila et al reported the difference between urban and rural areas to be approximately 15%, and in Sweden, Nässén et al found that longer geographical distances increased emissions by about 9% relative to average emissions. Some authors have also concluded that the effect of urbanization (or a variable describing it) might not be universal. For example, Minx et al found that the CBCF decreases with population density, and that decreases in the CBCF are larger at lower densities, meaning that increasing the density of denser places is not as GHG effective as increasing the density of less dense places. Also, Jones and Kammen argued for nonlinearity, even though their conclusion is different: in their analysis, only the highest densities (3,000 people per square mile and above) have a decreasing effect on emissions. Some studies have found the impact of urban variables to be statistically insignificant (Ottelin et al 2015, 2018a). When different emissions categories are explained separately, the negative relationship seems to hold especially strongly for direct emissions (Ala-Mantila et al 2014, Gill and Moeller 2018) and for mobility in particular. Also, multivariable regression analysis, the most commonly used statistical technique in the CBCF literature, is unable to identify causal relationships, and, for example, Zhang et al have pointed out that the science of CBCF assessments has yet to unpack various effects that occur during the process of urbanization. To combat the problem, they innovatively utilize propensity-score matching in order to identify the different effects of rural-to-urban migration. They demonstrate how their technique prevents the overestimation of the effect of human settlements, apart from the socio-economic factors. Also, other kinds of more developed methods that are more common in the economics and econometrics literature, such as regression discontinuity and experimental designs, are still waiting to be used in order to truly find an answer about the effect of urbanization on consumption-based emissions. Perhaps some of the aforementioned differences in the reported impacts of the urban variables on the CBCF can be traced back to the sometimes very different contexts of the studies. There are cultural differences (perhaps partly traceable to historical reasons) in how a city population is typically distributed across different parts of the city structure. Inner city living provides a good example: in the US, it is often associated with lower incomes (Jones and Kammen 2014), whereas in Finland those living in the urban cores tend to be wealthier than the average person (Heinonen et al 2013a). Overall, the studies on the relationship between urban development and the CBCF are not unanimous in their conclusions, nor are they coherent in the approaches used to examine the relationship. Also, the footprinting methodologies vary; for example, the way of calculating infrastructure investments is likely to affect how the urbanization relationship appears. Moreover, the urban measures used are often very aggregated and based on administrative boundaries rather than on more useful structural definitions of different area types.
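To illustrate the kind of model behind the coefficients and significance tests cited above, the typical multivariable setup can be sketched as follows (a generic specification for illustration only, not the exact model of any reviewed paper):

\[ \ln(\mathrm{CBCF}_i) = \alpha + \beta\, U_i + \gamma \ln(\mathrm{income}_i) + \delta\, \mathrm{hhsize}_i + \boldsymbol{\theta}^{\top}\mathbf{x}_i + \varepsilon_i, \]

where $U_i$ is the urban variable of interest for household or area $i$ (an urbanization class, a density measure, or a dichotomous urban-rural indicator), $\mathrm{hhsize}_i$ is household size, and $\mathbf{x}_i$ collects the remaining socioeconomic controls. The sign and size of $\beta$ is what the studies above disagree on, and, as noted, such regressions estimated on cross-sectional data carry no causal interpretation by themselves.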
Thus, the comparability and practical usability of many of the results is not very strong, which reduces the suitability of the results for policymaking as well. In addition, there is a shortage of the time-series and longitudinal studies that are needed to make causal claims about the relationship.

Main findings and suggestions for improvements

In this systematic review, we reflected on the policy implications of the CBCF literature, meaning studies that use a consumption-based GHG assessment. We analyzed and summarized the policy implications for different spatial scales. In addition, we reviewed the results regarding the relationship between urban development and CBCFs in order to clarify why the policy implications are sometimes conflicting, particularly in the case of urban density policies and urban development more generally. For policymakers, we have summarized the current policy recommendations of the CBCF literature at the international level (table 1), national level (table 2), and city level (table 3) above. Official CBA, as a complement to PBA, and carbon pricing policies are the most highlighted policy instruments in the recent literature. The review of the policy recommendations revealed that their emphasis is on policy outcomes rather than on the policy instruments that are needed to achieve the wanted outcomes. A shift towards policy instruments would be helpful from the decision-makers' and policymakers' perspectives. In addition, the policy implications should be better grounded in the results of the study and previous literature. Then again, it is sometimes valuable to provide more visionary and creative policy suggestions as well, but it should be clarified when the policy implications are not directly derived from the results of the study. Comparing the policy recommendations of the CBCF literature to the recommendations of the climate change literature in general reveals similarities as well as some significant differences. For example, carbon pricing policies, technological solutions and changing travel behavior are promoted outside the CBCF literature as well. Based on our review, we conclude that the unique features which the consumption-based perspective can bring to policy discussions include responsibility for emissions, awareness of rebound effects, sustainable consumption and lifestyles, and tailored policies for different population segments. Adopting CBA enables wealthy cities and nations to see and take responsibility for emissions that are driven by their demand but take place outside their territorial boundaries (Wiedmann et al 2015, Afionis et al 2017). At the same time, it reveals the possible rebound effects and trade-offs related to climate actions (Ottelin et al 2018b). If we take, for example, the above-mentioned carbon pricing policies and technological solutions, the CBCF literature reveals limitations and challenges that cannot be captured by PBA alone. National carbon pricing policies may lead to increased consumption of imported goods, which may have high embodied emissions (Peters and Hertwich 2008). Similarly, technological investments (e.g. new infrastructure) may require imported products, whose embodied emissions are not included in the territorial accounting. However, such rebounds and trade-offs are case specific and depend on time, place and existing regulation (Chitnis et al 2014, Gill and Moeller 2018).
Thus, nations and cities should have continuous CBA reporting of their own to increase their awareness and to revise policy interventions accordingly. Perhaps the most obvious unique policy feature of the CBCF literature is the direct advice for consumers and households regarding sustainable consumption and lifestyles. The reviewed literature highlights that there is not one single solution, but various paths to sustainable lifestyles. However, there seems to be one profound shortcoming in the CBCF literature from the policy perspective. Few studies make any connection from the reported CBCFs to any suggested sustainable levels, for example, the planetary boundary framework (Steffen et al 2015) or the IPCC 1.5 °C warming scenario (IPCC 2018), leaving the findings without any baseline. More discussion on the sufficiency of the suggested policy approaches is called for. In addition, the review of the results on the relationship between urban development and CBCFs revealed that more caution is needed in the interpretation of the results. Only a small share of the reviewed studies actually operates at a precise enough level to allow for making strong claims about the relationship. However, the most accepted conclusion is that when urbanization is understood as a process influencing not only the spatial location but also lifestyles and consumption choices, urban dwellers with high levels of wealth and a low number of household members pose a challenge for climate change mitigation. On the other hand, the literature on the impacts of urban structure within cities is relatively thin and inconclusive. More studies focusing on detailed spatial scales are needed, particularly analyses using more elaborate area descriptions than the ones based on administrative boundaries. To increase comparability, more comparative studies using larger datasets are called for.

Directions of future research

Below we provide guidelines for future research collected from the most recent reviewed literature, meaning studies published between January 2015 and June 2018. Several recent studies highlight that further research on the underlying factors behind consumption and lifestyle choices is important in order to understand how behavior and the associated carbon footprints can be influenced. This includes understanding and modeling the choices of where people live in the first place (Gill and Moeller 2018), how they travel and migrate, how they interact within social, cultural and built environment networks (Poom and Ahas 2016), and how a sharing economy with environmentally beneficial outcomes could be supported (e.g. Fremstad et al 2018). Case studies specific to local circumstances and practices are as important as conceptual and generic models (Wiedmann 2016). In addition, the need to account for the rebound effects of behavior changes has been highlighted by many studies. For example, the community-wide infrastructure footprint (CIF), which focuses on the urban infrastructure that serves households, businesses, and industry, may sometimes be more practical, since it focuses on significant sectors that cities have a direct influence on. Lazarus et al and Erickson and Morgenstern use similar reasoning to support a focus on energy consumption, transportation, and waste management. However, one of the main concerns arising from the CBCF literature is that focusing on specific sectors may be insufficient for addressing the continuously increasing global emissions.
Rebound effects are one particular aspect that most other indicators cannot capture. However, the importance of infrastructure and other investments has been noted within the reviewed CBCF literature as well. Chen et al call for further studies of how business and government investments influence city carbon footprints. At a detailed spatial scale, there are difficulties in including investments and public consumption in the assessments (see subsection 4.3). The situation could be improved by covering the use of public goods and services at the household level in household budget surveys in the future (Ottelin et al 2018b) and by endogenizing capital in the multi-regional input-output (MRIO) models (Södersten et al 2018). Several studies have highlighted the opportunities and additional insights that can be gained from scenario or dynamic analyses that explore the consequences of certain policy options more explicitly (Chen et al 2017, Heinonen 2016, Millward-Hopkins et al 2017). In addition, there is a lack of the time-series and particularly longitudinal studies that are needed to make strong causal claims between policy actions and CBCF outcomes. Furthermore, conclusively answering the question about the density-CBCF relationship requires the use of more precise spatial classifications and GIS-based data about urban structures.
The Senate on Friday easily confirmed James Mattis to be President Trump’s secretary of Defense, hours after Trump’s inauguration.
Mattis, a retired Marine general who most recently served as commander of U.S. Central Command, is highly respected by both Republicans and Democrats for his military service.
He retired from the military in 2013, meaning he needed a waiver to bypass a law that says Defense secretaries must be out of uniform for at least seven years.
Congress easily passed the waiver last week, and Trump signed the waiver legislation as his first act as president.
Some Democrats had expressed concern about granting Mattis the waiver, citing the need to maintain civilian control of the military. But the concerns were not enough to prevent Mattis from becoming Pentagon chief. |
Laparoscopic transverse colectomy using a new articulating instrument

Laparoscopic complete mesocolic excision is the preferred approach for treating transverse colon cancer. Due to the anatomical complexity, mobilization and resection of the transverse colon can be technically challenging. This video demonstrates laparoscopic transverse colectomy using an articulating laparoscopic instrument for a 76-year-old female patient diagnosed with T-colon cancer.

INTRODUCTION

The proximal two-thirds of the transverse colon is derived from the midgut, while the distal one-third is derived from the hindgut. They are supplied by the middle and left colic arteries, respectively. From an anatomical perspective, the transverse colon lies close to the vital structures of the upper abdomen and is not fixed to the retroperitoneal space. Laparoscopic complete mesocolic excision (CME) is the preferred treatment option for transverse colon cancer due to its positive effect in terms of short-term morbidity, length of stay, and oncological outcome. Due to the anatomical complexity, mobilization and resection of the transverse colon can be technically challenging. Manipulating tissues and obtaining the optimal angle are difficult with conventional laparoscopic instruments. A robotic surgical system, which offered multi-jointed instruments, enhanced ergonomics, and three-dimensional vision, was developed to overcome these limitations. However, robot-assisted surgery is considerably more expensive than laparoscopic surgery. Several laparoscopic jointed instruments have been proposed as alternatives to the costly robotic systems. In this video, we present a laparoscopic transverse colon resection performed using a laparoscopic articulating instrument (Supplementary Video 1).

METHODS

The articulating laparoscopic instruments (ArtiSential; LIVSMED, Inc., Seongnam, Korea) were approved as class I medical devices by the Food and Drug Administration of Korea in 2019. The patient was a 76-year-old female with a body mass index of 22.2 kg/m2. She was diagnosed with moderately differentiated adenocarcinoma of the mid-transverse colon. Initial computed tomography revealed clinical stage T3N1 without distant metastasis. Laparoscopic transverse colectomy was performed. After endobronchial intubation, the patient was placed in a modified lithotomy position under general anesthesia. The 12-mm trocars were placed at the umbilicus and in the left upper quadrant. An 8-mm trocar was placed in the left lower quadrant, while 5-mm trocars were placed in the right upper and lower quadrants. During the procedure, the surgeon used the ArtiSential instruments through the left upper and lower quadrant ports. The procedure was performed with bimanual manipulation using fenestrated forceps (ArtiSential) in the left hand and a monopolar hook (ArtiSential) in the right hand. ArtiSential fenestrated forceps are designed to deliver atraumatic grasping of tissue through their horizontal serrations and unique articulating technology. These articulating instruments allow the surgeon to move easily in any direction without obstructing the laparoscope's field of view. First, intraoperative indocyanine green (ICG)-enhanced fluorescence imaging, following a subserosal ICG injection, was used to assess the appropriate dissection range. Using the 5-trocar approach, the middle colic artery and the colic branch of the gastrocolic trunk were ligated.
After skeletonization of the middle and right colic vessels, vessel ligation was performed using a harmonic scalpel (Ethicon US, Somerville, NJ, USA) and 5-mm endo-clips (Endovision Co., Ltd., Daegu, Korea), followed by laparoscopic CME (Fig. 1). The colon was transected intracorporeally using an endoscopic stapler (Signia Stapling System; Covidien, Tokyo, Japan). Following intracorporeal side-to-side anastomosis using the Signia Stapling System, a continuous suture with barbed thread was made from the distal edge of the enterotomy to the proximal edge. The intracorporeal side-to-side anastomosis was performed after checking the perfusion status using ICG. The specimen was placed in a specimen retrieval bag (Lapbag; Sejong Medical Corp., Paju, Korea), and the umbilical trocar site was extended to 5 cm. Then, the specimen was removed from the abdominal cavity (Fig. 2). After the wound retractor was placed through the midline incision, a pneumoperitoneum was formed. After applying surgical glue to the anastomosis site, a 200-mL Jackson-Pratt drain was inserted into the Douglas pouch through the left lower quadrant port.

RESULTS

The total operation time was 180 minutes with an estimated blood loss of 20 mL. There were no perioperative complications, and the patient was prescribed a soft diet on postoperative day 1. The patient was discharged on postoperative day 3. The final pathological diagnosis was pT3N1bM0. There were 34 harvested regional lymph nodes with three identified metastatic nodes.

DISCUSSION

The disadvantages of using conventional laparoscopic instruments with fixed joints include reduced dexterity, limited range of movement, and uncomfortable ergonomics. Obtaining an effective angle, or performing traction and countertraction during laparoscopic surgery, can be challenging. Although the robotic surgical system addresses these problems, the issue of cost-effectiveness persists. The new laparoscopic articulating instruments (ArtiSential) help surgeons obtain effective traction and countertraction through intuitive movements. The articulating joints of the instruments are synchronized with the user's wrist motion and can be moved through 360°, allowing for more versatile surgical procedures. Compared to conventional laparoscopic devices, these new instruments make it easier to grab, lift, or apply traction to tissue in narrow spaces. However, their gripping force is weaker and less firm than that of conventional laparoscopic devices. Hence, conventional straight instruments were used as assistive devices in the surgery. Moreover, the new instrument is larger and heavier than the conventional instruments. A smaller and lighter design would be more convenient for surgeons. Unfortunately, in cases with bulky tumors, narrow operative fields, or obese patients, surgical procedures with conventional laparoscopic instruments are more technically demanding. In this video, we present a standardized procedure for laparoscopic transverse colectomy using two ArtiSential instruments. The use of the articulating laparoscopic instrument is particularly helpful for exposing surgical planes and skeletonizing the primary feeding vessels. The quality of the surgical blood supply was assessed using ICG fluorescence angiography (FA). The intraoperative ICG FA was used to evaluate the perfusion of the anastomosis. A uniform green glow emitted after the ICG injection indicated adequate perfusion of the anastomosed colon site.
This was the first video presenting the clinical application of the newly released articulating laparoscopic instrument (ArtiSential). Laparoscopic transverse colectomy using an articulating laparoscopic instrument is safe and technically feasible.

Ethical statements

The Institutional Review Board of The Catholic University of Korea, Seoul St. Mary's Hospital approved this study and waived the requirement for informed consent (No. KC21ZISI0492), and we followed the principles of the Declaration of Helsinki for health research ethics.
Incorporating Structural Diversity of Neighbors in a Diffusion Model for Social Networks

Diffusion is known to be an important process governing the behaviours observed in network environments like social networks, contact networks, etc. For modeling the diffusion process, the Independent Cascade Model (IC Model) is commonly adopted, and algorithms have been proposed for recovering the hidden diffusion network based on observed cascades. However, the IC Model assumes the effects of multiple neighbors on a node to be independent and does not consider the structural diversity of nodes' neighbourhoods. In this paper, we propose an extension of the IC Model with the community structure of node neighbours incorporated. We derive an expectation maximization (EM) algorithm to infer the model parameters. To evaluate the effectiveness and efficiency of the proposed method, we compared it with the IC Model and its variants that do not consider the structural properties. Our empirical results, based on the MemeTracker dataset, show that after incorporating the structural diversity there is a significant improvement in the modelling accuracy, with a reasonable increase in run-time.
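For reference, the independence assumption that the abstract criticizes can be written out explicitly (this is the standard IC Model formulation, stated here for context rather than taken from the paper): if $A_t(v)$ is the set of neighbors of node $v$ that became active at step $t$ and $p_{u,v}$ is the influence probability on edge $(u, v)$, the probability that $v$ becomes active at step $t+1$ is

\[ P\bigl(v \text{ active at } t{+}1 \mid A_t(v)\bigr) = 1 - \prod_{u \in A_t(v)} \bigl(1 - p_{u,v}\bigr), \]

so every newly active neighbor gets an independent chance to activate $v$, regardless of how those neighbors are distributed across communities; the extension proposed in the paper replaces this purely multiplicative treatment with one that also depends on the community structure of the activating neighbors.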
package influxdb
import (
"fmt"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/influxdb/models"
"github.com/luopengift/log"
"github.com/luopengift/transport"
"github.com/luopengift/types"
)
type InfluxInput struct {
Addr string `json:"addr"`
DB string `json:"database"`
Precision string `json:"precision"`
User string `json:"username"`
Pass string `json:"password"`
QueryString string `json:"query"`
Buffer chan []interface{}
client client.Client
}
func NewInfluxInput() *InfluxInput {
return new(InfluxInput)
}
func (in *InfluxInput) Query(str string) (models.Row, error) {
response, err := in.client.Query(client.Query{
Command: str,
Database: in.DB,
})
if err != nil {
return models.Row{}, fmt.Errorf("influxdb query error:%v", err)
}
if response.Error() != nil {
return models.Row{}, fmt.Errorf("influxdb response error:%v", response.Error())
}
result := response.Results[0]
if result.Err != "" {
return models.Row{}, fmt.Errorf("influxdb result error:%v", result.Err)
}
return result.Series[0], nil
}
func (in *InfluxInput) Init(cfg transport.Configer) error {
in.DB = "mydb"
in.Precision = "ns"
err := cfg.Parse(in)
if err != nil {
return err
}
in.client, err = client.NewHTTPClient(client.HTTPConfig{
Addr: in.Addr,
Username: in.User,
Password: in.Pass,
})
in.Buffer = make(chan []interface{}, 1000)
return err
}
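// NOTE (illustrative, not part of the original source): based on the json
// struct tags on InfluxInput and the defaults set in Init above, the plugin
// appears to expect a JSON configuration of roughly the following shape.
// The field names come from the struct tags; all values below are made-up
// placeholders.
//
//	{
//	  "addr":      "http://127.0.0.1:8086",
//	  "database":  "mydb",
//	  "precision": "ns",
//	  "username":  "reader",
//	  "password":  "secret",
//	  "query":     "SELECT value FROM cpu_load LIMIT 100"
//	}
//
// "database" and "precision" fall back to the defaults set in Init ("mydb",
// "ns") when omitted; the other fields have no defaults.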
func (in *InfluxInput) Start() error {
data, err := in.Query(in.QueryString)
if err != nil {
return err
}
log.Debug("%#v, %#v", data.Name, data.Columns)
for _, dat := range data.Values {
in.Buffer <- dat
}
return nil
}
func (in *InfluxInput) Read(p []byte) (int, error) {
b, err := types.ToBytes(<-in.Buffer)
if err != nil {
return 0, err
}
n := copy(p, b)
return n, nil
}
// Close releases the underlying InfluxDB client connection
// (the original body called in.Close() recursively).
func (in *InfluxInput) Close() error {
return in.client.Close()
}
func (in *InfluxInput) Version() string {
return "0.0.1"
}
func init() {
transport.RegistInputer("influxdb", NewInfluxInput())
}
|
// Dispatch and parse the underlying bits based on the Type of the
// message.
//
// One useful pattern would be to check for a specific type -- or a specific
// interface (such as a messages.Locatable) to extract the information
// you need.
func (m Message) Parse() (messages.Message, error) {
switch m.Header.Type {
case 1, 2, 3:
return m.Position()
case 5:
return m.Voyage()
case 18:
return m.ClassBPosition()
case 21:
return m.NavigationAid()
case 24:
return m.StaticData()
case 4:
return m.BaseStation()
default:
return nil, nil
}
} |
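// Minimal usage sketch (illustrative only): it relies on the messages.Message
// return type and the messages.Locatable interface mentioned in the comment
// above; handleRaw is a hypothetical caller-side helper, not part of the
// original package.
func handleRaw(m Message) error {
	parsed, err := m.Parse()
	if err != nil {
		return err // decode failed
	}
	if parsed == nil {
		return nil // message type not covered by Parse (default case)
	}
	// Branch on capability rather than concrete type: if the concrete
	// message implements Locatable, its position can be used uniformly.
	if loc, ok := parsed.(messages.Locatable); ok {
		_ = loc // extract position information here
	}
	return nil
}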
/**
* Inserts a newly created node in the tree by taking care of priority operators.
* @param bistromathique: The bistromathique structure.
* @param root: The root of the expression tree.
* @param new_expression_node: The new expression node to insert in the tree.
*/
void update_root_expression(t_bistromathique bistromathique, t_expression_tree **root,
t_expression_tree *new_expression_node)
{
if ((*root)->first == NULL || (*root)->second == NULL)
{
free_expression(root);
*root = new_expression_node;
}
else if (is_priority_operator(bistromathique, new_expression_node->operator) &&
!is_priority_operator(bistromathique, (*root)->operator) && new_expression_node->level >= (*root)->level)
{
free_expression(&new_expression_node->first);
new_expression_node->first = (*root)->second;
(*root)->second = new_expression_node;
}
else
{
free_expression(&new_expression_node->first);
new_expression_node->first = *root;
*root = new_expression_node;
}
} |
package main
import (
"fmt"
)
func main() {
var resultado bool
resultado = 5 < 6
fmt.Println("5 < 6 = ", resultado)
resultado = (5 > 6) && (4 < 3)
fmt.Println("(5 > 6) && (4 < 3) = ", resultado)
resultado = (5 > 6) || (4 > 3)
fmt.Println("(5 > 6) || (4 > 3) = ", resultado)
resultado = (5 > 6) || (4 > 3)
fmt.Println("!((5 > 6) || (4 > 3)) = ", !resultado)
}
|
Dysfunctional Activation of the Cerebellum in Schizophrenia

The cognitive dysmetria framework postulates that the deficits seen in schizophrenia are due to underlying cerebello-thalamo-cortical dysfunction. The cerebellum is thought to be crucial in the formation of internal models for both motor and cognitive behaviors. In healthy individuals there is a functional topography within the cerebellum. Alterations in the functional topography and activation of the cerebellum in schizophrenia patients may be indicative of altered internal models, providing support for this framework. Using state-of-the-art neuroimaging meta-analysis, we investigated cerebellar activation across a variety of task domains affected in schizophrenia and in comparison to healthy controls. Our results indicate an altered functional topography in patients. This was especially apparent for emotion and working memory tasks, and may be related to deficits in these domains. Results suggest that an altered cerebellar functional topography in schizophrenia may be contributing to the many deficits associated with the disease, perhaps because of dysfunctional internal models.
A “limited number” of Westpac customers are still without access to online banking services following a major outage which began on Sunday morning.
Customers were left unable to view their account details in Westpac Live – the bank’s online banking platform – leaving them exasperated and turning to social media to vent their frustration.
“We have had an issue with online banking whereby some customers were unable to view their account details in Westpac Live. While we have resolved the issue for many customers, we are aware that a limited number of people are still having issues logging in,” a spokesperson for the bank said in a statement today.
Cardless Cash, ATM, EFTPOS and telephone banking were working as normal throughout the outage.
In a social media statement on Sunday, Westpac told its 200,000 plus Facebook followers that it was “terribly sorry” for the incident, and was “working as quickly as we possibly can” to fix the issue.
On Monday evening another post from the bank acknowledged that a small number of customers were still having issues logging in, advising those impacted to either send direct messages or call for assistance.
A spokesperson for Westpac today told CIO Australia that the bank was “still investigating the cause of the outage”.
The outage comes just weeks after the bank took down Westpac Live and mobile banking services during the last weekend in May to “upgrade ready for new features”. It also follows a major IT systems glitch in February which affected online banking and branch systems for more than 24 hours, which was blamed on a range of “technical issues”.
In November last year, Westpac suffered a week-long systems failure, which impacted online and mobile banking platforms, leaving it unable to process payments or provide balance statements. |
AIDS and development: an inverse correlation?

There is no question about the seriousness of the AIDS epidemic in Africa. It has been clear for some years that problems associated with the epidemic are enormous. The depth of the problem has been documented repeatedly. Now that we have moved into what is described below as the acceptance-response phase, it is timely to look beyond present problems to the legacy that will remain once the epidemic has been confronted by medical science and endured by its millions of victims and their families, most especially in Africa. This article argues that as a result of many pre-existing conditions, having little to do with AIDS, aggressive responses to the epidemic, especially by the international community, are likely to undermine African autonomy and impede future development, particularly politically and psychologically. While AIDS is one of many deterrents to development, it has, in many affected countries, contributed significantly to undermining their future prospects. From several perspectives the AIDS epidemic can be seen to have taken an enormous toll on Africa, especially in the eastern and southern regions. This article confines itself to the non-medical consequences of the epidemic where it has been most profound. Although the epidemic has possibly passed its peak, evidence of the toll of the disease, with its related health problems, is now clear. In its wake are serious multidimensional problems: anthropological, sociological, economic and political. AIDS has greatly diminished prospects for increased autonomy in many countries, and dashed hopes for major improvements in their quality of life. These multisectoral dimensions of the AIDS impact are likely to affect African development negatively for many years to come. As an extreme example of the enormous difficulties, a recent report suggests an HIV rate of 50 per cent in seven armies in central Africa. Such reports highlight consequences that affect development prospects in general. These and related problems undermine the sense of nationalism and national identity so eagerly fought for in all parts of Africa during the independence era. This article focuses on these dimensions of the epidemic and suggests possible consequences in terms of national development in those countries most affected.
People lying on the sidewalk catch the eye of a pedestrian in downtown Washington. A new survey found that 1 in 10 people ages 18 to 25 had experienced homelessness in the past year. (Michael S. Williamson/The Washington Post)
Hemmed in by low wages, pricey rental markets and family instability, more young people are crashing on couches of friends or acquaintances, sleeping in cars or turning to the streets, a new study has found.
Researchers with Chapin Hall, a youth policy center at the University of Chicago, surveyed in 2016 and 2017 more than 26,000 young people and their families across the country to gauge how many of them had been homeless during some period of the previous year. Their results were alarming: One in 10 people ages 18 to 25 had experienced homelessness. For adolescents, the number was 1 in 30. They concluded that nearly 3.5 million young adults and 660,000 adolescents had been homeless within the previous year.
Matthew Morton, a Chapin Hall research fellow, said he aims to dispel the notion that homelessness afflicts mostly older men. His survey identified college students and graduates and employed young people who struggled to find a permanent place to stay. Researchers also found it was no less prevalent in rural areas than in urban ones.
"Our findings probably challenge the images of homelessness. Homelessness is young," Morton said. "It's more common than people expect and it's largely hidden."
That was true in the District, where officials counted more homeless children and parents than homeless single men last year. The number of homeless families soared by more than 30 percent between 2015 and 2016, according to a federal estimate released last spring. City officials and advocates for the poor attributed the growth in homeless families to rising home costs and a city policy of guaranteeing any homeless family shelter.
The researchers relied on a broad definition of homelessness and counted as homeless young people who had run away from home — even for a night — as well as those who were forced to sleep on couches or stay with friends temporarily. Children who run away are more likely to face homelessness as adults, Morton said, and many of the young people who researchers spoke to were forced out of family homes after they came out as gay.
The study marked the first time researchers had used a nationally representative survey to capture the picture of youth homelessness. Previously, researchers relied on "point in time" counts, which tallied only people who were homeless on a particular day. Morton said those counts probably underestimated the prevalence of youth homelessness, because young people are more likely to move in and out of it than older people.
The findings "are staggering. They are alarming, but they're not necessarily surprising," Morton said. "Many young people are getting hammered in this economy . . . and far too many youth have experienced trauma and lack stable family situations. You have a major affordable housing crisis."
Neither Kera Pingree, 21, nor Dee Baillet, 27, fits the stereotype of homelessness. Pingree is pursuing a degree at Southern Maine Community College while working at the University of Southern Maine, and Baillet is a college graduate. They share their experiences as policy advisers for advocacy groups for homeless youth.
When Baillet came out as gay to his mother at 17, she told him he could no longer live at home. So he went to his bedroom and packed a duffel bag, unsure of where he would go.
"I had to turn to the streets," Baillet said. "I stayed around with family and friends, just couch surfing as much as I could."
But it didn't stop him from graduating from high school and ending up in college, where he finished early and took a teaching job in Indianapolis. But his housing situation fell through. Broke and unsure of where to go, he first considered staying in a shelter, but it felt unsafe. So he slept in his car.
Baillet works as a supervisor at an urgent care clinic in Columbus, Ohio, where he grew up. He volunteers at a homeless shelter. He said that many of the short-term solutions for homelessness are geared toward older men and that shelters can be unsafe and uncomfortable places for homeless youth.
Pingree, 21, had a daughter at 15 and was separated from the girl three years later, when family conflict forced a move. Pingree bounced from a friend's house to a partner's house, where Pingree said there were bed bugs and drugs.
Pingree remained separated from the little girl for four months until space at a family shelter opened up.
"When I was 16, I was really itching to get out of the house," said Pingree, who faced abuse and food shortages at home. "Optimally there would have been a program to allow me to live on my own with my daughter."
Pingree is a youth leader with Youth and Community Engagement, a community organizing group attached to the university. Pingree also serves as a policy adviser on homelessness for the National Youth Advisory Council.
Now living in a two-bedroom apartment in Portland, Maine, Pingree said people are surprised to learn of their homeless past.
"There's a lot of times people are shocked to find out someone was homeless," Pingree said.
Having emigrated to the U.S. in 1977, Abadan-born Keyvon Behpour, 36, is a well-established professional photographer, making his home in Bridgeport, Connecticut. His recent clients include IBM, AT&T and Virgin Atlantic Records.
But what went on display at the National Press Club in Washington, DC, last April showed a personal side; a very sensitive, poetic, even mystical side. Many of his black and white images of Iran, taken during a recent trip, seem to glow with poise and dignity.
Behpour's fine art photography has been featured in magazines in Iran and is on display in the permanent art collection titled El Espiritu de la Tierra, at the U.S. Embassy chancellery building in La Paz, Bolivia. |
Statins attenuate thrombin-stimulated tissue factor expression on human endothelial cells through regulation of c-Jun

Statins decrease agonist-stimulated expression of the procoagulant protein tissue factor (TF) in human endothelial cells, but the signaling pathways affected by statins have not been clearly delineated. We previously showed that thrombin promotes the increased expression of TF on human endothelial cells through increased expression of c-Fos and increased phosphorylation of c-Jun, components of the AP-1 transcription factor. When human endothelial cells were pretreated overnight with 1 µM fluvastatin or simvastatin, thrombin (2 U/ml) stimulation of TF expression was greatly attenuated. Statin pretreatment did not significantly affect the expression of c-Fos, but decreased the expression of c-Jun. Moreover, c-Jun activation by its upstream regulator c-Jun NH2-terminal kinase (JNK) was greatly attenuated. Thus, fluvastatin and simvastatin impede the expression of TF in human endothelial cells through regulation of the AP-1 transcription factor component c-Jun: c-Jun expression is reduced and its activation is diminished due to the decreased activity of the stress-activated kinase JNK. These studies demonstrate that statins can serve as potent pharmacological inhibitors of the JNK/c-Jun pathway in a pleiotropic, non-cholesterol-dependent manner. This, then, provides a novel mechanism for statin-mediated reduction in thrombotic events.
package org.jw.vprc.repository;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.junit.Before;
import org.junit.Test;
import org.jw.vprc.TestClock;
import org.jw.vprc.domain.ReportCard;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.DirtiesContext;
import java.util.Date;
import java.util.List;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
public class ReportCardRepositoryTest extends RepositoryTestBase {
@Autowired
private ReportCardRepository reportCardRepository;
@Before
public void setUp(){
customisedClock = new TestClock(DateTime.parse("04/02/2016 20:27:05", DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss")));
}
@Test
public void verifyFindByFindByReportDate() throws Exception {
reportCardRepository.save(createReportCard("publisher1", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher2", customisedClock.now()));
List<ReportCard> reportCardsByReportDate = reportCardRepository.findByReportDate(customisedClock.now().toDate());
assertEquals(2, reportCardsByReportDate.size());
}
@Test
public void verifyFindByPublisherId() throws Exception {
reportCardRepository.save(createReportCard("publisher1", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher1", customisedClock.tick()));
List<ReportCard> reportCardsByPublisherId = reportCardRepository.findByPublisherId("publisher1");
assertEquals(2, reportCardsByPublisherId.size());
}
@Test
public void verifyDeleteByPublisherId() throws Exception {
reportCardRepository.save(createReportCard("publisher1", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher2", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher1", customisedClock.tick()));
List<ReportCard> reportCardsDeletedByPublisherId = reportCardRepository.deleteByPublisherId("publisher1");
assertEquals(2, reportCardsDeletedByPublisherId.size());
}
@Test
public void verifyDeleteByPublisherIdAndReportDate() throws Exception {
DateTime initialCustomisedTime = customisedClock.now();
DateTime customisedTimeAfterTick = customisedClock.tick();
reportCardRepository.save(createReportCard("publisher1", initialCustomisedTime));
reportCardRepository.save(createReportCard("publisher2", initialCustomisedTime));
reportCardRepository.save(createReportCard("publisher1", customisedTimeAfterTick));
List<ReportCard> reportCardsDeletedByPublisherIdAndReportDate = reportCardRepository.deleteByPublisherIdAndReportDate("publisher1", customisedTimeAfterTick.toDate());
assertEquals(1, reportCardsDeletedByPublisherIdAndReportDate.size());
assertEquals(2, reportCardRepository.count());
}
@Test
public void verifyFindByPublisherIdAndReportDateBetween() throws Exception {
Date startDate = DateTime.parse("01/02/2016 00:00:00", DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss")).toDate();
Date endDate = DateTime.parse("28/02/2016 23:59:59", DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss")).toDate();
reportCardRepository.save(createReportCard("publisher1", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher2", customisedClock.now()));
reportCardRepository.save(createReportCard("publisher1", customisedClock.now().plusDays(1)));
reportCardRepository.save(createReportCard("publisher1", customisedClock.now().plusDays(5)));
reportCardRepository.save(createReportCard("publisher1", customisedClock.now().plusMonths(1)));
reportCardRepository.save(createReportCard("publisher1", customisedClock.now().plusMonths(1).plusDays(3)));
List<ReportCard> reportCardsForFebruary2016 = reportCardRepository.findByPublisherIdAndReportDateBetween("publisher1", startDate, endDate);
assertEquals(3, reportCardsForFebruary2016.size());
}
} |
This invention relates to a latching device for insertion into a gasoline dispensing nozzle and for holding the dispensing handle in at least two predetermined flow rate positions; the invention particularly relates to a latching device which is portable and removable from the gasoline dispensing nozzle and which may be carried about and reused with different styles and sizes of gasoline dispensing nozzles.
Gasoline dispensing nozzles of the type found in commercial retail gas stations are typically of two sizes and styles. The first type is described in U.S. Pat. No. 3,085,600, issued Apr. 16, 1963, wherein a gasoline dispensing handle having a high degree of curvature is positionable within a frame member to regulate the delivery rate of gasoline. Such gasoline nozzles when utilized by commercial gasoline service stations wherein an employee of the service station delivers the gasoline have incorporated therein a latching member as a part of the nozzle assembly which permits the attendant to latch the delivery handle into one of typically three positions for slow, medium, and fast gasoline delivery. This enables the attendant to fill the gasoline tank of the customer while he is attending to other needs and services for the customer.
A second type of gasoline dispensing nozzle is illustrated in U.S. Pat. No. 3,653,415, issued Apr. 4, 1972. This dispensing nozzle is of a different size configuration than the aforementioned nozzle, but operates generally according to the same technique. This dispensing nozzle utilizes a delivery handle of a different configuration and, when used by commercial service stations wherein gas is delivered by a service station attendant, utilizes a latching mechanism attached to the delivery handle, which mechanism latches against a lever having one or more detents built into the handle guard assembly.
Both of the aforementioned gasoline dispensing nozzles have a built-in pressure sensor which automatically disconnects the delivery handle from the internal nozzle valving mechanism whenever the customer's gasoline tank approaches a filled condition. When this occurs, the handle delivery member is effectively disabled from operating the internal flow valve mechanism, which prevents a service station attendant from inadvertently permitting gasoline to overfill the tank and run out onto the ground.
Both of the aforementioned gasoline dispensing nozzles are typically used in commercial service stations throughout the United States and other countries. However, with the increasing popularity of self-service gasoline stations, it has frequently become the practice for service station owners to disable and disconnect the built-in latching mechanism. Apparently this has been thought to be necessary in order to prevent the customer from utilizing the built-in latching mechanism, either because it is believed that the customer would be unfamiliar with the proper operation of this mechanism, or because the customer may not be knowledgeable as to how to disconnect the latching mechanism under conditions where the customer desires less than a full tank of gasoline. Therefore, in self-service gasoline service stations it is typically necessary for the customer to continually stand and squeeze the gasoline delivery handle for so long as he wishes gasoline to be delivered into his tank, preventing him from attending to any other service needs of his vehicle. In cold weather climates this is particularly burdensome, for not only is it undesirable to stand outside in cold weather while waiting for the gasoline tank to become filled, but also the temperature of the gasoline dispensing nozzle becomes extremely cold by virtue of the gasoline flow through it. Accordingly, it would be useful if the customer were provided with a portable latching mechanism which would enable the customer to set the gasoline flow rate and leave the nozzle unattended while gasoline is being delivered to the tank. The automatic pressure sensing feature in such gasoline dispensing nozzles will prevent any overfillings from occurring and will disable the gasoline delivery handle from the nozzle flow control valve whenever the tank approaches a filled condition.
Is Divorce Counseling for Happily Married Women Really Necessary?
The marriage statistics have been drilled into everyone’s heads: half of all “I do’s” end up as “I don’ts.” But now, news of a new service that counsels happily married women on the ins and outs of divorce has some people wondering about tempting fate.
Last month, Manhattan divorce attorney Jeff Landers spoke with CBS New York about the importance of advising women who have no intention of leaving their husbands about what they’d need to know if they changed their minds. Consider it a marital insurance policy.
On the website for his firm, Bedrock Divorce, he makes the case that, for female business owners, "divorce-proofing … is a part of a sound financial plan, like any other risk management you would normally undertake — after all, you have insurance to protect you against other unforeseen events." And for women who don't have businesses, it's time to get informed about the family finances.
Just because a woman is “happily married,” doesn’t mean she shouldn’t have a solid, working knowledge of her financial status, cash flow and net worth. Researchers cite “concerns about money” as one of the number one triggers for marital arguments and conflict, and personally, I feel that many of these worries are based on misunderstanding and miscommunication. Why not eliminate some of this confusion before it causes trouble? |
Modeling and computation of the shape of a compressed axisymmetric gas bubble

Abstract
A mathematical model for the accurate computation of the shape of a three-dimensional axisymmetric gas bubble compressed between two horizontal plates is presented. An explicit form of the interface is given by the equation of minimal surfaces, based on an approximation of slow displacements. Conservation of the volume is guaranteed by the introduction of a multiplier. Contact angles are taken into account. The shape of the bubble is numerically solved with a quasi-Newton method that presents fast convergence properties. Numerical results are presented for various compression rates and contact angles.
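To make the ingredients named in the abstract (minimal-surface equation, volume conservation via a multiplier, contact angles) concrete, the underlying variational problem can be sketched generically as follows; this is a standard constrained-minimal-surface formulation written for orientation only, not the authors' exact equations. The equilibrium interface $S$ minimizes surface energy subject to the prescribed gas volume $V_0$,

\[ \min_{S}\; \sigma\,|S| \quad \text{subject to} \quad V(S) = V_0, \qquad \mathcal{L}(S,\lambda) = \sigma\,|S| - \lambda\bigl(V(S) - V_0\bigr), \]

whose stationarity condition yields a constant-mean-curvature (minimal-surface-type) equation for the axisymmetric interface, with the multiplier $\lambda$ enforcing volume conservation and the prescribed contact angles entering through the boundary conditions on the two plates.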
Phoenix Tashlin-Clifford continues to suffer from nausea, dizziness and headaches after suffering three concussions in the past 15 months.
His father, Neil Clifford, said the head injuries have changed his 12-year-old son, who went from the second highest scorer on his North Toronto minor peewee AA team to someone "who didn't want to touch the puck" and was "afraid every time he leaves the bench."
"I didn't want to get hit really badly again," Phoenix said.
The Greater Toronto Hockey League, which has 502 competitive teams, says there are 20 concussion cases on the books for this past season, but it acknowledges the injury is under-reported. An informal weekend survey for the Star of three minor peewee teams (11-year-olds) and a bantam squad (14-year-olds) revealed 14 concussion cases.
Phoenix's father has banded together with several others concerned about the impact of checking in the sport to form the Toronto Non-Contact Hockey League, a new loop expected to have six peewee teams for the 2009-10 season, with hopes to expand to older age groups. Twenty-five hopefuls showed up at Forest Hill Arena last night for the league's first tryout.
They hope to play a 25-game season (the GTHL plays 36) so families can enjoy more time together away from the rink.
"We want to offer a competitive league without bodychecking," said Clifford. "There seems to be a national collective amnesia about the game. ... This is a game, not a war. There is a strange code, a pervasive and aggressive attitude that prevails in hockey."
Bodychecking among the GTHL competitive teams begins at age 11, although GTHL executive director Scott Oakman notes that 95 per cent of house leagues within the organization offer no bodychecking.
For Phoenix, the idea of separating hockey and bodychecking makes sense. Aside from the GTHL, he also played on the non-bodychecking Palmerston Public School hockey club that just won the Toronto District School Board championship. "That," he said, "was fun."
On the other side of the issue is Joey Creery, a 12-year-old who talks of the throbbing at the back of his head after his second concussion of the season, but he wants to stay in the GTHL. His father, Michael, wants him to try the new league.
"Joey had an intentional hit that tried to knock him into the middle of next week," Michael Creery said.
"After my first one, I was a little scared to go into the corners, but I am over that now," said Joey, a Grade 6 student at John Ross Robertson Public School. "I was crying really hard after my second concussion because I was scared they wouldn't let me play any more. I'd rather play contact because it's more exciting. I feel it's more competitive if there is contact."
Clifford, Bill Robertson, David Carter-Whitney, Jacqueline Friedland and Joseph Kalenteridis created the TNCHL so that players like Phoenix and Joey have a choice.
"This is a no-bodychecking league, not a non-contact league," said Clifford. "It's a fast-paced game and there is an inherent risk involved. We aren't pretending there aren't any. Our goal would be to lessen the body contact so (there might be fewer incidents)."
Neurosurgeon Dr. Charles Tator, who has devoted three decades to spinal cord and brain injury prevention, said the idea of developing a safer environment to play hockey is "a respectable motive."
"I believe that a child who has had multiple concussions should not be playing hockey," said Tator.
"The number of concussions sustained by kids is alarming. It is believed that the child's brain is more vulnerable to head injury and concussion than the adult brain."
Dr. Pat Bishop, who for two decades has chaired the Canadian Standards Association Committee, which certifies hockey equipment, warns that eliminating bodychecking won't end concussions.
"You can put a no-bodychecking rule in, but it doesn't mean a player won't get hit in the head," Bishop said. "Eliminating the bodychecking doesn't help if it leads to high sticking to the head, collisions or players nailing others along the boards and heads hitting the ice."
He says other issues in concussions include coaches not teaching the fundamentals of checking and players not knowing how to defend themselves.
TNCHL organizers met with Oakman in March with the hopes of joining the GTHL for the 2009-10 season. But Oakman said with GTHL tryouts starting this week, it was too late to accommodate the new loop.
"The (TNCHL) wanted their program for this year," he said. "Some of the organizers had kids in the league that they wanted to play this year and they weren't willing to wait."
Oakman said the GTHL will consider the TNCHL for membership next season. In the meantime, since the league is not registered with a Hockey Canada organization, it is considered an outlaw league. It must buy its own insurance and its teams cannot play against Hockey Canada-registered teams or in sanctioned tournaments.
Additionally, a player cannot play in the TNCHL and with a Hockey Canada team at the same time and will face a penalty if they decide to return to a Hockey Canada organization, such as the GTHL, next year. |
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: modules/common/proto/vehicle_signal.proto
#ifndef PROTOBUF_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto__INCLUDED
#define PROTOBUF_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto__INCLUDED
#include <string>
#include <google/protobuf/stubs/common.h>
#if GOOGLE_PROTOBUF_VERSION < 3003000
#error This file was generated by a newer version of protoc which is
#error incompatible with your Protocol Buffer headers. Please update
#error your headers.
#endif
#if 3003000 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION
#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers. Please
#error regenerate this file with a newer version of protoc.
#endif
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/arena.h>
#include <google/protobuf/arenastring.h>
#include <google/protobuf/generated_message_table_driven.h>
#include <google/protobuf/generated_message_util.h>
#include <google/protobuf/metadata.h>
#include <google/protobuf/message.h>
#include <google/protobuf/repeated_field.h> // IWYU pragma: export
#include <google/protobuf/extension_set.h> // IWYU pragma: export
#include <google/protobuf/generated_enum_reflection.h>
#include <google/protobuf/unknown_field_set.h>
// @@protoc_insertion_point(includes)
namespace apollo {
namespace common {
class VehicleSignal;
class VehicleSignalDefaultTypeInternal;
extern VehicleSignalDefaultTypeInternal _VehicleSignal_default_instance_;
} // namespace common
} // namespace apollo
namespace apollo {
namespace common {
namespace protobuf_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto {
// Internal implementation detail -- do not call these.
struct TableStruct {
static const ::google::protobuf::internal::ParseTableField entries[];
static const ::google::protobuf::internal::AuxillaryParseTableField aux[];
static const ::google::protobuf::internal::ParseTable schema[];
static const ::google::protobuf::uint32 offsets[];
static void InitDefaultsImpl();
static void Shutdown();
};
void AddDescriptors();
void InitDefaults();
} // namespace protobuf_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto
enum VehicleSignal_TurnSignal {
VehicleSignal_TurnSignal_TURN_NONE = 0,
VehicleSignal_TurnSignal_TURN_LEFT = 1,
VehicleSignal_TurnSignal_TURN_RIGHT = 2
};
bool VehicleSignal_TurnSignal_IsValid(int value);
const VehicleSignal_TurnSignal VehicleSignal_TurnSignal_TurnSignal_MIN = VehicleSignal_TurnSignal_TURN_NONE;
const VehicleSignal_TurnSignal VehicleSignal_TurnSignal_TurnSignal_MAX = VehicleSignal_TurnSignal_TURN_RIGHT;
const int VehicleSignal_TurnSignal_TurnSignal_ARRAYSIZE = VehicleSignal_TurnSignal_TurnSignal_MAX + 1;
const ::google::protobuf::EnumDescriptor* VehicleSignal_TurnSignal_descriptor();
inline const ::std::string& VehicleSignal_TurnSignal_Name(VehicleSignal_TurnSignal value) {
return ::google::protobuf::internal::NameOfEnum(
VehicleSignal_TurnSignal_descriptor(), value);
}
inline bool VehicleSignal_TurnSignal_Parse(
const ::std::string& name, VehicleSignal_TurnSignal* value) {
return ::google::protobuf::internal::ParseNamedEnum<VehicleSignal_TurnSignal>(
VehicleSignal_TurnSignal_descriptor(), name, value);
}
// ===================================================================
class VehicleSignal : public ::google::protobuf::Message /* @@protoc_insertion_point(class_definition:apollo.common.VehicleSignal) */ {
public:
VehicleSignal();
virtual ~VehicleSignal();
VehicleSignal(const VehicleSignal& from);
inline VehicleSignal& operator=(const VehicleSignal& from) {
CopyFrom(from);
return *this;
}
inline const ::google::protobuf::UnknownFieldSet& unknown_fields() const {
return _internal_metadata_.unknown_fields();
}
inline ::google::protobuf::UnknownFieldSet* mutable_unknown_fields() {
return _internal_metadata_.mutable_unknown_fields();
}
static const ::google::protobuf::Descriptor* descriptor();
static const VehicleSignal& default_instance();
static inline const VehicleSignal* internal_default_instance() {
return reinterpret_cast<const VehicleSignal*>(
&_VehicleSignal_default_instance_);
}
static PROTOBUF_CONSTEXPR int const kIndexInFileMessages =
0;
void Swap(VehicleSignal* other);
// implements Message ----------------------------------------------
inline VehicleSignal* New() const PROTOBUF_FINAL { return New(NULL); }
VehicleSignal* New(::google::protobuf::Arena* arena) const PROTOBUF_FINAL;
void CopyFrom(const ::google::protobuf::Message& from) PROTOBUF_FINAL;
void MergeFrom(const ::google::protobuf::Message& from) PROTOBUF_FINAL;
void CopyFrom(const VehicleSignal& from);
void MergeFrom(const VehicleSignal& from);
void Clear() PROTOBUF_FINAL;
bool IsInitialized() const PROTOBUF_FINAL;
size_t ByteSizeLong() const PROTOBUF_FINAL;
bool MergePartialFromCodedStream(
::google::protobuf::io::CodedInputStream* input) PROTOBUF_FINAL;
void SerializeWithCachedSizes(
::google::protobuf::io::CodedOutputStream* output) const PROTOBUF_FINAL;
::google::protobuf::uint8* InternalSerializeWithCachedSizesToArray(
bool deterministic, ::google::protobuf::uint8* target) const PROTOBUF_FINAL;
int GetCachedSize() const PROTOBUF_FINAL { return _cached_size_; }
private:
void SharedCtor();
void SharedDtor();
void SetCachedSize(int size) const PROTOBUF_FINAL;
void InternalSwap(VehicleSignal* other);
private:
inline ::google::protobuf::Arena* GetArenaNoVirtual() const {
return NULL;
}
inline void* MaybeArenaPtr() const {
return NULL;
}
public:
::google::protobuf::Metadata GetMetadata() const PROTOBUF_FINAL;
// nested types ----------------------------------------------------
typedef VehicleSignal_TurnSignal TurnSignal;
static const TurnSignal TURN_NONE =
VehicleSignal_TurnSignal_TURN_NONE;
static const TurnSignal TURN_LEFT =
VehicleSignal_TurnSignal_TURN_LEFT;
static const TurnSignal TURN_RIGHT =
VehicleSignal_TurnSignal_TURN_RIGHT;
static inline bool TurnSignal_IsValid(int value) {
return VehicleSignal_TurnSignal_IsValid(value);
}
static const TurnSignal TurnSignal_MIN =
VehicleSignal_TurnSignal_TurnSignal_MIN;
static const TurnSignal TurnSignal_MAX =
VehicleSignal_TurnSignal_TurnSignal_MAX;
static const int TurnSignal_ARRAYSIZE =
VehicleSignal_TurnSignal_TurnSignal_ARRAYSIZE;
static inline const ::google::protobuf::EnumDescriptor*
TurnSignal_descriptor() {
return VehicleSignal_TurnSignal_descriptor();
}
static inline const ::std::string& TurnSignal_Name(TurnSignal value) {
return VehicleSignal_TurnSignal_Name(value);
}
static inline bool TurnSignal_Parse(const ::std::string& name,
TurnSignal* value) {
return VehicleSignal_TurnSignal_Parse(name, value);
}
// accessors -------------------------------------------------------
// optional .apollo.common.VehicleSignal.TurnSignal turn_signal = 1;
bool has_turn_signal() const;
void clear_turn_signal();
static const int kTurnSignalFieldNumber = 1;
::apollo::common::VehicleSignal_TurnSignal turn_signal() const;
void set_turn_signal(::apollo::common::VehicleSignal_TurnSignal value);
// optional bool high_beam = 2;
bool has_high_beam() const;
void clear_high_beam();
static const int kHighBeamFieldNumber = 2;
bool high_beam() const;
void set_high_beam(bool value);
// optional bool low_beam = 3;
bool has_low_beam() const;
void clear_low_beam();
static const int kLowBeamFieldNumber = 3;
bool low_beam() const;
void set_low_beam(bool value);
// optional bool horn = 4;
bool has_horn() const;
void clear_horn();
static const int kHornFieldNumber = 4;
bool horn() const;
void set_horn(bool value);
// optional bool emergency_light = 5;
bool has_emergency_light() const;
void clear_emergency_light();
static const int kEmergencyLightFieldNumber = 5;
bool emergency_light() const;
void set_emergency_light(bool value);
// @@protoc_insertion_point(class_scope:apollo.common.VehicleSignal)
private:
void set_has_turn_signal();
void clear_has_turn_signal();
void set_has_high_beam();
void clear_has_high_beam();
void set_has_low_beam();
void clear_has_low_beam();
void set_has_horn();
void clear_has_horn();
void set_has_emergency_light();
void clear_has_emergency_light();
::google::protobuf::internal::InternalMetadataWithArena _internal_metadata_;
::google::protobuf::internal::HasBits<1> _has_bits_;
mutable int _cached_size_;
int turn_signal_;
bool high_beam_;
bool low_beam_;
bool horn_;
bool emergency_light_;
friend struct protobuf_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto::TableStruct;
};
// ===================================================================
// ===================================================================
#if !PROTOBUF_INLINE_NOT_IN_HEADERS
// VehicleSignal
// optional .apollo.common.VehicleSignal.TurnSignal turn_signal = 1;
inline bool VehicleSignal::has_turn_signal() const {
return (_has_bits_[0] & 0x00000001u) != 0;
}
inline void VehicleSignal::set_has_turn_signal() {
_has_bits_[0] |= 0x00000001u;
}
inline void VehicleSignal::clear_has_turn_signal() {
_has_bits_[0] &= ~0x00000001u;
}
inline void VehicleSignal::clear_turn_signal() {
turn_signal_ = 0;
clear_has_turn_signal();
}
inline ::apollo::common::VehicleSignal_TurnSignal VehicleSignal::turn_signal() const {
// @@protoc_insertion_point(field_get:apollo.common.VehicleSignal.turn_signal)
return static_cast< ::apollo::common::VehicleSignal_TurnSignal >(turn_signal_);
}
inline void VehicleSignal::set_turn_signal(::apollo::common::VehicleSignal_TurnSignal value) {
assert(::apollo::common::VehicleSignal_TurnSignal_IsValid(value));
set_has_turn_signal();
turn_signal_ = value;
// @@protoc_insertion_point(field_set:apollo.common.VehicleSignal.turn_signal)
}
// optional bool high_beam = 2;
inline bool VehicleSignal::has_high_beam() const {
return (_has_bits_[0] & 0x00000002u) != 0;
}
inline void VehicleSignal::set_has_high_beam() {
_has_bits_[0] |= 0x00000002u;
}
inline void VehicleSignal::clear_has_high_beam() {
_has_bits_[0] &= ~0x00000002u;
}
inline void VehicleSignal::clear_high_beam() {
high_beam_ = false;
clear_has_high_beam();
}
inline bool VehicleSignal::high_beam() const {
// @@protoc_insertion_point(field_get:apollo.common.VehicleSignal.high_beam)
return high_beam_;
}
inline void VehicleSignal::set_high_beam(bool value) {
set_has_high_beam();
high_beam_ = value;
// @@protoc_insertion_point(field_set:apollo.common.VehicleSignal.high_beam)
}
// optional bool low_beam = 3;
inline bool VehicleSignal::has_low_beam() const {
return (_has_bits_[0] & 0x00000004u) != 0;
}
inline void VehicleSignal::set_has_low_beam() {
_has_bits_[0] |= 0x00000004u;
}
inline void VehicleSignal::clear_has_low_beam() {
_has_bits_[0] &= ~0x00000004u;
}
inline void VehicleSignal::clear_low_beam() {
low_beam_ = false;
clear_has_low_beam();
}
inline bool VehicleSignal::low_beam() const {
// @@protoc_insertion_point(field_get:apollo.common.VehicleSignal.low_beam)
return low_beam_;
}
inline void VehicleSignal::set_low_beam(bool value) {
set_has_low_beam();
low_beam_ = value;
// @@protoc_insertion_point(field_set:apollo.common.VehicleSignal.low_beam)
}
// optional bool horn = 4;
inline bool VehicleSignal::has_horn() const {
return (_has_bits_[0] & 0x00000008u) != 0;
}
inline void VehicleSignal::set_has_horn() {
_has_bits_[0] |= 0x00000008u;
}
inline void VehicleSignal::clear_has_horn() {
_has_bits_[0] &= ~0x00000008u;
}
inline void VehicleSignal::clear_horn() {
horn_ = false;
clear_has_horn();
}
inline bool VehicleSignal::horn() const {
// @@protoc_insertion_point(field_get:apollo.common.VehicleSignal.horn)
return horn_;
}
inline void VehicleSignal::set_horn(bool value) {
set_has_horn();
horn_ = value;
// @@protoc_insertion_point(field_set:apollo.common.VehicleSignal.horn)
}
// optional bool emergency_light = 5;
inline bool VehicleSignal::has_emergency_light() const {
return (_has_bits_[0] & 0x00000010u) != 0;
}
inline void VehicleSignal::set_has_emergency_light() {
_has_bits_[0] |= 0x00000010u;
}
inline void VehicleSignal::clear_has_emergency_light() {
_has_bits_[0] &= ~0x00000010u;
}
inline void VehicleSignal::clear_emergency_light() {
emergency_light_ = false;
clear_has_emergency_light();
}
inline bool VehicleSignal::emergency_light() const {
// @@protoc_insertion_point(field_get:apollo.common.VehicleSignal.emergency_light)
return emergency_light_;
}
inline void VehicleSignal::set_emergency_light(bool value) {
set_has_emergency_light();
emergency_light_ = value;
// @@protoc_insertion_point(field_set:apollo.common.VehicleSignal.emergency_light)
}
#endif // !PROTOBUF_INLINE_NOT_IN_HEADERS
// @@protoc_insertion_point(namespace_scope)
} // namespace common
} // namespace apollo
#ifndef SWIG
namespace google {
namespace protobuf {
template <> struct is_proto_enum< ::apollo::common::VehicleSignal_TurnSignal> : ::google::protobuf::internal::true_type {};
template <>
inline const EnumDescriptor* GetEnumDescriptor< ::apollo::common::VehicleSignal_TurnSignal>() {
return ::apollo::common::VehicleSignal_TurnSignal_descriptor();
}
} // namespace protobuf
} // namespace google
#endif // SWIG
// @@protoc_insertion_point(global_scope)
#endif // PROTOBUF_modules_2fcommon_2fproto_2fvehicle_5fsignal_2eproto__INCLUDED
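The header above is protoc-generated and only declares the VehicleSignal message; a caller is expected to include it and link against the matching generated .pb.cc. The following is a minimal sketch of how the accessors and the inherited Message serialization API might be exercised from C++; the include path is inferred from the include guard and the build wiring is an assumption, not something the header itself establishes.

// Hypothetical caller for the generated VehicleSignal message.
// The include path below is inferred from the include guard; adjust it to your build.
#include <iostream>
#include <string>
#include "modules/common/proto/vehicle_signal.pb.h"

int main() {
  apollo::common::VehicleSignal signal;
  signal.set_turn_signal(apollo::common::VehicleSignal::TURN_LEFT);
  signal.set_high_beam(false);
  signal.set_emergency_light(true);

  // has_*() reflects the _has_bits_ bookkeeping of these optional proto2 fields.
  std::cout << "turn_signal set: " << signal.has_turn_signal() << "\n"
            << "turn_signal name: "
            << apollo::common::VehicleSignal_TurnSignal_Name(signal.turn_signal())
            << "\n";

  // Round-trip through the wire format via the google::protobuf::Message base class.
  std::string bytes;
  signal.SerializeToString(&bytes);
  apollo::common::VehicleSignal parsed;
  parsed.ParseFromString(bytes);
  std::cout << "emergency_light after parse: " << parsed.emergency_light() << "\n";
  return 0;
}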
|
// GetPendingRefundCalls gets all the calls that were made to GetPendingRefund.
// Check the length with:
// len(mockedRefunder.GetPendingRefundCalls())
func (mock *RefunderMock) GetPendingRefundCalls() []struct {
Ctx cosmossdktypes.Context
Req rewardtypes.RefundMsgRequest
} {
var calls []struct {
Ctx cosmossdktypes.Context
Req rewardtypes.RefundMsgRequest
}
mock.lockGetPendingRefund.RLock()
calls = mock.calls.GetPendingRefund
mock.lockGetPendingRefund.RUnlock()
return calls
} |
def shape_i(var, i, fgraph=None):
    """Return the symbolic length of dimension `i` of `var`.

    Uses the fgraph's shape feature when one is available (importing any
    missing owner nodes into it first); otherwise falls back to var.shape[i].
    """
if fgraph is None and hasattr(var, 'fgraph'):
fgraph = var.fgraph
if fgraph and hasattr(fgraph, 'shape_feature'):
shape_feature = fgraph.shape_feature
shape_of = shape_feature.shape_of
def recur(node):
if not hasattr(node.outputs[0], 'fgraph'):
for inp in node.inputs:
if inp.owner:
recur(inp.owner)
shape_feature.on_import(fgraph, node,
'gof.ops.shape_i')
if var not in shape_of:
recur(var.owner)
return shape_of[var][i]
return var.shape[i] |
1. Field of the Invention
The invention relates to a connector assembly mountable on a panel.
2. Description of the Related Art
Japanese Utility Model Examined Publication No. 7-53269 and FIG. 9 herein disclose a connector that is mountable on a panel. With reference to FIG. 9, the connector has a waiting-side male housing 2 to be mounted on a panel 1 and an assembling-side female housing 3 to be connected with the male housing 2. The female housing 3 has a terminal accommodating portion 4 with a vertically long rectangular outer shape. A jaw 5 bulges out from the rear end of the terminal accommodating portion 4 and faces the panel 1 in parallel. Female terminal fittings (not shown) are accommodated in the terminal accommodating portion 4, and wires 6 are drawn out through the rear surface of the terminal accommodating portion 4 to extend backward. The female housing 3 is covered from behind by a grommet 7 that engages the outer peripheral edge of the jaw 5. A leading end of the grommet 7 is widened and held in close contact with the panel 1 for sealing.
The jaw 5 of the connector may twist due to thermal shrinkage after molding or an external force during the use. The jaw 5 then loses its flatness and cannot face the panel 1 in parallel. Thus, there is a possibility that the leading end of the grommet 7 cannot be held securely in close contact with the panel 1 when the grommet 7 is attached to the jaw 5, thereby impairing its sealing ability.
The invention was developed in view of the above problem, and an object thereof is to provide a connector that can be mounted on a panel and reliably sealed against it.
The invention relates to a connector with a housing configured for mounting on a panel. The housing includes a terminal-accommodating portion for accommodating terminal fittings. A jaw bulges from an outer peripheral surface of the terminal accommodating portion, and preferably extends over substantially the entire periphery. The jaw faces the panel and is substantially parallel to the panel. A grommet is attached to the jaw and is held in close contact with the panel. At least one protrusion is provided on the jaw, and continuously or discontinuously surrounds the terminal-accommodating portion.
The protrusion reinforces the jaw and suppresses twisting or warping due to thermal shrinkage after molding or due to external forces during use. Thus, the jaw remains substantially flat and can be substantially parallel to the panel. Accordingly, the grommet can be attached to the jaw and held in close contact with the panel to ensure good sealing.
The protrusion is at an inner side of a panel-facing surface of the jaw covered by the grommet. Thus, the protrusion does not interfere with the grommet, and the grommet can be attached firmly to the jaw without moving onto the protrusion in a manner that would create a clearance.
The jaw has different length and width dimensions, and the longer sides of the jaw are the most prone to twisting or warping. However, crossing portions extend across at least parts of the jaw along the longer sides, inside the annular portion of the protrusion, and these crossing portions suppress the twisting and warping.
A projecting distance of the protrusion is sufficiently short to avoid interference with a receptacle of the mating housing when the housing is connected properly with the mating housing.
At least two protrusions preferably are formed on a panel-facing surface and an opposite surface of the jaw. The two protrusions preferably are substantially symmetrical on the jaw.
The protrusion preferably has at least one substantially U-shaped outer portion arranged substantially parallel to an inner wall that is substantially continuous with the outer peripheral wall of the terminal-accommodating portion.
The protrusion may comprise couplings at intervals along the peripheral direction of the outer portion to couple the outer portion and the inner wall.
The invention also relates to a connector assembly comprising the above-described connector and a mating connector that has a housing to be mounted on a panel. A lock arm preferably is provided on one of the housings and forms an inertial locking means. More particularly, the lock arm temporarily contacts an engaging portion for temporarily restricting the connection of the two housings. The contact state is canceled by pushing the housing and/or the mating housing with a force exceeding a connection resistance.
These and other features of the invention will become more apparent upon reading of the following detailed description of preferred embodiments and accompanying drawings. Even though embodiments are described separately, single features may be combined to additional embodiments. |
The word “buttery” in the title refers to croissants, which make an especially rich foundation for this golden-topped baked breakfast classic. Toasting the croissants before building the casserole adds caramelized notes that can stand up to the bits of browned sausage, sage and melted Gruyère strewn throughout. Make this the night before a special breakfast or brunch, then pop it in the oven an hour before you plan to serve it.
Featured in: A Breakfast Casserole That’s Comfort Food At Sunrise.
Heat oven to 500 degrees. Spread croissants on a large baking sheet and toast, cut side up, until golden brown, 5 to 10 minutes (watch carefully to see that they do not burn). Let cool, then tear into large bite-size pieces.
In a medium skillet over medium-high heat, warm the olive oil. Add sliced scallions and sausage meat; cook, breaking up meat with a fork, until mixture is well browned, about 5 minutes. Stir in sage, and remove from heat.
In a large bowl, toss together croissants and sausage mixture. In a separate bowl, whisk together eggs, milk, cream, 1 1/2 cups cheese, salt and pepper.
Lightly oil a 9- x 13-inch baking dish. Turn croissant mixture into pan, spreading it out evenly over the bottom. Pour custard into pan, pressing croissants down gently to help absorb the liquid. Cover pan with plastic wrap and refrigerate at least 4 hours or overnight.
When you’re ready to bake the casserole, heat oven to 350 degrees. Scatter the remaining grated cheese over the top of the casserole. Transfer to oven and bake until casserole is golden brown and firm to the touch, 45 minutes. Let stand 10 minutes. Garnish with sliced scallion tops before serving. |
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
import mmcv
import matplotlib.pyplot as plt
# The original snippet used `model` and `cfg` without defining them; the config
# and checkpoint paths below are placeholders, not files shipped with mmseg.
cfg = mmcv.Config.fromfile('configs/my_segmentor_config.py')
model = init_segmentor(cfg, 'checkpoints/my_segmentor.pth', device='cuda:0')
img = mmcv.imread('data/test_img/03_02_01450.png')
# One RGB color per class (three classes assumed here).
palette = [[128, 0, 0], [0, 128, 0], [0, 0, 128]]
model.cfg = cfg
result = inference_segmentor(model, img)
plt.figure(figsize=(8, 6))
show_result_pyplot(model, img, result, palette)
package penguin
import (
"time"
"github.com/lindsaygelle/nook"
"github.com/lindsaygelle/nook/animal"
"github.com/lindsaygelle/nook/character"
"github.com/lindsaygelle/nook/gender"
"github.com/lindsaygelle/nook/personality"
"golang.org/x/text/language"
)
var (
nobuoBirthday = nook.Birthday{
Day: 0,
Month: time.Month(0)}
)
var (
nobuoCode = nook.Code{
Value: ""}
)
var (
nobuoAmericanEnglishName = nook.Name{
Language: language.AmericanEnglish,
Value: "Nobuo"}
nobuoCanadianFrenchName = nook.Name{
Language: language.CanadianFrench,
Value: ""}
nobuoDutchName = nook.Name{
Language: language.Dutch,
Value: ""}
nobuoFrenchName = nook.Name{
Language: language.French,
Value: ""}
nobuoGermanName = nook.Name{
Language: language.German,
Value: ""}
nobuoItalianName = nook.Name{
Language: language.Italian,
Value: ""}
nobuoJapaneseName = nook.Name{
Language: language.Japanese,
Value: "のぶお"}
nobuoKoreanName = nook.Name{
Language: language.Korean,
Value: ""}
nobuoLatinAmericanSpanishName = nook.Name{
Language: language.LatinAmericanSpanish,
Value: ""}
nobuoRussianName = nook.Name{
Language: language.Russian,
Value: ""}
nobuoSimplifiedChineseName = nook.Name{
Language: language.SimplifiedChinese,
Value: ""}
nobuoSpanishName = nook.Name{
Language: language.Spanish,
Value: ""}
nobuoTraditionalChineseName = nook.Name{
Language: language.TraditionalChinese,
Value: ""}
)
var (
nobuoName = nook.Languages{
language.AmericanEnglish: nobuoAmericanEnglishName,
language.CanadianFrench: nobuoCanadianFrenchName,
language.Dutch: nobuoDutchName,
language.French: nobuoFrenchName,
language.German: nobuoGermanName,
language.Italian: nobuoItalianName,
language.Japanese: nobuoJapaneseName,
language.Korean: nobuoKoreanName,
language.LatinAmericanSpanish: nobuoLatinAmericanSpanishName,
language.Russian: nobuoRussianName,
language.SimplifiedChinese: nobuoSimplifiedChineseName,
language.Spanish: nobuoSpanishName,
language.TraditionalChinese: nobuoTraditionalChineseName}
)
var (
nobuoCharacter = nook.Character{
Animal: animal.Penguin,
Birthday: nobuoBirthday,
Code: nobuoCode,
Key: character.Nobuo,
Gender: gender.Male,
Name: nobuoName,
Special: false}
)
var (
nobuoAmericanEnglishPhrase = nook.Name{
Language: language.AmericanEnglish,
Value: "ブツブツ"}
nobuoCanadianFrenchPhrase = nook.Name{
Language: language.CanadianFrench,
Value: ""}
nobuoDutchPhrase = nook.Name{
Language: language.Dutch,
Value: ""}
nobuoFrenchPhrase = nook.Name{
Language: language.French,
Value: ""}
nobuoGermanPhrase = nook.Name{
Language: language.German,
Value: ""}
nobuoItalianPhrase = nook.Name{
Language: language.Italian,
Value: ""}
nobuoJapanesePhrase = nook.Name{
Language: language.Japanese,
Value: "ブツブツ"}
nobuoKoreanPhrase = nook.Name{
Language: language.Korean,
Value: ""}
nobuoLatinAmericanSpanishPhrase = nook.Name{
Language: language.LatinAmericanSpanish,
Value: ""}
nobuoRussianPhrase = nook.Name{
Language: language.Russian,
Value: ""}
nobuoSimplifiedChinesePhrase = nook.Name{
Language: language.SimplifiedChinese,
Value: ""}
nobuoSpanishPhrase = nook.Name{
Language: language.Spanish,
Value: ""}
nobuoTraditionalChinesePhrase = nook.Name{
Language: language.TraditionalChinese,
Value: ""}
)
var (
nobuoPhrase = nook.Languages{
language.AmericanEnglish: nobuoAmericanEnglishPhrase,
language.CanadianFrench: nobuoCanadianFrenchPhrase,
language.Dutch: nobuoDutchPhrase,
language.French: nobuoFrenchPhrase,
language.German: nobuoGermanPhrase,
language.Italian: nobuoItalianPhrase,
language.Japanese: nobuoJapanesePhrase,
language.Korean: nobuoKoreanPhrase,
language.LatinAmericanSpanish: nobuoLatinAmericanSpanishPhrase,
language.Russian: nobuoRussianPhrase,
language.SimplifiedChinese: nobuoSimplifiedChinesePhrase,
language.Spanish: nobuoSpanishPhrase,
language.TraditionalChinese: nobuoTraditionalChinesePhrase}
)
var (
Nobuo = nook.Villager{
Character: nobuoCharacter,
Personality: personality.Lazy,
Phrase: nobuoPhrase}
)
|
import React from 'react';
import {
JavascriptFile,
SvgFile,
JsonFile,
YarnLockFile,
ReadmeFile,
NodeFile,
} from '@rainbow-modules/icons';
import { UnknownFile } from '../icons';
import getFileExtension from './getFileExtension';
import isSpecialFile from './isSpecialFile';
const specialFileNamesMap: Record<string, React.ReactNode> = {
'yarn.lock': <YarnLockFile />,
'readme.md': <ReadmeFile />,
'package.json': <NodeFile />,
};
const extensionIconMap: Record<string, React.ReactNode> = {
js: <JavascriptFile />,
ts: <JavascriptFile />,
json: <JsonFile />,
svg: <SvgFile />,
};
const getIconForFile = (fileName: string): React.ReactNode => {
if (isSpecialFile(fileName)) {
return specialFileNamesMap[fileName.toLowerCase()] || <UnknownFile />;
}
const fileExtension = getFileExtension(fileName);
return extensionIconMap[fileExtension] || <UnknownFile />;
};
export default getIconForFile;
|
The Stigmatizing Effect of Tuberculosis Disease DOI: 10.4328/ACAM.20825 Received: 2021-08-19 Accepted: 2021-11-27 Published Online: 2021-12-02 Corresponding Author: Burcu Korkut, Karabk Provincial Health Directorate Community Health Center, 5000 Houses 75. Year District, 20. Cad, No:4, 78020, Karabk, Turkey. E-mail: [email protected] P: +90 537 063 16 27 Corresponding Author ORCID ID: https://orcid.org/0000-0002-0296-9144 Abstract Aim: This study aimed to measure the level of stigmatization using tuberculosis-related stigma (TRS) scale in healthy individuals and in patients with tuberculosis (TB) and to evaluate the factors affecting stigmatization. Material and Methods: This cross-sectional survey study included healthy individuals (aged 18-75 years) admitted to Community Health Centre and patients with TB (aged 18-75 years) admitted to Tuberculosis Control Dispensary in Karabuk City of Turkey between July 2021 and October 2021. A questionnaire consisting of two parts, in which the first part included questions about sociodemographic characteristics and the second part included questions of Tuberculosis-Related Stigma (TRS) scale for the assessment of level of stigmatization, was applied to both healthy individuals and patients with TB using a face-to-face survey technique. Results: The study included 360 healthy individuals (mean age: 45.46±12.90 years, female 65.3%) and 120 patients with TB (mean age, 41.15±16.42 years, male 60.8%). The mean total TRS scale score in healthy individuals was 18.60±4.18; those aged 36-53 years, those who were employed, and those living in the village had significantly higher TRS scale scores (p<0.05 for all). The mean total TRS scale score in TB patients was 19.72±3.20; those aged 18-35 years, single patients, those employed, and those with highincome level had significantly higher TRS scale scores (p<0.05 for all). Discussion: The current study revealed that the level of stigma was higher in patients with TB. Additionally, it was thought that preventing stigma in TB patients would positively affect the treatment process. |
Sounds of Ethnicity: Listening to German North America, 18501914 (review) commonly achieved recognition, adding that the recognition has been partial and limited. Gerson contrasts her book with Silenced Sextet by Carrie MacMillan, Lorraine McMullen, and Elizabeth Waterston, and The Womans Page, by Janice Fiamengo: their authors chose to approach a collective situation through chapter-length studies of individual writers. In a sense, I have done the opposite, with my chapters providing studies of collective situations in which individuals operated and which they helped to shape. (xiii) Her claim is fulfilled, though the book needs more of the compelling analysis that she provides for such major figures as Pauline Johnson and Sara Jeannette Duncan. It is good to read that the Patty Pry letters in the Halifax Novascotian in 1826 are delightfully ironic, but the point would be more compelling with supporting quotations. Gerson states that Agnes Maule Machar was Victorian Canadas outstanding female public intellectual, and her scores of thoughtful and often lengthy articles dealt with topics ranging from higher education for women to addressing the needs of the poor, but no quotations follow. Furthermore, Gerson is vague on the relation of aesthetic value to literary history. She argues that the interests of American readers encouraged most professional Canadian literary women active after 1880 to aim their sights at the popular market rather than the loftier realms of high modernism, but whatever she understands by high modernism, it would not have been available in 1880. The point is more than a slip, for the last paragraph of the introduction suggests that women writers would have flourished in Canada if it were not for high modernism. By 1918, women writing in both French and English had drawn the blueprints for the rooms they would occupy in the nations cultural edifices through the 20th century. These structures would undergo frequent renovation as tastes altered; during the mid-20th century era of high modernism, the hegemony of the mens smoking room would relegate most women to the hallways and closets from which they would burst forth in secondwave feminist writing in the 1960s. But their grandmothers had staked their right of occupancy to the parlour and the study as well as to the kitchen and the nursery, and would not be evicted. (xvi) Recent studies by Brian Trehearne, Ann Martin, Sandra Djwa, Dean Irvine, and others suggest that modernism in Canada was always more conflicted than Gerson implies. Nonetheless, this book will be a useful resource for years to come. Tracy Ware Queens University |
// src/app/layout/default/header/components/search.component.ts
import { Component, HostBinding, ViewChild, Input, OnInit, ElementRef, AfterViewInit } from '@angular/core';
import { MenuService, Menu } from '@delon/theme';
import { DomSanitizer } from '@angular/platform-browser';
import { Observable, interval, fromEvent } from 'rxjs';
import { debounce, debounceTime, map } from 'rxjs/operators';
// bypassSecurityTrustHtml converts the string to trusted HTML; otherwise the color styling is lost. Using a pipe in the binding is recommended, e.g. {{ title | html }}.
@Component({
selector: 'header-search',
template: `
<nz-input-group nzAddOnBeforeIcon="anticon anticon-search">
<input nz-input (focus)="qFocus()" (blur)="qBlur()"
placeholder="查询菜单" [nzAutocomplete]="searchResult">
</nz-input-group>
<nz-autocomplete #searchResult>
<nz-auto-optgroup *ngFor="let group of qResultGroups" [nzLabel]="group.title">
<nz-auto-option *ngFor="let option of group.children" [nzValue]="option.value" [routerLink]="option.link">
<a [innerHtml]="sanitizer.bypassSecurityTrustHtml(option.title)"></a>
</nz-auto-option>
</nz-auto-optgroup>
</nz-autocomplete>
`,
})
export class HeaderSearchComponent implements AfterViewInit {
get menus() { return this.menuService.menus; }
qResultGroups: any[] = [];
qIpt: HTMLInputElement;
query: string;
@HostBinding('class.alain-default__search-focus')
focus = false;
@HostBinding('class.alain-default__search-toggled')
searchToggled = false;
@Input()
set toggleChange(value: boolean) {
if (typeof value === 'undefined') return;
this.searchToggled = true;
this.focus = true;
setTimeout(() => this.qIpt.focus(), 300);
}
constructor(
private el: ElementRef,
private menuService: MenuService,
private sanitizer: DomSanitizer,
) {
}
ngAfterViewInit() {
this.qIpt = (this.el.nativeElement as HTMLElement).querySelector('.ant-input') as HTMLInputElement;
    // Run the search only after the user has paused typing (debounced by 400 ms)
fromEvent(this.qIpt, 'input').pipe(
debounceTime(400),
map((val: any) => val.target.value)
).subscribe(queryKey => {
this.qResultGroups = [];
this.query = queryKey;
if (this.query)
this.findMenuGroup(queryKey);
});
}
qFocus() {
this.focus = true;
this.qIpt.select();
}
qBlur() {
this.focus = false;
this.searchToggled = false;
}
  /** Search the menus. */
findMenuGroup(likeName: string) {
if (likeName) {
this.menus.forEach(menu => {
if (likeName !== this.query) {
console.log('findMenuGroup exits');
return;
}
this.findMenu(likeName, menu);
});
}
}
findMenu(likeName: string, menu: Menu, path: string = null) {
path = path ? path + ' / ' : '';
if (
menu.text.toUpperCase().indexOf(likeName.toUpperCase()) !== -1 &&
menu.link
) {
let menuGroup = this.qResultGroups.find(group => group.title === '菜单');
      // Add the menu group if it does not exist yet
if (!menuGroup) {
const index =
this.qResultGroups.push({ title: '菜单', children: [] }) - 1;
menuGroup = this.qResultGroups[index];
}
let title = menu.text;
let startIndex = 0;
const indexs: number[] = [];
      // Record the index of every match of the query within the menu text
while (
(startIndex = menu.text
.toUpperCase()
.indexOf(likeName.toUpperCase(), startIndex)) !== -1
) {
if (!indexs.find(item => item === startIndex)) {
indexs.push(startIndex);
}
startIndex += likeName.length;
}
      // Insert the highlight tags in reverse order so earlier indices remain valid
for (let i = indexs.length - 1; i >= 0; i--) {
title =
title.substring(0, indexs[i]) +
'<b style="color:#C00">' +
title.substring(indexs[i], indexs[i] + likeName.length) +
'</b>' +
title.substring(indexs[i] + likeName.length, title.length);
}
title = path + title;
menuGroup.children.push({
value: menu.text,
title: title,
link: menu.link,
});
}
    if (menu.children && menu.children.length > 0) {
menu.children.forEach(child => {
if (likeName !== this.query) {
console.log('findMenu exits');
return;
}
this.findMenu(likeName, child, path + menu.text);
});
}
}
}
|
Phytochemical properties and antioxidant activity of wild-grown and cultivated Ganoderma lucidum The most biologically active compounds of the medicinal mushroom Ganoderma lucidum can be classified into polysaccharides and terpenoids. Most of these biological compounds are thought to be associated with its antioxidant activity. Both wild-grown and cultivated G. lucidum have been in commercial demand in Indonesia during the past years. Due to their different growing conditions, wild-grown and cultivated G. lucidum may contain different levels of effective chemical components, which affect their quality and medicinal efficacy. The present study was carried out to determine the differences between wild-grown and cultivated G. lucidum, which might be useful in exploring the characteristic chemical compounds of G. lucidum with regard to its antioxidant activity. The physicochemical evaluation was performed using the gravimetric method. The phytochemical evaluation included water-soluble polysaccharide, phenolic, and terpenoid content. The antioxidant activity was evaluated by measuring the radical scavenging activity using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical assay. Cultivated G. lucidum from Godean had the highest water-soluble polysaccharide (29.86±2.42 GE, mg/g dw) and phenolic content (5.07±0.39 GAE, mg/g dw) among the studied samples, whereas cultivated G. lucidum from Gunung Kidul had the lowest water-soluble polysaccharide (21.65±2.45 GE, mg/g dw) and phenolic content (3.21±0.87 GAE, mg/g dw). Both wild-grown G. lucidum samples had higher terpenoid content than all of the cultivated G. lucidum. The cultivated sample from Godean showed the highest DPPH scavenging activity (the lowest IC50, 344.15±9.57 µg/mL) among the studied samples. Hence, the results suggest that G. lucidum contains high levels of metabolite compounds and is a potential natural source of antioxidants. Introduction Ganoderma lucidum, a polypore macrofungus growing on decomposing wood, commonly known as lingzhi, has been used as a therapy for the prevention and treatment of various diseases for centuries in Asia. Its regular consumption was believed to preserve human vitality and to promote health and longevity. A number of studies on the aqueous or ethanol extracts of G. lucidum have found that the mushroom has anti-tumor, hepatoprotective, and anti-inflammatory effects and shows activities in the immune, cardiovascular and central nervous systems. Most of these therapeutic activities are believed to be associated with its antioxidant activity. G. lucidum contains bioactive components such as terpenoids, steroids, phenols, glycoproteins and polysaccharides. Numerous publications have shown that the most pharmacologically active compounds of G. lucidum can be generally divided into triterpenes and polysaccharides. G. lucidum was introduced to Indonesia in the 1990s and its large-scale cultivation was started in 1999.
Mushroom materials The mushrooms were dried in the sun with a sufficient amount of air flow to prevent molding. The dried wildgrown G. lucidum were collected from Cianjur and Gunung Kidul, Indonesia. While, the dried cultivated G. lucidum were collected from Kaliurang, Godean, and Gunung Kidul, Indonesia. They have been taxonomically verified at Pharmaceutical Biology Division, Faculty of Pharmacy, Universitas Gadjah Mada. The voucher specimen was retained in the Research Unit for Natural Products Technology, Indonesian Institute of Sciences. The ground dried mature fruiting bodies of G. lucidum were stored in air tight container for further analysis. Preparation and extraction Samples were macerated in ethanol 70% (1: 15) at room temperature for 72 hours. After filtration, the residue was re-extracted with the same method. The filtrates were combined and dried under reduced pressure at 60° C. The resulting extracts were further used for determination of some parameters. Physicochemical evaluation Determination of physicochemical characteristics include total ash content, acid insoluble ash, water soluble extractive, and ethanol soluble extractive using gravimetric method. Determination of water-soluble polysaccharides The water-soluble polysaccharides were determined with phenol-sulfuric acid colorimetric assay as glucose with hydrolyze polysaccharides into glucose monomer. Samples (0.50 g) were extracted with hot water at 95C and hydrochloric acid (HCl) 2 M in water bath (Memmert) for two hours. Filtrates were separated by filter paper and transferred on the centrifuge tube 10 mL. Then, 1 mL of filtrates were added with 5% phenol 0.5 mL (Merck) and 2.5 mL of concentrated sulfuric acid (H2SO4) (98% v/v) (Merck). The mixtures were shaken for 2 minutes and incubated using water bath at 100 C for 15 minutes. The water-soluble polysaccharides were analyzed quantitatively by measuring the absorbance at 490 nm using UV/Vis spectrophotometer (Dynamica Halo RB-10). The blank solution contained 1 mL of distilled water, 0.5 mL of 5% phenol and 2.5 mL of H2SO4 (98% v/v). The standard glucose (Sigma, Milwaukee, WI, USA) was used as a standard solution. The results are expressed as mg glucose equivalent (GE) in gram dry weight (dw) basis. Determination of total of phenolic content The total of phenolic content of extracts was determined using Folin -Ciocalteu reagent. The reaction mixture contained 500 L (1 mg/mL) of samples, 500 L of the Folin -Ciocateu reagent (Merck), and 1.5 mL of 20% sodium carbonate (Merck). The final volume was made up to 10 mL with aquadest. After two hours of incubation in a dark at room temperature, the absorbance of samples were measured by UV/Vis spectrophotometer (Dynamica Halo RB-10) at 765 nm and a gallic acid (Merck) was used for calibration purposes, and the results are presented as milligram gallic acid equivalent (GAE) in gram dry weight (dw) basis. Determination of the total terpenoid content The determination of total terpenoids was performed according to the colorimetric method of Lin et al. with slight modification. The samples were added with 0.4 mL of vanillinglacial acetic acid (5% w/v) (Merck) and 1.0 mL of perchloric acid solution (Merck). The tubes were then placed in a water bath (Memmert) at 60°C for 45 min. Then the mixed solution was cooled and diluted with 5 mL of acetic acid solution (Merck). The absorbance was measured at 548 nm against blank solution using UV/Vis spectrophotometer (Dynamica Halo RB-10). 
The standard ursolic acid (Sigma, Milwaukee, WI, USA) was used as a standard solution. The results are expressed as milligram ursolic acid equivalent (UAE) in gram dry weight (dw) basis. DPPH Radical Scavenging Assay The antioxidant activity of mushrooms methanolic extracts was evaluated by measuring the radical scavenging activity using 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical assay. The 2 mL methanolic extracts of dried G. lucidum powder were added with DPPH (Merck) solution at different concentrations of 0.25, 0.5, 1.0, 1.5, and 2.0% w/v. Each of mixtures were then shaken and left to stand in the dark at room temperature for 30 minutes. Quercetin was used as a positive control. The absorbances were measured by UV/Vis spectrophotometer (Dynamica Halo RB-10) at 517 nm. The antioxidant activities presented as IC50 values. The IC50 values is the effective concentration of samples at which the DPPH radicals were inhibited by 50%. The IC50 value were calculated by linear regression, where the abscissa (x) represents the level concentration of tested samples and the ordinate (y) represents the average percentage of inhibitory effect. Statistical analysis All assays were carried out in triplicate. The data is represented as means ± standard deviation (SD). Analysis of variance and Duncan's test were used for determination of statistical significance and p < 0.05 were regarded as significant. Physicochemical evaluation Physicochemical evaluation is used for the preliminary identification of the natural drug to know the significance of physical and chemical properties of the substance being analyzed in terms of their observed activities and especially for the determination of their purity and quality. Physical properties are often exhibited as observables. The total ash values provide an overview of inorganic composition and other impurities in dried mushrooms. Ash content parameters related to purity and contamination. The dried mushrooms are heated at the temperature at which organic compounds and their derivatives are destructed and evaporated leaving only mineral or inorganic elements. The highest and the lowest total ash value were found to be 13.88 % of cultivated -Kaliurang and 7.63 % of cultivated -Godean, respectively. While, the highest and the lowest acid -insoluble ash value were found to be 2.11 % in cultivated -Kaliurang and 0.64 % in wild grown -Cianjur, respectively. The acid-insoluble ash value indicates the presence of silicacious substances. The determination of extractable matter refers to the amount of constituent in material which is extracted by specific solvents. Polar compounds tend to be more attracted to and are more soluble in polar solvents. It is indicating the nature of constituents of the raw material, and also helps in detecting low grade material. Water-soluble and ethanol soluble extractive values plays an important role in evaluation of crude drugs. Less extractive value indicates addition of exhausted material, adulteration or incorrect processing. The highest watersoluble and ethanol extractive value were found in cultivated -Kaliurang (71.19 % and 59.49 %, respectively). Whereas, the lowest watersoluble and ethanol extractive value were found to be 45.25 % in wild -Gunung Kidul and 4.18 % in wild grown -Cianjur, respectively. Even though wild grown -Gunung Kidul has the lowest watersoluble extractive value, it has higher ethanol soluble Phytochemical evaluation The results of phytochemicals evaluation of G. lucidum samples are presented in Tables 2. 
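The IC50 figure quoted in the DPPH assay above follows from the linear regression of percent inhibition on extract concentration described in the methods; as a sketch of that step (assuming the fitted line is adequate near 50% inhibition), the relationship is:

% Percent inhibition y fitted against concentration x by least squares:
%   y = a + b x   (a = intercept, b = slope)
% IC50 is the concentration at which y = 50:
\[
  y = a + b\,x, \qquad \mathrm{IC}_{50} = \frac{50 - a}{b}
\]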
Watersoluble polysaccharides were the most abundant metabolite compound in G. lucidum, followed by terpenoids, and phenolic, based on the samples. Cultivated G. lucidum from Godean has the highest water -soluble polysaccharides (29.86 GE, mg/g dw) and phenolic content (5.07 GAE, mg/g dw) among other studied samples. Whereas, cultivated G. lucidum from Gunung Kidul has the lowest watersoluble polysaccharides (21.65 GE, mg/g dw) and phenolic content (3.21 GAE, mg/g dw). There was very interesting that both of wild grown G. lucidum have higher terpenoids content compare to all of cultivated G. lucidum. Wild grown G. lucidum are grow spontaneously in self-maintaining populations in natural or semi-natural ecosystems (usually in the forest and oil palm plantation) and can exist independently of direct human action. Cultivated G. lucidum have arisen through human action (composting, spawning, casing, pinning) and are grown for their produce in mushroom farm. Metabolites in mushroom could be affected by physical variations, ecological conditions, terrestrial variations, genetic factors and evolution. Another research showed that production of metabolites was influenced by environmental factors such as humidity, temperature, intensity of light, supply waters, minerals, and CO2. This bioactivity was believed to be associated with the different contents of polysaccharides (b-1,3glucans) and triterpenes (ganoderic acids and others) in each of the samples. Conclusion Physicochemical, phytochemical includes watersoluble polysaccharides, phenolic, terpenoids content, and the antioxidant activity of 2 wild grown and 3 cultivated G. lucidum were evaluated. The present investigation demonstrates that the physicochemical, phytochemical and antioxidant properties of wild grown and cultivated G. lucidum were significantly different. Cultivated G. lucidum from Godean has the highest water -soluble polysaccharides and phenolic content. Both of wild grown G. lucidum have higher terpenoids content compare to all of cultivated G. lucidum. The cultivated -Godean revealed the highest DPPH scavenging activity among of the studied samples. Overall, the presence of primary and secondary metabolites in G. lucidum suggested that the mushroom has potential natural source of antioxidants. |
pub mod ipadic_builder;
|
// SetErrFile configures which writer error output goes to.
func SetErrFile(w fdWriter) {
klog.Infof("Setting ErrFile to fd %d...", w.Fd())
errFile = w
useColor = wantsColor(w)
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* @author <NAME>
* @version $Revision$
*/
package javax.swing.text;
import javax.swing.text.DefaultStyledDocument.SectionElement;
import junit.framework.TestCase;
/**
* Tests DefaultStyledDocument.SectionElement class.
*
*/
public class DefaultStyledDocument_SectionElementTest extends TestCase {
private DefaultStyledDocument doc;
private SectionElement section;
public void testGetName() {
assertSame(AbstractDocument.SectionElementName, section.getName());
}
public void testSectionElement() {
assertNull(section.getParent());
assertSame(section, section.getAttributes());
assertEquals(0, section.getAttributeCount());
assertEquals(0, section.getElementCount());
}
@Override
protected void setUp() throws Exception {
super.setUp();
doc = new DefaultStyledDocument();
section = doc.new SectionElement();
}
}
|
// problems/kickstart/2018/E/board-game/hack.cpp
#include <bits/stdc++.h>
using namespace std;
mt19937 mt(random_device{}());
uniform_int_distribution<int> dist(1, 1000000);
#define T 100
#define N 5
inline int next_lexicographical_mask(int v) {
int t = v | (v - 1);
int w = (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
assert(__builtin_popcount(v) == N);
return w;
}
void test() {
int MAX = 1 << 10;
int mask = (1 << 5) - 1;
do {
for (int i = 0; i < 10; i++) {
putchar((mask & (1 << i)) ? '1' : '0');
}
putchar('\n');
mask = next_lexicographical_mask(mask);
} while (mask < MAX);
}
int main() {
printf("%d\n", T);
for (int t = 1; t <= T; t++) {
printf("%d\n", N);
for (int p = 0; p < 2; p++) {
for (int n = 0; n < N; n++) {
int a = dist(mt);
int b = dist(mt);
int c = dist(mt);
if (n)
putchar(' ');
printf("%d %d %d", a, b, c);
}
printf("\n");
}
}
return 0;
}
|
Martinez, listed as a four-star recruit by Scout.com, said he suffered the injury during the Golden Eagles’ basketball playoff game Feb. 24.
Martinez, listed at 6 feet, 2 inches and 200 pounds, was to begin rehabilitation Thursday but said he would take it easy to make sure there are no setbacks.
“That’s the way I’m going to have to look at it,” he said. “I really don’t think this will set me back. I think I’ll be fine once I recover and do what I need to do and take my time. I really haven’t got much of a rest in quite some time, and I’m finally able to rest my body and really get 100 percent, which is a good thing.”
Martinez already had planned to skip basketball as a senior to better prepare himself for the college game in football.
Many among the array of Division I universities showing interest in Martinez have been informed of the injury, he said, and in return have told him they would honor their scholarship offers. In fact, new interest arose with Pac-12 Oregon offering him on Sunday and Louisville on Thursday.
Alabama and Mississippi of the powerful Southeastern Conference provided offers last week.
Martinez said he will take it day by day.
▪ Clovis West linebacker DJ Schramm’s first offer came from Mountain West power Boise State on Tuesday. |
import pandas as pd
import numpy as np
from collections import defaultdict
import cv2
import matplotlib.pyplot as plt
import copy
import os
from time import time
import json
label_map_path = "D:\\open_images\\4metadata\\oidv6-class-descriptions.csv"
train_label_path = "D:\\open_images\\1human\\oidv6-train-annotations-human-imagelabels.csv"
train_label_json = "D:\\open_images\\1human\\oidv6-train-annotations-human-imagelabels.json"
val_label_path = "D:\\open_images\\1human\\validation-annotations-human-imagelabels-boxable.csv"
test_label_path = "D:\\open_images\\1human\\test-annotations-human-imagelabels.csv"
raw_img_base = "D:\\open_images\\raw_image\\"
check_img_base = "D:\\open_images\\check_image\\"
raw_label_base = ""
demo_path = "C:\\Users\\Administrator\\Desktop\\"
start_time = time()
def print_time(msg):
global start_time
print("{}--- {}".format(time()-start_time, msg))
start_time = time()
def mark_label_main():
print_time("start check label")
# get label_id_to_name dict
df = pd.read_csv(label_map_path)
label_ids = np.array(df['LabelName'])
label_names = np.array(df['DisplayName'])
label_id_to_name = dict()
for idx, (label_id, label_name) in enumerate(zip(label_ids, label_names)):
label_id_to_name[label_id] = label_name
print_time("got label map dict")
# get train_imgs_label dict
train_imgs_label = json.load(open(train_label_json))
# df = pd.read_csv(train_label_path)
# image_ids = np.array(df['ImageID'])
# label_names = np.array(df['LabelName'])
# confidences = np.array(df['Confidence']) > 0
# valid_image_ids = image_ids[confidences]
# valid_label_ids = label_names[confidences]
# train_imgs_label = defaultdict(list)
# for idx, (image_id, label_id) in enumerate(zip(valid_image_ids, valid_label_ids)):
# train_imgs_label[image_id].append( label_id_to_name[label_id] )
# del image_ids, label_names, confidences, valid_image_ids, valid_label_ids
print_time("got img label list")
# manual check train00
for folder_name in ["train_00"]:
mark_one_folder(folder_name, train_imgs_label)
def mark_one_folder(folder_name, train_imgs_label):
print_time("start processing %s" % (folder_name))
folder_path = raw_img_base + folder_name + "\\"
check_path = check_img_base + folder_name + "\\"
if not os.path.exists(check_path):
os.makedirs(check_path)
print_time("create folder {}".format(check_path))
file_cnt = 0
for root, dirs, files in os.walk(folder_path):
for filename in files:
if not filename.endswith(".jpg"):
continue
fileid = filename.split(".")[0]
out_file_name = os.path.join(check_path, filename)
file_cnt+=1
if file_cnt % 5000 == 0:
print_time("get 5000 images")
try:
mark_one_file(os.path.join(root, filename), fileid, out_file_name, train_imgs_label)
except Exception as ex:
print(ex)
import pdb
pdb.set_trace()
pass
print("end one folder")
def mark_one_file(filename, fileid, out_file_name, train_imgs_label):
demo_img_path = filename
demo_img_id = fileid
demo_img = cv2.imread(demo_img_path)
demo_img_labels = train_imgs_label[demo_img_id]
demo_img_shape = demo_img.shape # y,x,c
# print(demo_img_labels)
font = cv2.FONT_HERSHEY_SIMPLEX
demo_img_text = copy.deepcopy(demo_img)
for idx, one_label in enumerate(demo_img_labels):
posy = 50 + idx * 30
_ = cv2.putText(demo_img_text, one_label, (50, posy), font, 1, (255, 255, 255), 2)
cv2.imwrite(out_file_name, demo_img_text)
# plt.title("demo")
# plt.imshow(demo_img)
# plt.show()
# print("--------")
pass
if __name__ == "__main__":
    mark_label_main()
|
Associations of prenatal and postnatal growth with insulin-like growth factor-I levels in pre-adolescence Background Rapid pre - and postnatal growth have been associated with later life adverse health outcomes, which could implicate (as a mediator) circulating insulin-like-growth-factor I (IGF-I), an important regulator of growth. We investigated associations of prenatal (birth weight and length) and postnatal growth in infancy and childhood with circulating IGF-I measured at 11.5 years of age. Methods We analysed 11.5-year follow-up data from 17,046 Belarusian children who participated in the Promotion of Breastfeeding Intervention Trial (PROBIT) since birth. Results Complete data were available for 5422 boys and 4743 girls (60%). We stratified the analyses by sex, as there was evidence of interaction between growth and sex in their associations with IGF-I. Weight and length/height velocity during childhood were positively associated with IGF-I at 11.5 years; associations increased with age at growth assessment and were stronger for length/height gain than for weight gain. The change in internal run-normalized IGF-I z-score at 11.5 years was 0.038 (95% CI -0.004,0.080) per standard deviation (SD) increase in length gain at 0-3 months amongst girls and 0.025 (95% CI - 0.011,0.060) amongst boys, increasing to 0.336 (95% CI 0.281,0.391;) and 0.211 (95% CI 0.165,0.256) for girls and boys, respectively, for growth during 6.5-11.5 years. Conclusion Postnatal growth velocities in childhood are positively associated with levels of circulating IGF-I in pre-adolescents. Future studies should focus on assessing whether IGF-I is on the causal pathway between early growth and later health outcomes, such as cancer and diabetes. |
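For readers unfamiliar with the outcome scale, the "internal run-normalized IGF-I z-score" reported above is, under the usual definition of a z-score (an assumption here, since the abstract does not spell it out), computed per assay run as:

\[
  z_{ij} = \frac{\mathrm{IGF\text{-}I}_{ij} - \bar{x}_{j}}{s_{j}}
\]

where \( \bar{x}_{j} \) and \( s_{j} \) are the mean and standard deviation of IGF-I within run \( j \), so the reported effect sizes are in within-run standard-deviation units.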
The effects of genotype and environment on selected traits of oat grain and flour The purpose of the investigation was to study the effects of variety properties of oat cultivars and environmental conditions on physical traits and chemical composition of grain and flour. Nine oat cultivars had been grown in experimental plots at two experimental stations, in Jelenia Góra and Bobrowniki. The samples were collected in two harvest years. As it has been found, the genetic factors affected physical traits of the grain. The chemical composition of oat grain depended to a large extent on the weather conditions during the growing season. The genetic factors affected only total protein content of the oat grain. The grain of the oat cultivars under investigation was high in total protein but low in starch. Its proteolytic and amylolytic activities were at average levels. Total protein and pentosan content as well as proteolytic activity of the oat flour were lower than those of grain, but starch content and the falling number were higher.
package domain.basic.type;
/**
* Type
*/
public interface Type {
}
|
"Currently, through Enterprise Florida, the state offers businesses the opportunity to grow into new markets through export counseling, grants to small businesses to participate in export trade missions, and offers international trade leads to Florida companies. Gov. Scott proposes doubling funding from the current year budget for these programs to help more small business expand internationally."
By Amy Sherman on Monday, November 26th, 2018 at 3:33 p.m.
Gov. Rick Scott promised during his 2014 re-election campaign to double funding to help more small businesses expand internationally.
Specifically, Scott promised to double the budget for Enterprise Florida programs that help established companies export their products to international markets. Two programs, Target Sector Trade Grants and Export Plan Marketing Assistance, were launched in 2011 as part of Enterprise Florida's export services.
Prior to 2013-14, federal grants between $300,000 and $500,000 paid for these types of programs.
As Scott campaigned for a second term, the Legislature raised the funding level to $1 million for 2014-15. That means the first budget Scott would have signed after he won re-election would have been for the 2015-16 fiscal year (the current budget), in which the program allotment remained $1 million.
To secure this promise, he needed a level of $2 million. He didn't get it.
Enterprise Florida's program for expansion grants has been funded by $1 million from the state Legislature, at Scott's request. But it stayed at the $1 million level throughout his second term.
By Joshua Gillin on Tuesday, December 8th, 2015 at 5:14 p.m.
Gov. Rick Scott made headlines ahead of the 2016 legislative session for asking lawmakers to give Enterprise Florida millions to attract businesses to the state. But a much smaller campaign promise to help exporters has largely flown under the budget radar.
Legislators have questioned why Scott, who serves as chairman of the public-private partnership, wants an infusion of $250 million for a new fund to help lure out-of-state businesses when Enterprise Florida already has some unused millions in escrow accounts.
Scott is lobbying hard for that cash, but he also promised during his 2014 reelection campaign to double the budget for Enterprise Florida programs that help established companies export their products to international markets. Two programs, Target Sector Trade Grants and Export Plan Marketing Assistance, were launched in 2011 and are part of Enterprise Florida's export services.
The Target Sector Trade Grants program gives eligible small businesses grants to help pay for exhibits at international trade shows. The goal is to give Florida companies exposure to potential foreign clients. An Enterprise Florida spokesman told PolitiFact Florida that 204 grants have been given out since the program started in 2011.
The Export Marketing Plan program helps small manufacturers and tech companies develop a strategy for exporting products. Enterprise Florida charges these companies $500 to develop an export marketing plan, a process than can normally cost thousands. The partnership then helps identify an initial international market to enter and cultivate clients. The program has created plans for 59 Florida businesses.
Enterprise Florida added that it has given out 151 Gold Key/Matchmaker grants, which are follow-up grants to build relationships between Florida exporters and pre-screened international customers.
During the 2013-14 fiscal year, which runs from July 1 to June 30, the programs had a $350,000 budget. The Legislature raised that to $1 million for 2014-15. The first budget Scott would have signed after he won re-election would have been for the 2015-16 fiscal year (the current budget), in which the program allotment remained $1 million.
For the purposes of rating this on the Scott-O-Meter, we're looking for the budget to double to $2 million.
Scott has not asked for an increase in his recommended budget for 2016-17, again requesting $1 million. Legislators have the last word, however, and there's no guarantee they will set aside even that much when they hammer out a final budget during the 2016 session, which starts in January.
So where does that leave this promise?
Scott promised to double the budget for two Enterprise Florida export counseling programs, but his most recent budget request would keep funding the same. He will have two more years to attempt to double the budget to $2 million. |
package com.silverpop.api.client.command;
import com.silverpop.api.client.ApiCommand;
import com.silverpop.api.client.XmlApiProperties;
import com.silverpop.api.client.command.elements.RowsElementType;
import com.silverpop.api.client.result.InsertUpdateRelationalTableResult;
import com.thoughtworks.xstream.annotations.XStreamAlias;
@XmlApiProperties("InsertUpdateRelationalTable")
public class InsertUpdateRelationalTableCommand implements ApiCommand {
@XStreamAlias("TABLE_ID")
protected long tableid;
@XStreamAlias("ROWS")
protected RowsElementType rows;
public InsertUpdateRelationalTableCommand()
{
this.rows = new RowsElementType();
}
@Override
public Class<InsertUpdateRelationalTableResult> getResultType() {
return InsertUpdateRelationalTableResult.class;
}
public long getTableid() {
return tableid;
}
public void setTableid(long tableid) {
this.tableid = tableid;
}
public RowsElementType getRows() {
return rows;
}
public void setRows(RowsElementType rows) {
this.rows = rows;
}
}
|
The effect of a cane or forearm crutch on some mechanical parameters at the contralateral hip was investigated. Five healthy individuals were photographed on a walkway while using an instrumented cane/crutch. Geometric data for an original mathematical model were taken from these photographs and an X-ray film of the pelvis and femur. The resultant force and pressure at the hip joint, the shear force in the femoral neck and, in one case, the bending moment in the trochanteric region were computed and expressed as a function of the load applied to the walking aid. All values decreased as this load increased. The reduction of force and pressure at the hip joint achieved by applying a load equal to 15% of body weight is superior to that resulting from common operative procedures done for the same purpose. Differences between the effects of a one-sided cane and a forearm crutch are negligible.
//
// CPUPool.hpp
// MNN
//
// Created by MNN on 2018/07/15.
// Copyright © 2018, Alibaba Group Holding Limited
//
#ifndef CPUPool_hpp
#define CPUPool_hpp
#include "CPUBackend.hpp"
namespace MNN {
class CPUPool : public Execution {
public:
CPUPool(Backend *b, const Pool *parameter);
virtual ~CPUPool() = default;
virtual ErrorCode onResize(const std::vector<Tensor *> &inputs, const std::vector<Tensor *> &outputs) override;
virtual ErrorCode onExecute(const std::vector<Tensor *> &inputs, const std::vector<Tensor *> &outputs) override;
private:
const Pool *mParameter;
std::function<void()> mFunction;
};
} // namespace MNN
#endif /* CPUPool_hpp */
|
/////////////////////////////////////////////////////////////////////////////////////////////
// Copyright 2017 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/////////////////////////////////////////////////////////////////////////////////////////////
#ifndef __CPUTFONTDX11_H__
#define __CPUTFONTDX11_H__
#include "CPUT.h"
#include "CPUTRefCount.h"
#include "CPUTFont.h"
class CPUTTextureDX11;
class CPUTFontDX11 : public CPUTFont
{
protected:
~CPUTFontDX11(); // Destructor is not public. Must release instead of delete.
public:
CPUTFontDX11();
static CPUTFont *CreateFont( cString FontName, cString AbsolutePathAndFilename);
CPUTTextureDX11 *GetAtlasTexture();
ID3D11ShaderResourceView *GetAtlasTextureResourceView();
private:
CPUTTextureDX11 *mpTextAtlas;
ID3D11ShaderResourceView *mpTextAtlasView;
CPUTResult LoadGlyphMappingFile(const cString fileName);
};
#endif // #ifndef __CPUTFONTDX11_H__ |
Still Seeking Recognition: Mapuche Demands, State Violence, and Discrimination in Democratic Chile As recent research demonstrates that recognition-based reforms have not addressed many of the substantive demands of indigenous movements, many scholars claim that the movements have moved beyond recognition to focus now on the effects of neoliberal capitalism and material claims. However, setting aside formal recognition of indigenous peoples as a focus of analytic concern may have the unintended effect of drawing attention away from two issues: first, recognition continues to be a pertinent concern for some indigenous movements; and second, recognition and redistribution are understood by many indigenous people as inherently linked. Our analysis focuses on the case of the Mapuche in Chile, showing that they are not after recognition or redistribution alone; their ongoing struggle for justice entails demands that lie at the intersection of the two. The import as well as the contested character of recognition can be seen in state policy, the Mapuche's own demands, local elites' narratives of exclusion, and the transborder goals of the movement. We argue that combating the ongoing harms faced by indigenous peoples requires developing an understanding of recognition and redistribution that views them as inherently linked rather than as different types of claims.
Editor's note: This is the first in a two-part series on smart storage strategies. Read the second article, "Defeating the dumpster divers."
Information technology executives may be tempted to keep buying more storage devices as online data continues to accumulate at alarming rates. However, adding new capacity to servers generating the most storage demand won’t necessarily solve the problem.
The use of available space on server-attached storage is notoriously poor. And the need to manage myriad storage systems only adds to the expense and inefficiency of the storage infrastructure.
Storage consolidation, which can be done by using fewer but larger storage devices or centralizing data in a storage-area network (SAN), provides an alternative strategy that is putting many government agencies on the path of cost avoidance.
Clearly something must be done. And it’s a foregone conclusion that storage costs will continue to increase with the growth of data, said Rich Harsell, regional director at GlassHouse Technologies’ Federal Division. But with consolidation, an enterprise has an opportunity to at least keep the spending in check. Harsell estimated that a large organization might be able to scale back future storage costs by as much as 30 percent through consolidation.
But projects must be carefully planned — and pitfalls avoided — for organizations to reap the benefits of consolidation. Here are the steps that experts recommend to make a consolidation project go smoothly.
Industry consultants advise that a storage consolidation project should always start with a thorough understanding of the existing environment. It’s difficult to plan for the future when the present is not well understood.
“Typically, one of the biggest hurdles is that [government agencies] have no idea of what they have,” said Larry Fondacaro, senior solutions architect at integrator Emtec Federal.
That situation can be rectified by an analysis of server-attached storage. Armed with the knowledge of how much storage is in each server, agencies can determine which servers are good candidates for consolidation, said Howard Weiss, field solutions team manager at CDW.
For example, servers with large databases or file servers with considerable back-end storage would be candidates for consolidation. A small application server that stores nothing beyond its operating system doesn’t belong on a consolidated SAN, Weiss said.
“You don’t want to put every server on a SAN,” he said.
Meanwhile, hardware inventories also lead to a better understanding of utilization rates, which is helpful for setting future goals. For example, officials at the University of New Hampshire started a consolidation project last year to corral storage on a single SAN.
Before beginning the project, the school discovered that its average storage utilization in its direct-attached environment was about 40 percent, said Joe Doucet, director of the Enterprise Computing Group at the university’s Computing and Information Services department. Utilization was also inconsistent; some server-attached storage was at maximum capacity, while other devices used only 30 percent of the available space.
Now, the university aims for a 70 percent utilization level, an objective it plans to reach by the end of 2008, Doucet said.
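That kind of utilization assessment is easy to automate once the inventory exists. The short Python sketch below computes aggregate and per-server utilization from a hypothetical inventory; the server names and figures are illustrative assumptions, not drawn from the university's actual environment.

# Illustrative only: per-server capacity and used space in gigabytes (hypothetical figures).
inventory = {
    "db-server-1":   {"capacity_gb": 1200, "used_gb": 1100},
    "file-server-1": {"capacity_gb": 2000, "used_gb": 600},
    "app-server-1":  {"capacity_gb": 500,  "used_gb": 150},
}

total_capacity = sum(s["capacity_gb"] for s in inventory.values())
total_used = sum(s["used_gb"] for s in inventory.values())

print("Aggregate utilization: %.0f%%" % (100.0 * total_used / total_capacity))
for name, s in sorted(inventory.items()):
    print("  %-14s %3.0f%%" % (name, 100.0 * s["used_gb"] / s["capacity_gb"]))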
Organizations must know the performance requirements of their applications and user expectations. Storage performance is measured in I/Os per second, the amount of data transferred on and off a hard drive.
An organization may be managing simply by relying on the speed of the five drives in a typical server, each of which is capable of 100 to 150 I/Os per second. Those rates are much lower than those of most low-end SANs, Weiss said.
“A lot of customers buy the faster SAN because they think they need it,” he said. The lesson is don’t overpay for performance you don’t need, unless you can make a solid case for needing the extra capacity in the future.
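As a rough sketch of that comparison — the drive count, per-drive rates and workload figure below are assumptions, not measurements from any particular environment:

# Back-of-the-envelope IOPS estimate for direct-attached storage (assumed figures).
drives_per_server = 5
iops_per_drive_low, iops_per_drive_high = 100, 150

print("Direct-attached estimate: %d-%d IOPS"
      % (drives_per_server * iops_per_drive_low, drives_per_server * iops_per_drive_high))

# If the measured workload peak stays below this range, a faster SAN mostly buys unused headroom.
measured_peak_iops = 450   # hypothetical workload measurement
print("Within direct-attached range:", measured_peak_iops <= drives_per_server * iops_per_drive_high)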
An agency that determines its current storage costs can calculate the benefits of consolidation and build a better business case. Knowledge of the cost baseline is also handy for evaluating alternative storage solutions. However, agencies often lack this knowledge because it’s hard to collect.
“The organizations we’ve dealt with don’t have any cost models in place, and hence the only costs that they can put a finger on are acquisition costs,” Harsell said.
Meanwhile, the initial purchase price of storage provides only part of the picture. Harsell estimates that 20 percent to 30 percent of a typical storage budget is tied to acquisition. Other significant costs include salaries, data center floor space, power and cooling, and software license fees.
“Frequently, the comparison between what we are getting rid of and what we are bringing into the data center is pretty narrowly looked at in terms of total cost of ownership,” said Bob Wambach, senior director of storage product marketing at EMC.
Wambach ranked labor and utilities as the top two storage-cost items. Acquisition, over the life of a storage solution, can end up as the No. 3 cost source, he said.
Agencies sometimes overlook the storage-related work server and networking employees do, and they may fail to include their work in calculating overall storage costs, Harsell said.
By getting a detailed grasp of cost, a storage shop can become an in-house service provider if it chooses and can charge other departments for the storage capacity they use.
Harsell said GlassHouse encourages its clients to adopt Information Technology Infrastructure Library-compliant cost models. ITIL is a set of best practices for managing IT services.
It’s not enough to account for server-based storage. Agencies need to understand the nature of the data they store.
“Most people I talk to really don’t know the data that is in storage on the servers,” Weiss said.
The result is inefficient and expensive storage practices. For example, an organization that doesn’t identify static data — for example, JPEGs or PDFs — stands to make thousands of copies of unchanging files during years of back-up sessions, Weiss said. But an organization that flags static data can design a SAN with an area for fixed-content storage, he said.
Agencies that take the time to sort their data can usually take advantage of tiered-storage architectures. In such arrangements, technicians assign data to the most cost-effective storage platform based on the data’s criticality, or in the case of fixed-content storage, its changeability.
Critical data goes on the top tier, which is typically the highest performing disk storage.
Noncritical data may reside in the bottom tier, a tape- or disk-based archive. Some organizations operate an intermediate tier for data of middling value. This tier usually consists of storage built around lower-cost devices, such as Serial ATA (SATA) disk arrays.
Fondacaro said Emtec Federal uses storage resource management tools to classify data for customers. Data may be classified by application, file type and most recent access time.
That arrangement helps customers decide whether a particular piece of data is critical and needs to be housed in high-end production storage or whether it is infrequently accessed and can be housed in near-line or archival storage.
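A simplified sketch of that kind of classification logic is shown below; the thresholds, file types and tier names are illustrative assumptions, not Emtec Federal's actual rules.

def storage_tier(file_type, days_since_access, is_critical):
    """Toy tiering rule: criticality first, then fixed-content types and age."""
    if is_critical:
        return "tier-1 (high-performance disk)"
    if file_type in {"jpg", "pdf"} or days_since_access > 365:
        return "tier-3 (archive / fixed content)"
    if days_since_access > 90:
        return "tier-2 (lower-cost SATA)"
    return "tier-1 (high-performance disk)"

print(storage_tier("docx", days_since_access=10, is_critical=False))   # tier-1
print(storage_tier("pdf", days_since_access=400, is_critical=False))   # tier-3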
Lt. Col. C. J. Wallington, division chief of advanced technologies at the Army’s Program Executive Office for Enterprise Information Systems, said he has identified a storage consolidation need for small groups of users. The division is testing an EqualLogic SAN for tactical use.
On the opposite end of the scale, SANs that already provide some degree of consolidation are evolving into more dense configurations, Wambach said.
The same holds true for network-attached storage devices, he said. Those devices consolidate file-level storage, while SANs aggregate block-level storage associated with databases.
“The consolidation trend is to build bigger boxes,” Wambach said.
Consolidation perks
A successful storage consolidation project can produce a variety of savings. Here are three of the most common.
Reducing capital expenses ranks high as a reason for consolidating storage. Consolidation begets better use of storage resources, which means organizations don’t have to pay as much for storage.
Howard Weiss, field solutions team manager at CDW, said a customer may equip a small server with eight hard drives but use only 25 percent of the storage space. With drives costing $300 to $400 apiece, an idle storage unit is a prime money-waster.
But idle drives are only part of the problem. In a direct-attached storage environment, excess capacity on one server can’t be shared elsewhere. So as customers implement new servers and applications, they purchase more server-attached storage.
In comparison, pooling storage resources and sharing them on a network breaks the cycle of equipping every server with storage. “The cost savings can be quite enormous,” Weiss said.
Consolidating reduces operational and capital costs. Fewer drives draw less power. “Beside the processor, those hard drives are one of the most-power hungry things inside of the server,” Weiss said.
Reducing the number of storage devices means less equipment to manage, which helps keep operational and labor expenses under control.
“One of the key reasons [to consolidate] is to get a better handle on operational costs and the people costs,” said Arun Taneja, president of the Taneja Group, a storage consulting firm.
The University of New Hampshire targets improved utilization and staff savings with its ongoing project to corral storage on a single storage-area network (SAN). On the workforce side, consolidation has let the university triple its storage capacity without adding staff.
“The ongoing cost of staff is one of the hardest things to deal with in a university,” said Tom Franke, the university’s chief information officer.
Sometimes simply parting with old and difficult-to-maintain equipment is a benefit in itself.
Fort Wayne, Ind., for example, is disposing of servers and associated storage that it began using 20 years ago. Clifford Clarke, the city’s chief information officer, said the old equipment needs to be refreshed because it can support only a limited amount of direct-attached storage units. The city has begun migrating to an EMC-based SAN.
Consolidation pitfalls
Agencies that pursue a storage consolidation strategy should be prepared for trouble spots on the way to greater efficiency. “Clearly, putting all your eggs in one basket creates its own challenges,” said Arun Taneja, president of the Taneja Group.
Here are three potential hazards for which being prepared can prevent trouble.
A disk drive failure in a direct-attached storage system typically doesn’t affect many users. But having a shared-storage array go down could affect many people, and organizations don’t always consider that possibility.
Redundancy is the main theme here. Taneja said organizations can deploy arrays with dual Redundant Array of Independent Disks controllers and install dual host-bus adapters on servers. They can build redundant cabling and storage switches into the storage architecture.
Storage experts advise organizations to have a disaster recovery strategy in place before they consolidate their storage. Organizations can use various snapshot products to make point-in-time copies of their data. They can also opt for remote replication, a process in which data housed on a production array is continuously copied to an off-site array.
The choice will depend on the expectations of users for whom recovery time and recovery points are critical. Sean Derrington, director of storage management at Symantec’s data center management group, advises customers to create various tiers of service that can satisfy those expectations.
When storage is consolidated, remote users no longer have the burden of maintaining storage. But they might gain a new headache: having to wait a bit longer to gain access to their data.
Clifford Clarke, Fort Wayne’s chief information officer, identified remote connectivity problems as an unintended consequence of the Indiana city’s consolidated file storage project. The city, which has more than 30 remote sites, found that it was taking a long time for information to move across the wide-area network. File access times stretched to a minute.
The city expects to fix the problem by installing Cisco Systems’ Wide Area Application Services solution. Clarke said the devices reduce access times by 40 percent to 60 percent and provide local-area network-like performance. |
On the stability of the massive scalar field in Kerr space-time The current early stage in the investigation of the stability of the Kerr metric is characterized by the study of appropriate model problems. Particularly interesting is the problem of the stability of the solutions of the Klein-Gordon equation, describing the propagation of a scalar field in the background of a rotating (Kerr-) black hole. Results suggest that the stability of the field depends crucially on its mass $\mu$. Among others, the paper provides an improved bound for $\mu$ above which the solutions of the reduced, by separation in the azimuth angle in Boyer-Lindquist coordinates, Klein-Gordon equation are stable. Finally, it gives new formulations of the reduced equation, in particular, in form of a time-dependent wave equation that is governed by a family of unitarily equivalent positive self-adjoint operators. The latter formulation might turn out useful for further investigation. On the other hand, it is proved that from the abstract properties of this family alone it cannot be concluded that the corresponding solutions are stable. Introduction Kerr space-time is the only possible vacuum exterior solution of Einstein's field equations describing a stationary, rotating, uncharged black hole with non-degenerate event horizon and is expected to be the unique, stationary, asymptotically flat, vacuum space-time containing a non-degenerate Killing horizon. Also, it is expected to be the asymptotic limit of the evolution of asymptotically flat vacuum data in general relativity. An important step towards establishing the validity of these expectations is the proof of the stability of Kerr space-time. In comparison to Schwarzschild space-time, where linearized stability has been proved, this problem is complicated by a lower dimensional symmetry group and the absence of a Killing field that is everywhere time-like outside the horizon. For instance, the latter is reflected in the fact that energy densities corresponding to the Klein-Gordon field in a Kerr gravitational field have no definite sign. This absence complicates the application of methods from operator theory and of so called "energy methods" that are both employed in estimating the decay of solutions of hyperbolic partial differential equations. 1 On the other hand, two facts are worth noting. For this, note that in the following any reference to coordinates implicitly assumes use of Boyer-Lindquist coordinates. First, in addition to its Killing vector fields that generate one-parameter groups of symmetries (isometries), Kerr space-time admits a Killing tensor that is unrelated to its symmetries. Initiated by his groundbreaking work on the complete separability of the Hamilton-Jacobi equation in a Kerr background, Carter discovered that an operator that is induced by this Killing tensor commutes with the wave operator. On the other hand, Carter's operator contains a second order time derivative. An analogous operator has been found for the operator governing linearized gravitational perturbations of the Kerr geometry. A recent study finds another such 'symmetry operator' which only contains a first order time derivative and commutes with a rescaled wave operator. Differently to Carter's operator, this operator is analogous to symmetry operators induced by one-parameter group of isometries of the metric, in that it induces a mapping in the data space that is compatible with time evolution, and therefore describes a true symmetry of the solutions. 
It is likely that an analogous operator can be found for a rescaling of the linearized operator governing gravitational perturbations of the Kerr geometry. In case of existence, it should facilitate the generalization to a Kerr background of the Regge-Wheeler-Zerilli-Moncrief (RWZM) decomposition of fields on a Schwarzschild background which in turn should greatly simplify the analysis of the stability of Kerr space-time. Second, there is a Killing field that is time-like in an open neighborhood of the event horizon given by : is time-like in the ergoregion, see Lemma 2.2. On the other hand, ∂ t itself is spacelike in the ergoregion, null on the stationary limit surface and time-like outside. For these reasons, at least for a satisfying (1.0.2), it might be possible to "join" energy inequalities belonging to the Killing fields by and ∂ t. The discussion of the stability of the Kerr black hole is in its early stages. The first intermediate goal is the proof or disproof of its stability under "small" perturbations. As mentioned before, the linearized stability of the Schwarzschild metric has already been proved. In that case, by using the RWZM decomposition of fields in a Schwarzschild background, the question of the stability can be completely reduced to the question of the stability of the solutions of the wave equation on Schwarzschild space-time. For Kerr space-time, a similar reduction is not known. If such reduction exists, there is no guarantee that the relevant equation is the scalar wave equation. It is quite possible that such equation contains an additional (even positive) potential term that, similar to the potential term introduced by a mass of the field, could result in instability of the solutions. Second, an instability of a massive scalar field in a Kerr background could indicate instability of the metric against perturbations by matter which generically has mass. If this were the case, even a proof of the stability of Kerr space-time could turn out as a purely mathematical exercise with little relevance for general relativity. Currently, the main focus is the study of the stability of the solutions of the Klein-Gordon field on a Kerr background with the hope that the results lead to insight into the problem of linearized stability. Although the results of this paper also apply to the case that = 0, its main focus is the case of Klein-Gordon fields of mass > 0. Quite differently from the case of a Schwarzschild background, the results for these test cases suggest an asymmetry between the cases = 0 and = 0. In the case of the wave equation, i.e., = 0, results point to the stability of the solutions, whereas for = 0, there are a number of results pointing in the direction of instability of the solutions under certain conditions. In particular, unstable modes were found by the numerical investigations by Furuhashi and Nambu for M ∼ 1 and (a/M ) = 0.98, by Strafuss and Khanna for M ∼ 1 and (a/M ) = 0.9999 and by Cardoso and Yoshida for M 1 and 0.98 (a/M ) < 1. The analytical study by Hod and Hod finds unstable modes for M ∼ 1 with a growth rate which is four orders of magnitude larger than previous estimates. On the other hand, proves that the restrictions of the solutions of the separated, in the azimuthal coordinate, Klein-Gordon field (RKG) are stable for Here m ∈ Z is the 'azimuthal separation parameter' and r + := M + √ M 2 − a 2. So far, this has been the only mathematically rigorous result on the stability of the solutions of the RKG for > 0. 
This result contradicts the result of Zouros and Eardley, but is consistent with the other results above. In addition, there is the numerical result by Konoplya and Zhidenko, which confirms the result of Beyer, but also finds no unstable modes of the RKG for M ≪ 1 and M ∼ 1. Among others, this paper improves the estimate (1.0.3). It is proved that the solutions of the RKG are stable for satisfying Further, it gives new formulations for RKG, in particular, in form of a time-dependent wave equation that is governed by a family of unitarily equivalent positive self-adjoint operators. The latter might turn out useful in future investigations. On the other hand, it is proved that from the abstract properties of this family alone it cannot be concluded that the corresponding solutions are stable. The remainder of the paper is organized as follows. Section 2 gives the geometrical setting of the discussion of the solutions of the RKG and a proof of the above mentioned property of the Killing field. Section 3 gives basic properties of operators read off from the equation, including some new results. These properties provide the basis for a formulation of the initial-value problem for the equation in Section 4 which is less dependent on methods from semigroups of operators than that of. Section 4 also contains the improved result on the stability of the solutions of RKG, a formulation of the RKG in terms of a timedependent wave equation and the above mentioned counterexample. Finally, the paper concludes with a discussion of the results and 2 appendices that contain proof of results that were omitted in the main text to improve the readability of the paper. The Geometrical Setting In Boyer-Lindquist coordinates 1, (t, r,, ) : → R 4, the Kerr metric g is given by 1 If not otherwise indicated, the symbols t, r,, denote coordinate projections whose domains will be obvious from the context. In addtion, we assume the composition of maps, which includes addition, multiplication where g tt := 1 − 2M r, g t := 2M ar sin 2, g rr := − ∆, g := −, M is the mass of the black hole, a ∈ is the rotational parameter and ∆ := r 2 − 2M r + a 2, := r 2 + a 2 cos 2, In these coordinates, the reduced Klein-Gordon equation corresponding to m ∈ Z, governing solutions : → C of the form where u : s → C, for all t ∈ R, ∈ (−, ), (r, ) ∈ s, is given by for every f ∈ C 2 ( s, C) and ≥ 0 is the mass of the field. In particular, note that b defines a real-valued bounded function on s which positive for m ≥ 0 and negative for m ≤ 0. For this reason, it induces a bounded self-adjoint (maximal multiplication) operator B on the weighted L 2 -space X, see below, which is positive for m ≥ 0 and negative and so forth, always to be maximally defined. For instance, the sum of two complex-valued maps is defined on the intersection of their domains. Finally, we use Planck units where the reduced Planck constant, the speed of light in vacuum c, and the gravitational constant, all have the numerical value 1. for m ≤ 0. Further, D 2 r is singular since the continuous extensions of the coeffcients of its highest (second) order radial derivative vanish on the horizon {r + } . In particular, the following proves that the Killing field Proofs are given in Appendix 1. has a continuous extension to s. This extension is positive on ∂ s if and only if is time-like precisely on Proof. See Appendix 1. 
Basic Properties of Operators in the Equation In a first step, we represent (2.0.4) as a differential equation for an unknown function u with values in a Hilbert space. For this reason, we represent formal operators present in (2.0.4) as operators with well-defined domains in an appropriate Hilbert space and, subsequently, study basic properties of the resulting operators. Theorems 3.5, 3.6 provide new results. Definition 3.1. In the following, X denotes the weighted L 2 -space X defined by Further, B is the bounded linear self-adjoint operator on X given by for every f ∈ X. Note that B is positive for m ≥ 0 and negative for m ≤ 0. Remark 3.2. We note that, as consequence of the fact that B ∈ L(X, X) is self-adjoint, the operator where exp denotes the exponential function on L(X, X), see, e.g., Section 3.3 in, is unitary for every t ∈ R and coincides with the maximal multiplication operator by the function exp((it/2)b). to consist of all f ∈ C 2 ( s, C) ∩ X satisfying the conditions a), b) and c): Lemma 3.4. A 0 is a densely-defined, linear, symmetric and essentially self-adjoint operator in X. In addition, the closure 0 of A 0 is semibounded with lower bound Proof. See Lemma 2 and Theorem 4 in. Proof. For this, we use the notation of. According to the proof of Theorem 4 of, the underlying sets of X andX := L 2 ( s, (r 4 /∆) sin )) are equal; and the norms induced on the common set are equivalent, the maximal multiplication operator T r 4 /(∆) by the function r 4 /(∆) is a bijective bounded linear operator on X that has a bounded linear inverse; the operator H, related to A 0 by is a densely-defined, linear, symmetric, semibounded and essentially self-adjoint operator inX, and D is contained in the (coinciding) domains of A 0 and H. Further, it has been shown that (H − )D is dense inX for <, where := −m 2 a 2 /r 4 + is a lower bound for H. From this follows that D is a core for the closureH of H. For the proof, let SinceH − is bijective with a bounded inverse, the latter implies that f 1, f 2,... is convergent to f and also that lim →∞ Hf =Hf. Hence, we conclude thatH coincides with the closure of H| D. Since T r 4 /(∆), T −1 r 4 /(∆) ∈ L(X, X), from the latter also follows that 0 coincides with the closure of A 0 | D. Theorem 3.6. The operator 0 coincides with the Friedrichs extension of the restriction of A 0 to C ∞ 0 ( s, C). Proof. As a consequence of Theorem 3 in, it follows that D is contained in the domain of the Friedrichs extension In this connection, note that the addition of a multiple of the identity operator 'does not affect' the Friedrichs extension of an operator. 1 Since D is a core for A 0, from this follows that A F ⊃ 0 and hence, since A F is in particular symmetric and A 0 is self-adjoint, that A F = 0. Lemma 3.7. A : is a densely-defined, linear and positive self-adjoint operator in X. Proof. That A is a densely-defined, linear and self-adjoint operator in X is a consequence of Theorem 3.4 and the Rellich-Kato theorem. For the latter, see e.g. Theorem X.12 in, Vol. II. The positivity of A is a simple consequence of the fact that Formulation of an Initial Value Problem In the following, we give an initial value formulation for equations of the type of (2.0.4) whose possibility is indicated by Theorem 4.11 in, see also Theorem 5.4.11 in. Here, we give the details of such formulation, including abstract energy estimates that provide an independent basis for the estimate (1.0.3) and also for its improvement (4.0.13) below. 
Specialization of the abstract formulation to X given by (3.0.6), A := 0 − C, B given by (3.0.7) and C := −( + ) for some > 0, provides an initial-value formulation for (2.0.4) on every open interval I of R along with quantities that are conserved under time evolution. Note that in this case A + C = 0. For convenience, the proofs of the following statements are given in the Appendix 2. Assumption 4.1. In the following, let (X, | ) be a non-trivial complex Hilbert space and A be a densely-defined, linear and strictly positive self-adjoint operator in X. Definition 4.2. We denote by W 1 A the complex Hilbert space 1 given by D(A 1/2 ) equipped with the scalar product | 1, defined by A may be regarded as a generalized Sobolev space. Remark 4.3. Note that, as a consequence of A, X) be a symmetric linear operator in X and I be a non-empty open interval of R. Definition 4.5. We define a solution space S I to consist of all differentiable u : for every t ∈ I. 1 Note that (4.0.9) contains two types of derivatives. Every first derivative of u is to be understood in the sense of derivatives of W 1 A -valued functions, whereas every further derivative is to be understood in the sense of derivatives of X-valued functions. Unless otherwise indicated, this convention is also adopted in the subsequent part of this section. On the other hand, since the imbedding W 1 A → X is continuous, differentiability in the sense of W 1 A -valued functions also implies differentiability in the sense of X-valued functions, including agreement of the corresponding derivatives. In particular, every u ∈ S I also satisfies the equation u (t) + iBu (t) + (A + C)u(t) = 0 (4.0.10) for every t ∈ I, where here all derivatives are to be understood in the sense of derivatives of X-valued functions. Further, note that the assumptions on C, in general, do not imply that A + C is self-adjoint. Remark 4.6. According to Theorem 4.11 in, see also Theorem 5.4.11 in, for every t 0 ∈ I, ∈ D(A) and ∈ W 1 A, there is a uniquely determined corresponding u ∈ S I such that u(t 0 ) = and u (t 0 ) =. The proof uses methods from the theory of semigroups of operators. Independently, the uniqueness of such u follows more elementary from energy estimates in part (iii) of the subsequent Lemma 4.7. Parts (i) and (ii) of the subsequent Lemma 4.7 give a "conserved current" and a "conserved energy", respectively, that are associated with solutions of (4.0.9). Part (iii) gives associated energy estimates, that, in particular, imply the uniqueness of the initial value problem for (4.0.9) stated in (iv). Lemma 4.7. Let u ∈ S I and t 0 ∈ I. Then the following holds. for every t ∈ I, is constant. for every t ∈ I, is constant. (iii) In addition, let A + C be semibounded with lower bound ∈ R. Then for t 1, t 2 ∈ I such that t 1 ≤ t 2. (iv) In addition, let A + C be semibounded. If v ∈ S I is such that Proof. See Appendix 2. The following example proves that it is possible that the energy assumes strictly negative values, but that the solutions of (4.0.9) are stable, i.e., that there are no exponentially growing solutions. This is different from the case of vanishing B, where there are unstable solutions of (4.0.9) if and only if the energy assumes strictly negative values. In addition, the value of the conserved energy E u corrresponding to the solution u of (4.0.9) with initial data u = t and u = t is < 0. There are other possible definitions for the energy that is associated with solutions of (4.0.9). 
In cases of vanishing B, such are usually not of further use. In the case of a nonvanishing B, they can be useful as is the case for the RKG. In this case, the positivity of E s,u for sufficiently large masses of the field and s = ma 2M r + for every t ∈ I, is constant. If A + C + s(B − s) is additionally semibounded with lower bound ∈ R, then Proof. See Appendix 2. Proof. The statement is a direct consequence of Corollary 4.9 (or Theorem 4.17 (ii) in, see also Theorem 5.4.17 (ii) in ). The following gives a connection of the operator 0 + sB − s 2, s ∈ R, and the Killing field ∂ t + s∂. The corresponding proof is given in Appendix 2. This connection sheds light on the previous proof of the positivity of 0 + sB − s 2 for s = ma/(2M r + ) for sufficiently large. Differently to g tt, the term g(∂ t + s∂, ∂ t + s∂ ) is positive in a neighbourhood of the event horizon, but gradually turns negative away from the horizon. The latter is compensated by the mass term 2 for sufficiently large. Lemma 4.13. Let s ∈ R and := ∂ t + s∂. Then Proof. See Appendix 2. Proof. See Appendix 2. The previous can be used to prove the stability of the solutions of (4.0.9) in particular cases where the operators A + C and B commute. Note that in these cases, there is a further conserved "energy" associated to the solutions of (4.0.9). Theorem 4.15. If, in addition, A + C is self-adjoint and semibounded, B is bounded, A + C and B commute, i.e., is positive, then there are no exponentially growing solutions of (4.0.9). Proof. The statement is a simple consequence of Lemma 4.14 and Lemma 4.7 (iii). Coming back to the statement of Lemma 4.14, for every t ∈ I, the corresponding A(t) is a densely-defined, linear and self-adjoint operator in X, see, e.g., Lemma 7.1, in the Appendix. In particular, if A + C + (1/4) B 2 is positive, A(t) is positive, too. For instance, according to Lemma 3.7, this is true in the special case of the Klein-Gordon equation (2.0.4). Hence in such case it might be expected that (4.0.14) for u ∈ S I implies that u is not exponentially growing since this is the case if A(t) = A for every t ∈ I, where A is a densely-defined, linear, positive self-adjoint operator in X. In that case, u is given by for all t 0, t ∈ I, where cos((t − t 0 )A 1/2 ) and sin((t − t 0 )A 1/2 /A 1/2 ) denote the bounded linear operators that are associated by the functional calculus for A 1/2 to the restriction of cos((t − t 0 ).id R ) and the restriction of the continuous extension of sin((t − t 0 ).id R )/id R to [0, ∞), respectively, to the spectrum of A 1/2. Note that the solutions (4.0.16) are in particular bounded if A is strictly positive. Unfortunately, this expectation is in general not true. A counterexample can be found already on the level of finite dimensional Hilbert spaces. Example 4.16. The example uses for the Hilbert space X the space C 2 equipped with the Euclidean scalar product, := A + C and B are the linear operators on C 2 whose representations with respect to the canonical basis are given by the matrices respectively. An analysis shows that and B are bounded linear and self-adjoint operators in X, is semibounded, B is positive and+(1/4)B 2 is even strictly positive. Further, and B do not commute. Finally, the operator polynomial (C → L(X, X), → − B − 2 ) has an eigenvalue with real part < 0. Therefore, in this case, there is an exponentially growing solution of the corresponding equation (4.0.10) and hence also of (4.0.14). 
Note that in this case, the corresponding family of operators (4.0.15) consists of strictly positive bounded self-adjoint linear operators whose spectra are bounded from below by a common strictly positive real number. Fig 2 gives the graph of p := (R → L(X, X), → det( − B − 2 )) = 4 + 4.6 3 + 4.29 2 − 1 which suggests that there are precisely two distinct simple roots. Indeed, this is true. The proof proceeds by a discussion of the graph of p using the facts that Thus, (C → L(X, X), → det( − B − 2 )) has two distinct simple real roots and a pair of simple complex conjugate roots. Discussion The mathematical investigation of the stability of Kerr space-time has started, but is still in the phase of the study of relevant model equations in a Kerr background. The study of the solutions of the Klein-Gordon equation is expected to give important insight into the problem. In Even in the presence of such a derivative, it is hard to believe that the addition of such term causes instability. In particular, the energy estimates in Lemma 4.7, indicate a stabilizing influence of such a term. On the other hand, so far, there is no result that would allow to draw such conclusion. The numerical results that indicate instability in the case = 0 make quite special assumptions on the values of the rotational parameter of the black hole that do not make them look very trustworthy. They could very well be numerical artefacts. Moreover, the numerical investigation by Konoplya et al.,, does not find any unstable modes and contradicts all these investigations. Also the analytical results in this area are not accompanied by error estimates and therefore ultimately inconclusive. Still, apart from, all these results are consistent with the estimate on in and the improved estimate of this paper, above which the solutions of the reduced, by separation in the azimuth angle in Boyer-Lindquist coordinates, Klein-Gordon equation are stable. It seems that the proof of the stability of the solutions of the wave equation in a Kerr background will soon be established. The question of the stability of the massive scalar field in a Kerr background is still an open problem, with only few rigorous results available, and displays surprising mathematical subtlety. In particular, in this case standard tools of theoretical physical investigation, including numerical investigations, seem too imprecise for analysis. Hence a rigorous mathematical investigation, like the one performed in this paper, seems to be enforced. Appendix 1 In the following, we give the proofs of the Lemmatas 2.1 and 2.2 from Section 2. Proof of Lemma 2.1. Proof. For this, let s ∈ R. Then Hence g(∂ t + s ∂, ∂ t + s ∂ ) has a positive extension to the boundary of s if and only if In this case, Proof of Lemma 2.2. Appendix 2 In the following, we give the omitted proofs from Sections 3 and 4. Proof of Lemma 4.7. Proof. '(i)': For this, let t ∈ I and h ∈ R such that t + h ∈ I. Then Hence it follows that j u,v is differentiable in t with derivative From the latter, we conclude that the derivative of j u,v vanishes and hence that j u,v is a constant function. '(ii)': For this, again, let t ∈ I and h ∈ R such that t + h ∈ I. Further, let := A + C. Then Hence it follows that E u is differentiable in t with derivative From the latter, we conclude that the derivative of E u vanishes and hence that E u is a constant function. '(iii)': Since A + C is semibounded with lower bound ∈ R, for every ∈ D(A). Hence it follows by (ii) that for every t ∈ R. 
If = 0, the latter implies that u (t) ≤ E 1/2 u for every t ∈ I. Hence it follows by weak integration in X, e.g., see Theorem 3.2.5 in, that where t 1, t 2 ∈ I are such that t 1 < t 2, and hence that For the weak integration, note that the inclusion of W 1 A into X is continuous. If > 0, it follows from (7.0.18) along with the parallelogram identity for elements of X that for t ∈ I. Hence it follows by weak integration in X that for all t 1, t 2 ∈ I such that t 1 < t 2. The latter implies that e 1/2 t2 u(t 2 ) ≤ e 1/2 t1 u(t 1 ) + (2E u /) 1/2 e 1/2 t2 − e 1/2 t1. Hence If < 0, it follows from (7.0.18) that for every t ∈ I, where a := − > 0. The latter implies that for every t ∈ I. Hence it follows by weak integration in X that where t 1, t 2 ∈ I are such that t 1 < t 2, and By help of the generalized Gronwall inequality from Lemma 3.1 in, from the latter we conclude that for t 1 ∈ I and t 2 ∈ I such that t 1 < t 2. '(iv)': For this, we define w := v − u. Then w is an element of S I such that w(t 0 ) = w (t 0 ) = 0. This implies that for every t ∈ I is constant of value 0. Hence we conclude from (iii) that w(t) = 0 X for all t ∈ I and therefore that v = u. Proof of Corollary 4.9. Proof of Lemma 4.13. Proof. First, we notice that the only non-vanishing components of (g ab ) (a,b)∈{t,r,,} 2 are given by g tt =, g t = g t = 2M ar △, g rr = − △, g = − 1, Further, we notice that = exp(tD) exp(hD) and hence that g := (I → X, s → exp(sD)f (s)) is differentiable in t with derivative In particular, this implies, if f is twice differentiable in t ∈ I, that g is twice differentiable in t with second derivative Applying the previous auxiliary result to D = (i/2)B proves that v is twice differentiable. In the following, we give some abstract lemmatas that are applied in the text. For the convenience of the reader, corresponding proofs are added. Hence also lim →∞ U =. Therefore, U (D) is a core for U AU −1. Finally, if A is positive, it follows for ∈ D(A) that U |(U A U −1 )U = U |U A = |A ≥ 0 and hence also the positivity of U AU −1. |
package draconictransmutation.gameObjs.registration.impl;
import draconictransmutation.gameObjs.registration.WrappedRegistryObject;
import draconictransmutation.utils.text.ILangEntry;
import net.minecraft.util.SoundEvent;
import net.minecraft.util.Util;
import net.minecraftforge.fml.RegistryObject;
public class SoundEventRegistryObject<SOUND extends SoundEvent> extends WrappedRegistryObject<SOUND> implements ILangEntry {
private final String translationKey;
public SoundEventRegistryObject(RegistryObject<SOUND> registryObject) {
super(registryObject);
translationKey = Util.makeTranslationKey("sound_event", this.registryObject.getId());
}
@Override
public String getTranslationKey() {
return translationKey;
}
} |
package agent
import (
"io"
"os"
"strings"
"github.com/danjacques/gofslock/fslock"
"github.com/tliron/kutil/logging"
problemspkg "github.com/tliron/kutil/problems"
"github.com/tliron/kutil/transcribe"
cloutpkg "github.com/tliron/puccini/clout"
"github.com/tliron/puccini/clout/js"
)
func (self *Agent) OpenServiceClout(namespace string, serviceName string) (fslock.Handle, *cloutpkg.Clout, error) {
if lock, err := self.lockPackage(namespace, "service", serviceName, false); err == nil {
cloutPath := self.getPackageMainFile(namespace, "service", serviceName)
log.Debugf("reading clout: %q", cloutPath)
if clout, err := cloutpkg.Load(cloutPath, "yaml"); err == nil {
return lock, clout, nil
} else {
logging.CallAndLogError(lock.Unlock, "unlock", log)
return nil, nil, err
}
} else {
return nil, nil, err
}
}
func (self *Agent) SaveServiceClout(serviceNamespace string, serviceName string, clout *cloutpkg.Clout) error {
cloutPath := self.getPackageMainFile(serviceNamespace, "service", serviceName)
log.Infof("writing to %q", cloutPath)
if file, err := os.OpenFile(cloutPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666); err == nil {
defer logging.CallAndLogError(file.Close, "file close", log)
return transcribe.WriteYAML(clout, file, " ", false)
} else {
return err
}
}
func (self *Agent) CoerceClout(clout *cloutpkg.Clout, copy_ bool) (*cloutpkg.Clout, error) {
coercedClout := clout
if copy_ {
var err error
if coercedClout, err = clout.AgnosticCopy(); err != nil {
return nil, err
}
}
problems := problemspkg.NewProblems(nil)
js.Coerce(coercedClout, problems, self.urlContext, true, "yaml", false, true)
return coercedClout, problems.ToError(true)
}
func (self *Agent) OpenFile(path string, coerceClout bool) (io.ReadCloser, error) {
if coerceClout {
if file, err := os.Open(path); err == nil {
defer logging.CallAndLogError(file.Close, "file close", log)
if clout, err := cloutpkg.Read(file, "yaml"); err == nil {
if clout, err = self.CoerceClout(clout, false); err == nil {
if code, err := transcribe.EncodeYAML(clout, " ", false); err == nil {
return io.NopCloser(strings.NewReader(code)), nil
} else {
return nil, err
}
} else {
return nil, err
}
} else {
return nil, err
}
} else {
return nil, err
}
} else {
return os.Open(path)
}
}
|
Hope as a strategy in supervising social workers of terminally ill patients. This article focuses on supervision of social workers who feel despair and hopelessness in treating terminally ill patients. The emotional difficulties that may lead to these feelings are discussed. A special model of supervision that relates to hope as a strategy to help social workers cope with such difficulties is presented. The model suggests goals in supervising such social workers and outlines the means and techniques for achieving the goals. |
Superpixel-Based Classification of Occlusal Caries Photography Superpixel segmentation of a simple photographic image for classifying occlusal caries according to lesion severity changes not only the way experts annotate the image, but also the way an automatic classifier is trained and evaluated. Working on an extension of the lower part of the 6-class ICDAS (International Caries Detection and Assessment System) scale, we build a classifier exhibiting a very low Random Forests OOB (Out-Of-Bag) error estimate, without applying any image enhancement or morphological operation techniques. We also demonstrate the robustness of the classifier's performance with respect to superpixel size by introducing a shrinking factor in the model's learning phase. Finally, we highlight the complications of evaluating the model's performance through cross-validation, which arise from the class imbalance across the limited image dataset.
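As a rough illustration of the learning setup described here, the sketch below trains a random forest with out-of-bag scoring on per-superpixel feature vectors. The feature extraction (mean colour per superpixel), the stand-in image, and the synthetic labels are assumptions made for the example, not the study's actual pipeline.

import numpy as np
from skimage import data, segmentation
from sklearn.ensemble import RandomForestClassifier

image = data.astronaut()                          # stand-in for an occlusal photograph
segments = segmentation.slic(image, n_segments=200, compactness=10)

# One feature vector per superpixel: mean RGB colour (real pipelines would add texture, etc.).
features = np.array([image[segments == s].mean(axis=0) for s in np.unique(segments)])

# Synthetic labels standing in for expert, ICDAS-derived annotations of each superpixel.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=len(features))

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(features, labels)
print("OOB error estimate:", 1.0 - clf.oob_score_)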
Lung cancer patients frequently visit the emergency room for cancer-related and -unrelated issues. Lung cancer patients visit the emergency room (ER) for cancer-related and -unrelated reasons more often compared to patients with other types of cancer. This results in increased admissions and deaths in the ER. In this study, we retrospectively reviewed the characteristics of lung cancer patients visiting the ER in order to optimize the utilization of emergency medical services and improve the patients' quality of life. Lung cancer patients visiting the ER of a single institution over a 2-year period were analyzed. The patients' chief complaints and diagnoses at presentation in the ER were classified as cancer-related and -unrelated. Hospital admission, discharge from the ER, hospital mortality and survival of advanced lung cancer patients hospitalized through admission to the ER was surveyed. A total of 113 patients visited the ER 143 times. Seventy visits (49.0%) were cancer-related and 73 (51.0%) were cancer-unrelated. Respiratory symptoms, pain, gastrointestinal and neurological events and fever were the most common cancer-related issues recorded. With the progression of cancer stage, the number of ER visits, admissions, ambulance use and hospital mortalities increased. In visits due to cancer-unrelated issues, including infection, cardiovascular and gastrointestinal diseases, fever was the most common complaint. Emergency admissions of advanced-stage patients for cancer-related issues revealed a significantly shorter median survival time compared to that for patients admitted for cancer-unrelated issues (61 vs. 406 days, respectively; P<0.05). It was observed that outpatients with lung cancer visited the ER for cancer-related and -unrelated reasons with a similar frequency. Therefore, accurate differential diagnosis in the ER is crucial for patients with lung cancer. |
// Package name assumed and imports inferred from usage; the InverseBloomFilter
// type and its index method are defined elsewhere in the package.
package boom

import (
	"bytes"
	"sync/atomic"
	"unsafe"
)

// Test will test for membership of the data and returns true if it is a
// member, false if not. This is a probabilistic test, meaning there is a
// non-zero probability of false negatives but a zero probability of false
// positives. That is, it may return false even though the data was added, but
// it will never return true for data that hasn't been added.
func (i *InverseBloomFilter) Test(data []byte) bool {
index := i.index(data)
indexPtr := (*unsafe.Pointer)(unsafe.Pointer(&i.array[index]))
val := (*[]byte)(atomic.LoadPointer(indexPtr))
if val == nil {
return false
}
return bytes.Equal(*val, data)
} |
// disabled/tokens.h
#ifndef _TOKENS_H_INCLUDED_
#define _TOKENS_H_INCLUDED_
namespace element
{
enum Token : int
{
T_EOF = 0,
T_NewLine,
T_Identifier, // name
T_Integer, // 123
T_Float, // 123.456
T_String, // "abc"
T_Bool, // true false
T_If, // if
T_Else, // else
T_Elif, // elif
T_For, // for
T_In, // in
T_While, // while
T_This, // this
T_Nil, // nil
T_Return, // return
T_Break, // break
T_Continue, // continue
T_Yield, // yield
T_And, // and
T_Or, // or
T_Xor, // xor
T_Not, // not
T_Underscore, // _
T_LeftParent, // (
T_RightParent, // )
T_LeftBrace, // {
T_RightBrace, // }
T_LeftBracket, // [
T_RightBracket, // ]
T_Column, // :
T_DoubleColumn, // ::
T_Semicolumn, // ;
T_Comma, // ,
T_Dot, // .
T_Add, // +
T_Subtract, // -
T_Divide, // /
T_Multiply, // *
T_Power, // ^
T_Modulo, // %
T_Concatenate, // ~
T_AssignAdd, // +=
T_AssignSubtract, // -=
T_AssignDivide, // /=
T_AssignMultiply, // *=
T_AssignPower, // ^=
T_AssignModulo, // %=
T_AssignConcatenate,// ~=
T_Assignment, // =
T_Equal, // ==
T_NotEqual, // !=
T_Less, // <
T_Greater, // >
T_LessEqual, // <=
T_GreaterEqual, // >=
T_Argument, // $ $1 $2 ...
T_ArgumentList, // $$
T_Arrow, // ->
T_ArrayPushBack, // <<
T_ArrayPopBack, // >>
T_SizeOf, // #
T_InvalidToken
};
const char* TokenAsString(Token token);
}
#endif // _TOKENS_H_INCLUDED_
|
Joint Impact of Physical Activity and Family History on the Development of Diabetes Among Urban Adults in Mainland China To examine the joint influences of physical activity (PA) and family history (FH) of diabetes on subsequent type 2 diabetes (T2D), the authors pooled and analyzed data from 2 community-based urban adult prospective cohort studies in 2011 in Nanjing, China. Among 4550 urban participants, the 3-year cumulative incidence of T2D was 5.1%. After adjustment for potential confounders, compared with those with FH+ and insufficient PA, the adjusted odds ratio (95% confidence interval) of developing T2D was 0.42 (0.18, 0.98) for participants with sufficient PA and FH+, 0.32 (0.22, 0.46) for participants with insufficient PA and FH−, and 0.15 (0.08, 0.28) for participants with sufficient PA and FH−. Such significant graduated associations between PA/FH and risk of developing T2D were also identified in either men or women, separately. Sufficient PA and FH− may jointly reduce the risk of developing T2D in urban Chinese adults. |
/*
* MIT License
*
* Copyright (c) 2019 Micro Focus or one of its affiliates.
*
* Licensed under the MIT License (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and limitations under the License.
*
*/
package com.microfocus.adm.pulse.pluginapi.event.dto;
import java.util.Date;
/**
* Describes the review comment event that has occurred.
*/
public interface PulseCommentEvent {
/**
* @return Type of action that occurred to the comment.
*/
PulseCommentAction getAction();
/**
* @return Unique identifier for the comment thread.
*/
String getThreadId();
/**
* @return Date the comment or response was made.
*/
Date getDate();
/**
* @return Login for the user who made the comment or response.
*/
String getLoginName();
/**
* If the comment is a response to another comment, this is the unique identifier of the response. The threadId holds the
* identifier of the thread that this comment belongs to.
*
* @return Identifier of the comment this is a response to or null if not a response.
*/
String getResponseId();
/**
* If the comment was made in the context of a review, this is the review label.
*
* @return Label of the review or null if the comment is made outside of a review.
*/
String getReviewLabel();
/**
* @return Current body of the comment. Valid only for {@link PulseCommentAction#CREATED} and {@link PulseCommentAction#EDITED}
* events.
*/
String getBody();
/**
* @return Previous body of the comment. Valid only for {@link PulseCommentAction#EDITED} and {@link PulseCommentAction#DELETED}
* events.
*/
String getPreviousBody();
/**
* If the comment is on a file this is the path to the file.
*
* @return File path or null if not a comment on a file.
*/
String getPath();
/**
* If the comment is on a file this is the SCM identifier for the file.
*
* @return SCM identifier of the file or null if the comment is not for a file.
*/
String getFileScmId();
/**
* SCM identifier of the changeset the file belongs to. Only valid if the comment is about a file.
*
* @return Changeset SCM identifier or null if the comment is not for a file.
*/
String getChangesetId();
/**
* If the comment is on a file, this is the starting line number within the file the comment applies to. If the comment is not
* on a file, then this will be 0.
*
* @return First line in the file the comment applies to.
*/
int getStartLine();
/**
* If the comment is on a file, this is the end line number within the file the comment applies to. If the comment is on a
* single line, then this will be the same as the start line number. If the comment is not on a file, then this will be 0.
*
* @return Last line in the file the comment applies to.
*/
int getEndLine();
}
|
Luminance Adaptive Coding of Chrominance Signals We describe two techniques for digital coding of the chrominance components of a color television signal. Both techniques make use of an observation that in color pictures most of the locations of large spatial changes in the chrominance are coincident with large spatial changes in the luminance. This allows us to predict the chrominance samples more efficiently using the previously transmitted chrominance and luminance samples, and the present luminance sample. In general, we determine which of the previous luminance samples best represents the present luminance sample and use the corresponding previous chrominance sample to represent the present chrominance sample. We present results of computer simulations of two such coding schemes. The first scheme, in which the chrominance components are coded by a DPCM coder, uses adaptive prediction of the chrominance components based on the luminance. In the second scheme, the chrominance signal is adaptively extrapolated from its past using the luminance signal for adaptation. Only those chrominance samples where the extrapolation error is more than a threshold are transmitted to the receiver. The addresses of such samples are derived from the luminance signal and therefore need not be transmitted. Our computer simulations on videotelephone-type pictures indicate that, for the predictive coding, the entropy of the coded chrominance signals can be reduced by about 15 to 20 percent by adaptation. This results in a bit rate of 0.55 bits/luminance pel for transmission of chrominance information. Using adaptive extrapolation, only about 20 percent of the chrominance samples need to be transmitted, which results in a bit rate of approximately 0.58 bits/luminance pel. |
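The luminance-adaptive prediction described in the abstract above can be illustrated with a short NumPy sketch: for each pixel, the previously coded neighbour whose luminance best matches the current luminance sample is chosen, and its chrominance value is used as the predictor. This is only an illustrative reconstruction of the idea under assumed simplifications (a left/top neighbour set and plain arrays instead of a real DPCM loop); it is not the coder described in the paper.
import numpy as np

def predict_chroma(Y, C):
    """Luminance-adaptive chrominance prediction (illustrative sketch).

    Y, C: 2-D arrays holding luminance and one chrominance component.
    Returns the prediction residual for every pixel, using the left and
    top neighbours as the candidate predictors.
    """
    residual = np.zeros_like(C, dtype=float)
    for i in range(1, Y.shape[0]):
        for j in range(1, Y.shape[1]):
            # Candidate previously transmitted samples: left and top neighbours.
            candidates = [(i, j - 1), (i - 1, j)]
            # Pick the neighbour whose luminance best matches the present
            # luminance sample ...
            best = min(candidates, key=lambda p: abs(Y[p] - Y[i, j]))
            # ... and predict the chrominance from that neighbour's
            # co-located chrominance sample.
            residual[i, j] = C[i, j] - C[best]
    return residual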
def GetApproverIds(issue):
approver_ids = []
for av in issue.approval_values:
approver_ids.extend(av.approver_ids)
return list(set(approver_ids)) |
import { Component, OnInit } from '@angular/core';
import { Login } from '@stryker-mutator/dashboard-contract';
import { AuthService } from '../auth/auth.service';
import { Router } from '@angular/router';
import { AutoUnsubscribe } from '../utils/auto-unsubscribe';
@Component({
selector: 'stryker-user',
templateUrl: './user.component.html',
styleUrls: ['./user.component.css']
})
export class UserComponent extends AutoUnsubscribe implements OnInit {
user: Login | null = null;
expanded = false;
constructor(private authService: AuthService, private router: Router) {
super();
}
ngOnInit() {
this.subscriptions.push(this.authService.currentUser$.subscribe(user => {
this.user = user;
}));
}
logOut() {
this.authService.logOut();
this.user = null;
this.router.navigate(['/']);
}
}
|
# -*- coding: utf-8 -*-
from data.reader import wiki_from_pickles, corpora_from_pickles
from data.corpus import Sentences
from lexical_diversity import lex_div
import numpy as np
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
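# my_hdd below computes HD-D (hypergeometric distribution diversity): the expected
# type-token ratio of a random 1000-token sample, obtained by summing, over all word
# types, the probability of drawing that type at least once, scaled by 1/1000.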
def my_hdd(text):
def choose(n, k):
if 0 <= k <= n:
ntok = 1
ktok = 1
for t in range(1, min(k, n - k) + 1):
ntok *= n
ktok *= t
n -= 1
return ntok // ktok
else:
return 0
def hyper(successes, sample_size, population_size, freq):
try:
prob_1 = 1.0 - ((choose(freq, successes) *
choose((population_size - freq),(sample_size - successes)))/
choose(population_size, sample_size))
prob_1 = prob_1 * (1/sample_size)
except ZeroDivisionError:
prob_1 = 0
return prob_1
frequency_dict = Counter(text)
n_toks = len(text)
return sum(hyper(0, 1000, n_toks, f) for f in frequency_dict.values())
def lex_div_dist_plots(tfs, srfs, unis, div_f, save_dir):
hist_args = dict(alpha=1.0)
for param, div_vals in tfs.items():
sns.distplot(div_vals, label="TF " + str(param), hist_kws=hist_args)
for param, div_vals in srfs.items():
sns.distplot(div_vals, label="SRF " + str(param), hist_kws=hist_args)
sns.distplot(unis, label="UNIF", axlabel=div_f.__name__, hist_kws=hist_args)
plt.legend()
plt.savefig(save_dir + div_f.__name__ + "_dist_plot.png", dpi=300)
plt.close()
def lex_div_means(tfs, srfs, unis, div_f, save_dir):
with open(save_dir + div_f.__name__ + "_means.txt", "w") as handle:
for param, lex_div_ls in tfs.items():
handle.write("TF " + str(param) + "\t")
handle.write(str(np.mean(lex_div_ls).round(3)))
handle.write("\t" + str(np.sqrt(np.var(lex_div_ls)).round(3)))
handle.write("\n")
for param, lex_div_ls in srfs.items():
handle.write("SRF " + str(param) + "\t")
handle.write(str(np.mean(lex_div_ls).round(3)))
handle.write("\t" + str(np.sqrt(np.var(lex_div_ls)).round(3)))
handle.write("\n")
handle.write("UNIF " + "\t")
handle.write(str(np.mean(unis).round(3)))
handle.write("\t" + str(np.sqrt(np.var(unis)).round(3)))
import argparse
def parse_args():
p = argparse.ArgumentParser()
p.add_argument("--lang", type=str)
p.add_argument("--factors", nargs="*", type=int, default=[])
p.add_argument("--hist_lens", nargs="*", type=int, default=[])
args = p.parse_args()
return args.lang, args.factors, args.hist_lens
def get_filters(filter_dir, k, names, param_name, param_ls):
filters_dict = {}
for param in param_ls:
all_samples = corpora_from_pickles(filter_dir, names=names)
cur_param_filters = [Sentences(c) for name_d, c in all_samples if
name_d["k"] == k and name_d[param_name] == param]
filters_dict[param] = cur_param_filters
return filters_dict
def lex_div_main(tfs, srfs, unis, results_d):
factors = sorted(tfs.keys())
hist_lens = sorted(srfs.keys())
half_factors = factors[1::2]
half_tfs = {k: tfs[k] for k in half_factors}
half_hist_lens = hist_lens[1::2]
half_srfs = {k: srfs[k] for k in half_hist_lens}
cutoff = int(1e5)
for div_f in [lex_div.mtld, my_hdd]:
print("\nlex div with " + div_f.__name__, flush=True)
tf_mtlds = {param: [div_f(list(s.tokens())[:cutoff]) for s in samples]
for param, samples in half_tfs.items()}
print("done with ", div_f.__name__, " for TF", flush=True)
srf_mtlds = {param: [div_f(list(s.tokens())[:cutoff]) for s in samples]
for param, samples in half_srfs.items()}
print("done with ", div_f.__name__, " for SRF", flush=True)
uni_mtlds = [div_f(list(s.tokens())[:cutoff]) for s in unis]
print("done with ", div_f.__name__, " for UNI", flush=True)
lex_div_dist_plots(tf_mtlds, srf_mtlds, uni_mtlds, div_f, save_dir=results_d)
lex_div_means(tf_mtlds, srf_mtlds, uni_mtlds, div_f, save_dir=results_d)
|
//
// This file is part of the FFEA simulation package
//
// Copyright (c) by the Theory and Development FFEA teams,
// as they appear in the README.md file.
//
// FFEA is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// FFEA is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with FFEA. If not, see <http://www.gnu.org/licenses/>.
//
// To help us fund FFEA development, we humbly ask that you cite
// the research papers on the package.
//
#include "MatrixFixedSparsityPattern.h"
int MatrixFixedSparsityPattern::init(tetra_element_linear *elem, int num_elements) {
vector< vector<sparse_count> > all_entries;
int n, ni, nj;
for (n = 0; n < num_elements; n++) {
// add mass matrix for this element
for (int i = 0; i < 4; i++) {
for (int j = 0; j < 4; j++) {
ni = elem[n].n[i]->index;
nj = elem[n].n[j]->index;
}
}
}
return 0;
}
|
package gnu.mapping;
import gnu.text.*;
import gnu.lists.*;
/** An Inport for reading from a char array.
* Essentially the same as an InPort wrapped around a CharArrayReader, but
* more efficient because it uses the char array as the InPort's buffer. */
public class CharArrayInPort extends InPort
{
static final Path stringPath = Path.valueOf("<string>");
  public static CharArrayInPort make
/* #ifdef use:java.lang.CharSequence */
(CharSequence seq)
/* #else */
// (CharSeq seq)
/* #endif */
{
if (seq instanceof FString)
{
FString fstr = (FString) seq;
return new CharArrayInPort(fstr.data, fstr.size);
}
else
{
int len = seq.length();
char[] buf = new char[len];
/* #ifdef use:java.lang.CharSequence */
if (seq instanceof String)
((String) seq).getChars(0, len, buf, 0);
else if (! (seq instanceof CharSeq))
for (int i = len; --i >= 0; )
buf[i] = seq.charAt(i);
else
/* #endif */
((CharSeq) seq).getChars(0, len, buf, 0);
return new CharArrayInPort(buf, len);
}
}
public CharArrayInPort (char[] buffer, int len)
{
super(NullReader.nullReader, stringPath);
try
{
setBuffer(buffer);
}
catch (java.io.IOException ex)
{
throw new Error(ex.toString()); // Can't happen.
}
limit = len;
}
public CharArrayInPort (char[] buffer)
{
this(buffer, buffer.length);
}
public CharArrayInPort (String string)
{
this(string.toCharArray());
}
public int read () throws java.io.IOException
{
if (pos >= limit)
return -1;
return super.read();
}
}
|
package no.stelar7.api.r4j.tests.val;
import no.stelar7.api.r4j.basic.constants.api.regions.ValorantShard;
import no.stelar7.api.r4j.basic.constants.types.val.Season;
import no.stelar7.api.r4j.impl.R4J;
import no.stelar7.api.r4j.impl.val.VALRankedAPI;
import no.stelar7.api.r4j.pojo.val.ranked.Leaderboard;
import no.stelar7.api.r4j.tests.SecretFile;
import org.junit.jupiter.api.Test;
public class TestVALRanked
{
@Test
public void getLeaderboard()
{
R4J api = new R4J(SecretFile.CREDS);
VALRankedAPI ranked = api.getVALAPI().getRankedAPI();
Leaderboard leaderboard = ranked.getLeaderboard(ValorantShard.EU, Season.EPISODE_2_ACT_1.getActId(), 0, 200);
System.out.println(leaderboard);
}
}
|
# Generated by Django 3.0.5 on 2020-04-26 13:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('learn_word', '0005_auto_20200424_1824'),
]
operations = [
migrations.RemoveField(
model_name='wordlearnsetting',
name='display_num',
),
migrations.AddField(
model_name='wordlearnsetting',
name='learn_num',
field=models.IntegerField(default=100),
),
]
|
The Indian government has been making a big push for digital payments since demonetization of old currency notes. In a move to promote Digital India initiative and encourage cashless payments, the government has introduced Unified Payments Interface (UPI) for smartphones, and Unstructured Supplementary Service Data (USSD) based mobile banking for basic and feature phones. The government has also introduced Bharat Interface for Money (BHIM) app, which is a unified UPI app, and Aadhaar Pay that enables you to make cashless money using Aadhaar card and your fingerprint for biometric authentication. Now, to further ease the payments process, government has introduced Bharat QR Code for cashless electronics payments.
Over the past few years, a lot of people have moved on from making cash based payments to cashless payments using credit and debit cards. However, this mode of electronic payment for cashless transactions has strings attached in terms of transaction fees and the cost of owning and running the card swipe machines. With Bharat QR Code, the government has taken yet another step to promote digital payments by simplifying things for merchants and also for the consumers. Let’s dive in a little deeper to understand what is Bharat QR Code all about, how it works and what are the benefits.
A look at current QR Code-based payments
The QR code-based payments are accepted by most merchants across India, but they are largely closed systems. Visa is a pioneer in QR Code payments and it has already launched mVisa in India a year-and-a-half ago. Last year, DTH operator TataSky had partnered with Visa to allow and accept QR code-based payments from its subscribers. In fact, mVisa is the widely accepted payment option across the globe.
In November 2016, MasterCard launched its ‘Masterpass QR service’ in partnership with Ratnakar Bank’s Ongo payment wallet. RuPay was also expected to come up with its QR code-based solution, but there is no word as yet. Since demonetization, e-wallet apps such as Paytm, Freecharge and Mobikwik, among others have also seen a surge in usage. While these wallet apps also allow QR code-based payments, both parties need to have the app.
For instance, if you are transferring money using Paytm, the recipient needs to have Paytm account and app installed in their smartphone. However, there is no unified solution for the same. Meaning, I won’t be able to transfer money from Paytm wallet to a recipient using Freecharge or MobiKwik. This is where Bharat QR Code will greatly help.
What is Bharat QR Code and what are its benefits?
Bharat QR Code is a common QR code built for ease of payments. It is a standard that will support Visa, MasterCard and Rupay cards for wider acceptance. Currently, if you want to make a cashless payment at most stores, you need a credit and debit card to swipe and enter the PIN code for authentication.
However, Bharat QR code will enable the merchants to accept digital payments without the Point of Sale (PoS) swiping machine. It will allow customers of any bank to use their smartphone app to make payment using their debit card. In terms of benefits, merchants will no longer need to invest in buying the PoS machine. With no PoS machine, merchants will also be able to do away with the transaction fees charged by the banks for using the PoS terminal.
How to make payments to merchants using Bharat QR Code
Currently, Bharat QR is integrated into ICICI Bank’s Pockets app and HDFC Bank’s PayZApp, with more banks expected to update their apps with support for the same. I tried making payment using ICICI Bank’s Pocket app and it worked like a charm. To test the feature, I generated a QR code using BHIM app. Next, I used Pocket’s app to scan the QR code and the payment successfully went through via UPI.
In short, you will need your bank app or BHIM app installed in your phone. At the merchant’s store, open the app, tap on Scan QR Code or Scan & Pay (option will differ from one bank to the other) and scan the Bharat QR Code. Once the code is scanned, you will have to enter the amount that you want to pay, add a remark and enter your four-digit passcode. Once authentication is complete, the money will be transferred to the merchant’s bank account.
How will merchants benefit from Bharat QR Code?
Merchants will simply need to generate the Bharat QR Code, take a printout and stick it at their payment desk. The payment will take place via the Immediate Payment Service (IMPS) and the money will be instantly credited into their bank account. Unlike Paytm, Freecharge and Mobikwik, merchants will no longer have limits on the amount of money that they can accept every month. Also, the hassle of transferring money from wallet to bank account will be eliminated — further making it easier to accept digital money.
Security benefits of Bharat QR Code
Currently, when you hand over your credit / debit card to be swiped, there is a possibility of someone capturing crucial details such as the card number, expiry date and CVV. While OTP-based two-step verification is enabled before a transaction is authenticated, the risk of exposing your card details still remains. In case of Bharat QR Code, the transaction is completed with enhanced security, and card details remain in the customer’s control, which is a big advantage.
|
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
#ifndef OCEANBASE_OBSERVER_VIRTUAL_TABLE_OB_INFORMATION_COLUMNS_TABLE_
#define OCEANBASE_OBSERVER_VIRTUAL_TABLE_OB_INFORMATION_COLUMNS_TABLE_
#include "share/ob_virtual_table_scanner_iterator.h"
#include "share/schema/ob_schema_struct.h"
#include "share/schema/ob_table_schema.h"
namespace oceanbase {
namespace share {
namespace schema {
class ObDatabaseSchema;
class ObTableSchema;
class ObColumnSchemaV2;
} // namespace schema
} // namespace share
namespace observer {
class ObInfoSchemaColumnsTable : public common::ObVirtualTableScannerIterator {
static const int32_t COLUMNS_COLUMN_COUNT = 21;
enum COLUMN_NAME {
TABLE_SCHEMA = common::OB_APP_MIN_COLUMN_ID,
TABLE_NAME,
TABLE_CATALOG,
COLUMN_NAME,
ORDINAL_POSITION,
COLUMN_DEFAULT,
IS_NULLABLE,
DATA_TYPE,
CHARACTER_MAXIMUM_LENGTH,
CHARACTER_OCTET_LENGTH,
NUMERIC_PRECISION,
NUMERIC_SCALE,
DATETIME_PRECISION,
CHARACTER_SET_NAME,
COLLATION_NAME,
COLUMN_TYPE,
COLUMN_KEY,
EXTRA,
PRIVILEGES,
COLUMN_COMMENT,
GENERATION_EXPRESSION
};
public:
ObInfoSchemaColumnsTable();
virtual ~ObInfoSchemaColumnsTable();
virtual int inner_get_next_row(common::ObNewRow*& row);
virtual void reset();
inline void set_tenant_id(uint64_t tenant_id)
{
tenant_id_ = tenant_id;
}
private:
DISALLOW_COPY_AND_ASSIGN(ObInfoSchemaColumnsTable);
int fill_row_cells(const common::ObString& database_name, const share::schema::ObTableSchema* table_schema,
const share::schema::ObColumnSchemaV2* column_schema, const uint64_t ordinal_position);
int check_database_table_filter();
/**
* Iterate through all the tables and fill row cells,
* if is_filter_table_schema is false, last_db_schema_idx
* must be a valid value. else, use -1.
*/
int iterate_table_schema_array(const bool is_filter_table_schema, const int64_t last_db_schema_idx);
// Iterate through all the columns and fill cells
int iterate_column_schema_array(const common::ObString& database_name,
const share::schema::ObTableSchema& table_schema, const int64_t last_db_schema_idx, const int64_t last_table_idx,
const bool is_filter_table_schema);
/**
   * If the call to ob_sql_type_str fails with the error code OB_SIZE_OVERFLOW,
   * realloc the memory to OB_MAX_EXTENDED_TYPE_INFO_LENGTH and try again.
*/
int get_type_str(const ObObjMeta& obj_meta, const ObAccuracy& accuracy, const common::ObIArray<ObString>& type_info,
const int16_t default_length_semantics, int64_t& pos);
private:
uint64_t tenant_id_;
int64_t last_schema_idx_;
int64_t last_table_idx_;
int64_t last_column_idx_;
bool has_more_;
char* data_type_str_;
char* column_type_str_;
int64_t column_type_str_len_;
bool is_filter_db_;
int64_t last_filter_table_idx_;
int64_t last_filter_column_idx_;
common::ObSEArray<const share::schema::ObDatabaseSchema*, 8> database_schema_array_;
common::ObSEArray<const share::schema::ObTableSchema*, 16> filter_table_schema_array_;
};
} // namespace observer
} // namespace oceanbase
#endif // OCEANBASE_OBSERVER_VIRTUAL_TABLE_OB_INFORMATION_COLUMNS_TABLE_
|
/**
* Thin client implementation of {@link BinaryMetadataHandler}.
*/
private class ClientBinaryMetadataHandler implements BinaryMetadataHandler {
/** In-memory metadata cache. */
private volatile BinaryMetadataHandler cache = BinaryCachingMetadataHandler.create();
/** {@inheritDoc} */
@Override public void addMeta(int typeId, BinaryType meta, boolean failIfUnregistered)
throws BinaryObjectException {
if (cache.metadata(typeId) == null) {
try {
ch.request(
ClientOperation.PUT_BINARY_TYPE,
req -> serDes.binaryMetadata(((BinaryTypeImpl)meta).metadata(), req.out())
);
}
catch (ClientException e) {
throw new BinaryObjectException(e);
}
}
cache.addMeta(typeId, meta, failIfUnregistered); // merge
}
/** {@inheritDoc} */
@Override public void addMetaLocally(int typeId, BinaryType meta, boolean failIfUnregistered)
throws BinaryObjectException {
throw new UnsupportedOperationException("Can't register metadata locally for thin client.");
}
/** {@inheritDoc} */
@Override public BinaryType metadata(int typeId) throws BinaryObjectException {
BinaryType meta = cache.metadata(typeId);
if (meta == null) {
BinaryMetadata meta0 = metadata0(typeId);
if (meta0 != null) {
meta = new BinaryTypeImpl(marsh.context(), meta0);
cache.addMeta(typeId, meta, false);
}
}
return meta;
}
/** {@inheritDoc} */
@Override public BinaryMetadata metadata0(int typeId) throws BinaryObjectException {
BinaryMetadata meta = cache.metadata0(typeId);
if (meta == null) {
try {
meta = ch.service(
ClientOperation.GET_BINARY_TYPE,
req -> req.out().writeInt(typeId),
res -> {
try {
return res.in().readBoolean() ? serDes.binaryMetadata(res.in()) : null;
}
catch (IOException e) {
throw new BinaryObjectException(e);
}
}
);
}
catch (ClientException e) {
throw new BinaryObjectException(e);
}
}
return meta;
}
/** {@inheritDoc} */
@Override public BinaryType metadata(int typeId, int schemaId) throws BinaryObjectException {
BinaryType meta = metadata(typeId);
return meta != null && ((BinaryTypeImpl)meta).metadata().hasSchema(schemaId) ? meta : null;
}
/** {@inheritDoc} */
@Override public Collection<BinaryType> metadata() throws BinaryObjectException {
return cache.metadata();
}
/**
* Clear local cache on reconnect.
*/
void onReconnect() {
cache = BinaryCachingMetadataHandler.create();
}
} |
Party Cohesion in Westminster Systems: Inducements, Replacement and Discipline in the House of Commons, 1836–1910 This article considers the historical development of a characteristic crucial for the functioning and normative appeal of Westminster systems: cohesive legislative parties. It gathers the universe of the 20,000 parliamentary divisions that took place between 1836 and 1910 in the British House of Commons, constructs a voting record for every Member of Parliament (MP) serving during this time, and conducts analysis that aims to both describe and explain the development of cohesive party voting. In line with previous work, it shows that with the exception of a chaotic period in the 1840s and 1850s, median discipline was always high and increased throughout the century. The study uses novel methods to demonstrate that much of the rise in cohesion results from the elimination of a rebellious left tail from the 1860s onwards, rather than central tendency shifts. In explaining the aggregate trends, the article uses panel data techniques and notes that there is scant evidence for replacement explanations that involve new members behaving in more disciplined ways than those leaving the chamber. It offers evidence that more loyal MPs were more likely to obtain ministerial posts, and speculates that this and other inducement-based accounts offer more promising explanations of increasingly cohesive parties. |
import scipy.signal
import numpy as np
import tensorflow as tf
from collections import namedtuple
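# discount() returns the discounted cumulative sum of x. When per-step elapsed
# times are supplied, each future return is weighted by gamma raised to that
# step's elapsed time; otherwise standard geometric discounting is computed
# with an IIR filter over the reversed sequence.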
def discount(x, gamma, time=None):
if time is not None and time.size > 0:
y = np.array(x, copy=True)
for i in reversed(range(y.size-1)):
y[i] += (gamma ** time[i]) * y[i+1]
return y
else:
return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]
Batch = namedtuple("Batch", ["si", "a", "adv", "r",
"terminal", "features", "reward", "step", "meta"])
def huber_loss(delta, sum=True):
if sum:
return tf.reduce_sum(tf.where(tf.abs(delta) < 1,
0.5 * tf.square(delta),
tf.abs(delta) - 0.5))
else:
return tf.where(tf.abs(delta) < 1,
0.5 * tf.square(delta),
tf.abs(delta) - 0.5)
def lower_triangular(x):
return tf.matrix_band_part(x, -1, 0)
def to_bool(x):
return x == 1
def parse_to_num(s):
l = s.split(',')
for i in range(0, len(l)):
try:
l[i] = int(l[i])
except ValueError:
l = []
break
return l
|
/**
* Simplified tree-like model for a query.
* - SELECT : All the children are list of joined query models in the FROM clause.
* - UNION : All the children are united left and right query models.
* - TABLE and FUNCTION : Never have child models.
*/
private static final class QueryModel extends ArrayList<QueryModel> {
/** */
@GridToStringInclude
final Type type;
/** */
GridSqlAlias uniqueAlias;
/** */
GridSqlAst prnt;
/** */
int childIdx;
/** If it is a SELECT and we need to split it. Makes sense only for type SELECT. */
@GridToStringInclude
boolean needSplit;
/** If we have a child SELECT that we should split. */
@GridToStringInclude
boolean needSplitChild;
/** If this is UNION ALL. Makes sense only for type UNION.*/
boolean unionAll = true;
/**
* @param type Type.
* @param prnt Parent element.
* @param childIdx Child index.
* @param uniqueAlias Unique parent alias of the current element.
* May be {@code null} for selects inside of unions or top level queries.
*/
QueryModel(Type type, GridSqlAst prnt, int childIdx, GridSqlAlias uniqueAlias) {
this.type = type;
this.prnt = prnt;
this.childIdx = childIdx;
this.uniqueAlias = uniqueAlias;
}
/**
* @return The actual AST element for this model.
*/
private <X extends GridSqlAst> X ast() {
return prnt.child(childIdx);
}
/**
* @return {@code true} If this is a SELECT or UNION query model.
*/
private boolean isQuery() {
return type == Type.SELECT || type == Type.UNION;
}
/** {@inheritDoc} */
@Override public String toString() {
return S.toString(QueryModel.class, this);
}
} |
POWER GRID DYNAMICS: ENHANCING POWER SYSTEM OPERATION THROUGH PRONY ANALYSIS Prony Analysis is a technique used to decompose a signal into a series consisting of weighted complex exponentials and promises to be an efficient way of recognizing sensitive lines during faults in power systems such as the U.S. power grid. Positive Sequence Load Flow (PSLF) was used to simulate the performance of a simple two-area-four-generator system and the reaction of the system during a line fault. The Dynamic System Identification (DSI) Toolbox was used to perform Prony analysis and use modal information to identify key transmission lines for power flow adjustment to improve system damping. The success of the application of Prony analysis methods to the data obtained from PSLF is reported, and the key transmission line for adjustment is identified. Future work will focus on larger systems and improving the current algorithms to deal with networks such as large portions of the Western Electricity Coordinating Council (WECC) power grid. |
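As a concrete illustration of the technique named in the abstract above, a minimal textbook Prony fit can be written in a few lines of NumPy. This is only a sketch of the classical method (model order p chosen by the user, uniformly sampled real-valued data assumed); it is not the DSI Toolbox implementation used in the study.
import numpy as np

def prony(x, p, dt=1.0):
    """Classical Prony fit: x[n] ~= sum_k B_k * z_k**n (illustrative sketch).

    x  : uniformly sampled, real-valued ringdown signal
    p  : assumed model order (number of damped exponentials)
    dt : sampling interval, used only to report continuous-time modes
    Returns (z, B, s): discrete poles, complex amplitudes and the
    continuous-time exponents s_k = ln(z_k)/dt.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1) Linear prediction: x[n] + a1*x[n-1] + ... + ap*x[n-p] = 0 for n >= p.
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:N], rcond=None)[0]
    # 2) The poles are the roots of the characteristic polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Amplitudes from a Vandermonde least-squares fit, V[n, k] = z_k**n.
    V = np.vander(z, N, increasing=True).T
    B = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, B, np.log(z) / dt
The modal damping and frequency of each term then follow from the real and imaginary parts of s_k = ln(z_k)/dt.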
MEXICO CITY — Police and human rights activists headed to an isolated river town along the Honduran coast Thursday to investigate what happened last week during a gun battle that local officials say left four innocent people, including two pregnant women, dead in a drug bust orchestrated by U.S. agents.
Relatives began to come forward, trying to convince investigators — and the U.S. government — that those shot were not drug smugglers, but locals who use the rivers as roads and were moving from place to place when attacked in a raid by Honduran police flying in U.S. helicopters and aided by U.S. Drug Enforcement Administration agents.
News of the firefight, along with allegations of the four deaths, led local residents to torch several houses and demand that U.S. drug enforcement agents leave the area, according to the mayor.
The gun battle took place Friday along the muddy Patuca River, near the northeastern town of Ahuas, as Honduran forces assisted by U.S. advisers in four U.S. helicopters sought to seize a load of cocaine being moved from an illicit jungle airstrip to a waiting boat.
“First the narcos opened fire, and later the DEA helicopters were searching the area, and they fired with their guns at the boat with civilians, thinking maybe they were the narcos,” the mayor of Ahuas, Lucio Vaquedano, said in an interview.
Vaquedano said that several bodies had been recovered and that the boat the civilians were allegedly aboard was pocked with large bullet holes, which he said police told him were from a .50-caliber gun, the kind used by a door gunner on one of the U.S.-supplied helicopters. “It was easy to get confused because there were two boats, and the narco boat didn’t have lights and the civilian boat was running with its lights on,” he said.
Ahuas is a town of 1,500 people, many of them members of indigenous Miskito tribes, in the state of Gracias a Dios.
U.S. officials said Thursday that at least “several” DEA agents had served as advisers during the raid but that the American officers, while armed for self-defense, did not fire their weapons.
The U.S. officials, representing law enforcement agencies, and diplomats who have been briefed on the mission also cast doubt on the allegations that innocent people were killed during the 2 a.m. mission, though they said an investigation is ongoing. The U.S. officials said it was not unusual for local authorities to work with smugglers and also said they wondered why innocent civilians would be on the water in the middle of the night.
With congressional approval and in coordination with the State Department’s Narcotics Affairs Section, the DEA has sent advisory support teams to train and coordinate anti-drug operations with units of the Honduran National Police. These military-style fast-response units, which use U.S. intelligence, radar tracking of illicit flights and U.S. helicopters, seized 22 metric tons of cocaine last year — a record amount — but they have also generated controversy, as human rights advocates criticize the further militarization of the drug wars.
According to the U.S. account, an illegal flight by a mid-size propeller plane landed at 1:30 a.m. and a flyover by helicopters with night-vision capabilities counted 30 men on the ground at a small airstrip off-loading bundles from the plane into a waiting pickup truck.
Ten minutes later, the truck arrived at the river and the men started to move bundles into a waiting boat. The helicopters landed, and the workers fled. Honduran and U.S. officials said the operation netted 14 bundles containing 450 kilograms — almost 1,000 pounds — of cocaine.
At 2:30 a.m., according to the same officials, another boat arrived and began firing at the Honduran police and DEA agents. One of the helicopters, which may have been on the ground, returned fire and the boat sped away, the officials said.
Ruth Donaire, a spokeswoman for the National Police in Honduras, said a commission had been established to investigate the case but noted the difficulty of access to the area. “We cannot confirm the number of dead because we currently have four different versions of the facts, with four different figures,” she said.
U.S. and Honduran human rights activists demanded more information.
Correspondent Nick Miroff and researcher Gabriela Martinez contributed to this report. |
A low-rank approach for interference management in dense wireless networks The curse of big data, propelled by the explosive growth of mobile devices, places overwhelming pressure on wireless communications. Network densification is a promising approach to improve the area spectral efficiency, but acquiring massive channel state information (CSI) for effective interference management becomes a formidable task. In this paper, we propose a novel interference management method which only requires the network connectivity information, i.e., the knowledge of the presence of strong links, and statistical information of the weak links. Acquiring such mixed network connectivity information incurs significantly less overhead than complete CSI, and thus this method is scalable to large network sizes. To maximize the sum-rate with the mixed network connectivity information, we formulate a rank minimization problem to cancel strong interference and suppress weak interference, which is then solved by a Riemannian trust-region algorithm. Such an algorithm is robust to initial points and has a fast convergence rate. Simulation results show that our approach achieves a higher data rate than the state-of-the-art methods. |
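For readers unfamiliar with the rank-minimization step mentioned in the abstract above, the most common convex relaxation replaces rank with the nuclear norm, whose proximal operator is singular-value soft-thresholding. The sketch below shows only that generic building block; the paper itself optimizes over a fixed-rank manifold with a Riemannian trust-region algorithm, which is not reproduced here.
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm, the usual convex surrogate for matrix rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt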
package de.pcc.privacycrashcam.applicationlogic.camera;
/**
* Observes camera recordings. Gets informed about state changes of recording.
*/
public interface RecordCallback {
/**
* Called when recording starts.
*/
void onRecordingStarted();
/**
* Called when recording ends.
*/
void onRecordingStopped();
/**
* Called when an error occurs.
*
* @param errorMessage User readable error message to be displayed (or ignored..)
*/
void onError(String errorMessage);
}
|
// Tags returns all of the tags encountered in the error chain.
func Tags(errToWalk error) []interface{} {
allTags := []interface{}{}
walkErrorChain(errToWalk, func(err error) bool {
if tagged, ok := err.(tagger); ok {
allTags = append(allTags, tagged.Tags()...)
}
return false
})
return allTags
} |
The mobile environment of EHR browsing verified on a tablet terminal Medical records have recently become accessible through Electronic Health Record (EHR) systems in some regions, such as e-maiko.net. EHR puts patients' records in patients' hands, although authorized access requires a degree of information literacy. This research therefore aims to provide an environment for short-step EHR access. The authors propose an EHR design that separates the data-handling layer from the visualization layer. By constructing a web data-access interface and a device-optimized viewer for visualization, the system can select the visual interface depending on the situation and the viewing device. The environment was implemented on e-maiko.net, and an iPad application was provided to evaluate the optimized visualization. The experiment was carried out with 9 subjects who were experienced computer users but novices with tablet terminals. It compared the time taken to find a document with the conventional web interface and with the iPad viewer interface. The results suggest that the iPad viewer makes EHR access much easier than the conventional web interface. In interviews, half of the subjects raised security concerns about using mobile devices to access EHR, while the rest emphasized the benefit of short-step access over the security issue. In conclusion, the proposed system should be acceptable as an EHR browsing environment provided the security issues are sufficiently explained. |
/*
* Copyright (C) 2017-2018 Hazuki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package moe.feng.common.view.breadcrumbs;
import android.content.Context;
import android.support.annotation.IdRes;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v7.widget.ListPopupWindow;
import android.support.v7.widget.RecyclerView;
import android.view.ContextThemeWrapper;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.ViewTreeObserver;
import android.widget.AdapterView;
import android.widget.Button;
import android.widget.ImageButton;
import android.widget.ListAdapter;
import android.widget.SimpleAdapter;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import moe.feng.common.view.breadcrumbs.model.BreadcrumbItem;
class BreadcrumbsAdapter extends RecyclerView.Adapter<BreadcrumbsAdapter.ItemHolder> {
private final int DROPDOWN_OFFSET_Y_FIX;
private ArrayList<BreadcrumbItem> items;
private BreadcrumbsCallback callback;
private BreadcrumbsView parent;
private int mPopupThemeId = -1;
public BreadcrumbsAdapter(BreadcrumbsView parent) {
this(parent, new ArrayList<BreadcrumbItem>());
}
public BreadcrumbsAdapter(BreadcrumbsView parent, ArrayList<BreadcrumbItem> items) {
this.parent = parent;
this.items = items;
DROPDOWN_OFFSET_Y_FIX = parent.getResources().getDimensionPixelOffset(R.dimen.dropdown_offset_y_fix_value);
}
public @NonNull
ArrayList<BreadcrumbItem> getItems() {
return this.items;
}
public void setItems(@NonNull ArrayList<BreadcrumbItem> items) {
this.items = items;
}
public void setCallback(@Nullable BreadcrumbsCallback callback) {
this.callback = callback;
}
public @Nullable
BreadcrumbsCallback getCallback() {
return this.callback;
}
public void setPopupThemeId(@IdRes int popupThemeId) {
this.mPopupThemeId = popupThemeId;
}
@Override
public ItemHolder onCreateViewHolder(ViewGroup parent, int viewType) {
LayoutInflater inflater = LayoutInflater.from(parent.getContext());
if (viewType == R.layout.breadcrumbs_view_item_arrow) {
return new ArrowIconHolder(inflater.inflate(viewType, parent, false));
} else if (viewType == R.layout.breadcrumbs_view_item_text) {
return new BreadcrumbItemHolder(inflater.inflate(viewType, parent, false));
} else {
return null;
}
}
@Override
public void onBindViewHolder(ItemHolder holder, int position) {
int viewType = getItemViewType(position);
int truePos = viewType == R.layout.breadcrumbs_view_item_arrow ? ((position - 1) / 2) + 1 : position / 2;
holder.setItem(items.get(truePos));
}
@Override
public int getItemCount() {
return (items != null && !items.isEmpty()) ? (items.size() * 2 - 1) : 0;
}
@Override
public int getItemViewType(int position) {
return position % 2 == 1 ? R.layout.breadcrumbs_view_item_arrow : R.layout.breadcrumbs_view_item_text;
}
class BreadcrumbItemHolder extends ItemHolder<BreadcrumbItem> {
Button button;
BreadcrumbItemHolder(View itemView) {
super(itemView);
button = (Button) itemView;
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (callback != null) {
callback.onItemClick(parent, getAdapterPosition() / 2);
}
}
});
}
@Override
public void setItem(@NonNull BreadcrumbItem item) {
super.setItem(item);
button.setText(item.getSelectedItem());
button.setTextColor(
getAdapterPosition() == getItemCount() - 1
? parent.currentTextColor : parent.defaultTextColor
);
}
}
class ArrowIconHolder extends ItemHolder<BreadcrumbItem> {
ImageButton imageButton;
ListPopupWindow popupWindow;
ArrowIconHolder(View itemView) {
super(itemView);
imageButton = (ImageButton) itemView;
imageButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (item.hasMoreSelect()) {
try {
popupWindow.show();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
imageButton.setColorFilter(parent.defaultTextColor);
createPopupWindow();
}
@Override
public void setItem(@NonNull BreadcrumbItem item) {
super.setItem(item);
imageButton.setClickable(item.hasMoreSelect());
if (item.hasMoreSelect()) {
List<Map<String, String>> list = new ArrayList<>();
for (Object obj : item.getItems()) {
Map<String, String> map = new HashMap<>();
map.put("text", obj.toString());
list.add(map);
}
// Kotlin: item.getItems().map { "text" to it.toString() }
ListAdapter adapter = new SimpleAdapter(getPopupThemedContext(), list, R.layout.breadcrumbs_view_dropdown_item, new String[]{"text"}, new int[]{android.R.id.text1});
popupWindow.setAdapter(adapter);
popupWindow.setWidth(ViewUtils.measureContentWidth(getPopupThemedContext(), adapter));
imageButton.setOnTouchListener(popupWindow.createDragToOpenListener(imageButton));
} else {
imageButton.setOnTouchListener(null);
}
}
private void createPopupWindow() {
popupWindow = new ListPopupWindow(getPopupThemedContext());
popupWindow.setAnchorView(imageButton);
popupWindow.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> adapterView, View view, int i, long l) {
if (callback != null) {
callback.onItemChange(parent, getAdapterPosition() / 2, getItems().get(getAdapterPosition() / 2 + 1).getItems().get(i));
popupWindow.dismiss();
}
}
});
imageButton.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
popupWindow.setVerticalOffset(-imageButton.getMeasuredHeight() + DROPDOWN_OFFSET_Y_FIX);
imageButton.getViewTreeObserver().removeOnGlobalLayoutListener(this);
}
});
}
}
class ItemHolder<T> extends RecyclerView.ViewHolder {
T item;
ItemHolder(View itemView) {
super(itemView);
}
public void setItem(@NonNull T item) {
this.item = item;
}
Context getContext() {
return itemView.getContext();
}
Context getPopupThemedContext() {
return mPopupThemeId != -1 ? new ContextThemeWrapper(getContext(), mPopupThemeId) : getContext();
}
}
}
|
Cellular and network models for intrathalamic augmenting responses during 10-Hz stimulation. Repetitive stimulation of the thalamus at 7-14 Hz evokes responses of increasing amplitude in the thalamus and the areas of the neocortex to which the stimulated foci project. Possible mechanisms underlying the thalamic augmenting responses during repetitive stimulation were investigated with computer models of interacting thalamocortical (TC) and thalamic reticular (RE) cells. The ionic currents in these cells were modeled with Hodgkin-Huxley type of kinetics, and the results of the model were compared with in vivo thalamic recordings from decorticated cats. The simplest network model demonstrating an augmenting response was a single pair of coupled RE and TC cells, in which RE-induced inhibitory postsynaptic potentials (IPSPs) in the TC cell led to progressive deinactivation of a low-threshold Ca2+ current. The augmenting responses in two reciprocally interacting chains of RE and TC cells depended also on gamma-aminobutyric acid-B (GABAB) IPSPs. Lateral GABAA inhibition between identical RE cells, which weakened bursts in these cells, diminished GABAB IPSPs and delayed the augmenting response in TC cells. The results of these simulations show that the interplay between existing mechanisms in the thalamus explains the basic properties of the intrathalamic augmenting responses. |
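The ionic currents in the abstract above are described by Hodgkin-Huxley-type kinetics. Purely as a generic illustration of that formalism (one leak current plus one gated current with made-up parameters; none of the TC/RE currents, synaptic models or parameter values of the study are reproduced), the membrane equation can be integrated with forward Euler:
import numpy as np

def simulate(T=200.0, dt=0.05, I_inj=0.5):
    """Generic Hodgkin-Huxley-type integration sketch: leak current plus one
    voltage-gated current with first-order gating kinetics."""
    C, g_L, E_L = 1.0, 0.05, -70.0           # capacitance and leak (arbitrary units)
    g_x, E_x = 1.0, 50.0                     # a generic gated current
    x_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 40.0) / 5.0))   # steady-state activation
    tau_x = lambda V: 2.0                    # gating time constant (ms), constant for brevity
    n = int(T / dt)
    V, x = -70.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        I_ion = g_L * (V - E_L) + g_x * x * (V - E_x)
        V += dt * (-I_ion + I_inj) / C       # membrane equation
        x += dt * (x_inf(V) - x) / tau_x(V)  # first-order gating kinetics
        trace[i] = V
    return trace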
GREENBELT, Md. (WUSA9) -- The United States Attorney for the District of Maryland Rod J. Rosenstein announced Wednesday that a 28-year-old man pleaded guilty to conspiring to commit an armed robbery of an armored truck in 2012.
Officials said, Adrian Baldwin, 28, of D.C. conspired with Damione Lewis, Delacey Brown, Taurian Miller, and others to rob an armored truck in mid November at a bank located in Hyattsville.
According to officials, Baldwin was going to take part in the robbery and share the money.
On November 21, 2012, a total of $272,956 was picked up in an armored car from the bank located in Hyattsville. Officials said, Baldwin, along with the other conspirators went up to the employee, took the money to the vans and drove off. They were carrying firearms at the time of the incident, officials said.
The money was split up between the conspirators. According to officials, Baldwin said he used the money to buy a 2002 Ford Explorer. He will have to turnover the vehicle as part of his plea agreement, officials said.
He faces at least 20 years in prison, and his sentencing is scheduled for November 10, 2014.
The other conspirators have also all pleaded guilty in the robbery. They are all awaiting sentencing, officials said. |
The Effect of Writing Gratitude in Buku Syukur Beta on Depression Severity in Type-2 Diabetes Mellitus Patients According to the World Health Organization (WHO), diabetes mellitus (DM) is the 6th and 5th leading cause of death worldwide and in Indonesia, respectively. Compared with non-diabetic individuals, patients with DM are reported to be about 1.43 times more likely to suffer from comorbid depression. Previous research has shown that writing a gratitude journal every day for three weeks affected pure neural altruism. Therefore, this study aimed to determine the effect of writing expressions of gratitude in Buku Syukur Beta (BSB) on depression severity in type-2 DM patients. This was a single-blind randomized controlled trial (RCT) in twelve type-2 DM patients. The research subjects were members of Program Pengelolaan Penyakit Kronis (Prolanis) at Oepoi Public Health Center, Kupang, Indonesia, mostly classified as elderly. The study had a test group and a control group, each consisting of six patients. For 60 days, the test group wrote in the BSB daily for 10 minutes or 25 sentences, while the control group wrote their daily activities in a diary (Buku Harian Beta). Depression severity was measured before and after treatment with the Patient Health Questionnaire-9 (PHQ-9). The independent samples t-test showed a significant reduction in depression severity, with a p-value of 0.011 (p<0.05). This study concluded that writing expressions of gratitude in the BSB for 60 days reduced depression severity in type-2 DM patients. |
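The group comparison reported above is an independent samples t-test. A minimal sketch with SciPy follows; the score changes listed are placeholders invented for illustration, not the study's data.
from scipy import stats

# Hypothetical PHQ-9 score reductions after 60 days (placeholder values only).
gratitude_group = [6, 5, 7, 4, 6, 5]
diary_group     = [2, 1, 3, 2, 1, 2]

t_stat, p_value = stats.ttest_ind(gratitude_group, diary_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # reject H0 if p < 0.05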
#ifndef OCCA_C_MEMORY_HEADER
#define OCCA_C_MEMORY_HEADER
#include <occa/c/defines.h>
#include <occa/c/types.h>
OCCA_START_EXTERN_C
OCCA_LFUNC bool OCCA_RFUNC occaMemoryIsInitialized(occaMemory memory);
OCCA_LFUNC void* OCCA_RFUNC occaMemoryPtr(occaMemory memory,
occaProperties props);
OCCA_LFUNC occaDevice OCCA_RFUNC occaMemoryGetDevice(occaMemory memory);
OCCA_LFUNC occaProperties OCCA_RFUNC occaMemoryGetProperties(occaMemory memory);
OCCA_LFUNC occaUDim_t OCCA_RFUNC occaMemorySize(occaMemory memory);
OCCA_LFUNC occaMemory OCCA_RFUNC occaMemorySlice(occaMemory memory,
const occaDim_t offset,
const occaDim_t bytes);
//---[ UVA ]----------------------------
OCCA_LFUNC bool OCCA_RFUNC occaMemoryIsManaged(occaMemory memory);
OCCA_LFUNC bool OCCA_RFUNC occaMemoryInDevice(occaMemory memory);
OCCA_LFUNC bool OCCA_RFUNC occaMemoryIsStale(occaMemory memory);
OCCA_LFUNC void OCCA_RFUNC occaMemoryStartManaging(occaMemory memory);
OCCA_LFUNC void OCCA_RFUNC occaMemoryStopManaging(occaMemory memory);
OCCA_LFUNC void OCCA_RFUNC occaMemorySyncToDevice(occaMemory memory,
const occaDim_t bytes,
const occaDim_t offset);
OCCA_LFUNC void OCCA_RFUNC occaMemorySyncToHost(occaMemory memory,
const occaDim_t bytes,
const occaDim_t offset);
//======================================
OCCA_LFUNC void OCCA_RFUNC occaMemcpy(void *dest,
const void *src,
const occaUDim_t bytes,
occaProperties props);
OCCA_LFUNC void OCCA_RFUNC occaCopyMemToMem(occaMemory dest, occaMemory src,
const occaUDim_t bytes,
const occaUDim_t destOffset,
const occaUDim_t srcOffset,
occaProperties props);
OCCA_LFUNC void OCCA_RFUNC occaCopyPtrToMem(occaMemory dest,
const void *src,
const occaUDim_t bytes,
const occaUDim_t offset,
occaProperties props);
OCCA_LFUNC void OCCA_RFUNC occaCopyMemToPtr(void *dest,
occaMemory src,
const occaUDim_t bytes,
const occaUDim_t offset,
occaProperties props);
OCCA_LFUNC occaMemory OCCA_RFUNC occaMemoryClone(occaMemory memory);
OCCA_LFUNC void OCCA_RFUNC occaMemoryDetach(occaMemory memory);
OCCA_LFUNC occaMemory OCCA_RFUNC occaWrapCpuMemory(occaDevice device,
void *ptr,
occaUDim_t bytes,
occaProperties props);
OCCA_END_EXTERN_C
#endif
|
Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system Microscopic vision systems with a stereo light microscope (SLM) have been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for the SLM vision system. First, a solution to capture and align the image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that w1 (w2) and Z of stereo images in a sequence have a linear relationship; hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, is suitable for the SLM vision system, and is very helpful for avoiding image blur caused by the big vertical size of small objects. Microsc. Res. Tech. 79:408–421, 2016. © 2016 Wiley Periodicals, Inc. |
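A common baseline for the wavelet-domain fusion described in the abstract above keeps the largest-magnitude detail coefficient from either image and averages the approximation coefficients. The PyWavelets sketch below illustrates that generic rule for a pair of registered images; the wavelet, decomposition level and fusion rules are assumptions and not the rules proposed in the article.
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered grayscale images of the same scene taken at
    different focus depths: max-abs rule for the detail coefficients,
    averaging for the approximation."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # approximation band
    for da, db in zip(ca[1:], cb[1:]):              # detail triples (H, V, D)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)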
/*
* Copyright 2015 <NAME>. (http://www.onehippo.com)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.hippoecm.hst.pagecomposer.jaxrs.services;
import java.util.concurrent.Callable;
import javax.servlet.http.HttpSession;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.apache.commons.lang.StringUtils;
import org.hippoecm.hst.configuration.hosting.Mount;
import org.hippoecm.hst.configuration.hosting.VirtualHost;
import org.hippoecm.hst.core.container.ContainerConstants;
import org.hippoecm.hst.core.request.ResolvedMount;
import org.hippoecm.hst.pagecomposer.jaxrs.model.treepicker.AbstractTreePickerRepresentation;
import org.hippoecm.repository.api.HippoNodeType;
import static org.hippoecm.hst.pagecomposer.jaxrs.model.treepicker.DocumentTreePickerRepresentation.representExpandedParentTree;
import static org.hippoecm.hst.pagecomposer.jaxrs.model.treepicker.DocumentTreePickerRepresentation.representRequestContentNode;
@Path("/"+ HippoNodeType.NT_DOCUMENT+"/")
@Produces(MediaType.APPLICATION_JSON)
public class HippoDocumentResource extends AbstractConfigResource {
@GET
@Path("/picker")
public Response getRoot() {
return tryGet(new Callable<Response>() {
@Override
public Response call() throws Exception {
final AbstractTreePickerRepresentation representation;
representation = representRequestContentNode(getPageComposerContextService());
return ok("Folder loaded successfully", representation);
}
});
}
/**
* @param siteMapItemRefIdOrPath
* @return the rest response to create the client tree for <code>siteMapPathInfo</code> : the response contains all
* ancestor nodes + their siblings up to the 'channel content root node' plus the siblings for <code>siteMapPathInfo</code>
* plus its direct children.
*/
@GET
@Path("/picker/{siteMapItemRefIdOrPath: .*}")
public Response get(final @PathParam("siteMapItemRefIdOrPath") String siteMapItemRefIdOrPath) {
return tryGet(new Callable<Response>() {
@Override
public Response call() throws Exception {
final AbstractTreePickerRepresentation representation;
if (StringUtils.isEmpty(siteMapItemRefIdOrPath)) {
representation = representRequestContentNode(getPageComposerContextService());
} else {
// find first the mount for current request
HttpSession session = getPageComposerContextService().getRequestContext().getServletRequest().getSession();
String renderingHost = (String)session.getAttribute(ContainerConstants.RENDERING_HOST);
final VirtualHost virtualHost = getPageComposerContextService().getRequestContext().getResolvedMount().getMount().getVirtualHost();
final Mount editingMount = getPageComposerContextService().getEditingMount();
final ResolvedMount resolvedMount = virtualHost.getVirtualHosts().matchMount(renderingHost, null, editingMount.getMountPath());
representation = representExpandedParentTree(getPageComposerContextService(),resolvedMount, siteMapItemRefIdOrPath);
}
return ok("Folder loaded successfully", representation);
}
});
}
}
|
Pulmonary Hypertension in Heart Failure Patients

The development of pulmonary hypertension (PH) in patients with heart failure is associated with increased morbidity and mortality. In this article, the authors examine recent changes to the definition of PH in the setting of left heart disease (PH-LHD), and discuss its epidemiology, pathophysiology and prognosis. They also explore the complexities of diagnosing PH-LHD and the current evidence for the use of medical therapies, promising clinical trials and the role of left ventricular assist device and transplantation.

The pathophysiology of PH-LHD is thought to be a continuum, where the initial transmission of elevated left-sided filling pressures into the pulmonary circulation is followed by superimposed components, such as pulmonary vasoconstriction, decreased nitric oxide availability and desensitisation to natriuretic peptide-induced vasodilatation. This process leads to pulmonary vascular remodelling including thickening of the alveolar-capillary membrane, medial hypertrophy, intimal and adventitial fibrosis and small vessel luminal occlusion (Figure 1). 3 More recently, Fayyaz et al. studied pulmonary arterial and venous remodelling in autopsy specimens from patients with PH-HFpEF and PH-HFrEF compared with normal controls and those with pulmonary veno-occlusive disease (PVOD). They found that more venous intimal thickening was present compared with arterial intimal thickening in those with PH-LHD, and this was similar to changes seen in people with PVOD. These changes correlated with PH severity, suggesting that the pulmonary venous remodelling promoted and dictated the development and severity of PH in the HF population. 13 Additionally, recent work has further assessed the impact of left-sided valvular disease on PH, with nearly 50% of patients with severe aortic stenosis having PH, of whom 12% had CpcPH, which was associated with higher PAWP, lower pulmonary arterial compliance (PAC) and was a significant predictor of mortality. 14

Diagnosis

Echocardiography

Echocardiography is one of the mainstays of investigation of LHD in general and efforts have been made to diagnose and monitor PH-LHD using routine echocardiography. This has been well summarised in a recent review by Maeder et al. 9,15,16 Pulmonary artery systolic pressure (PASP), the most well-known parameter, can be estimated by measuring peak tricuspid regurgitation velocity, applying the modified Bernoulli equation (4v²) and adding estimated right atrial pressure (most commonly using inferior vena cava size and collapsibility). Studies have shown a good correlation with invasive haemodynamic measurements, although PASP estimates often have reduced accuracy due to: the technical ability required to acquire quality images; problems with tricuspid regurgitation velocity (low, absent or of poor quality, or with severe tricuspid regurgitation); or when right atrial volume is unable to be assessed or is inaccurately estimated. 20 Additionally, PASP alone cannot determine the underlying haemodynamic PH phenotype. 21 Therefore, other more reliable and informative measures for assessment have been evaluated for the PH-LHD population. There has been a focus on assessing the RV-PA interaction and/or afterload elevation in people with PH-LHD. This includes the assessment of septal flattening (particularly in systole), RV dilatation, RV to LV ratio, RV apex angle, RV systolic impairment (as measured by RV fractional area change or tricuspid annular plane systolic excursion [TAPSE]) and RV longitudinal strain (as measured by 2D and 3D speckle tracking) (Figure 2). 22 Furthermore, the right ventricular outflow tract (RVOT) pulse wave Doppler profile contains several parameters to inform the underlying haemodynamic profile of a given patient or population with PH-LHD, including acceleration time, velocity time integral (VTI) and presence/absence/timing of systolic notching. 21,23 These right heart metrics should be evaluated in conjunction with standard left heart metrics, including LA size, estimated LA pressure (by mitral inflow and tissue Doppler assessment), LV size and function, and valvular dysfunction, which in turn can then aid in distinguishing IpcPH and CpcPH. 16 The ratio of TAPSE/PASP has been described as an index of right ventriculo-arterial coupling independent of LV dysfunction, and has been validated with invasive haemodynamics by Tello et al. 24 With recent attention to PAC across the PH spectrum, including in PH-LHD, with increased pulsatile load (secondary to elevated PAWP) reducing PAC, we have described a non-invasive surrogate for PAC using the RVOT-VTI/PASP relationship, which we showed stratifies patients with IpcPH and CpcPH as compared with pulmonary arterial hypertension (PAH), and correlated with the 6-minute walk distance. 14

Right Heart Catheterisation

In patients with suspected PH-LHD, right heart catheterisation (RHC) is required to prove the diagnosis, to differentiate between precapillary PH (PAH) and PH-LHD, and to further distinguish IpcPH and CpcPH. Although the procedure is relatively safe and is now routine practice in most centres, there is a hesitancy to apply this as routine in all PH-LHD patients, given its invasive nature and potential for misinterpretation of the data. Our recommendation is that RHC should be performed in the following circumstances: diagnostic uncertainty based on noninvasive testing; disproportionate symptoms compared with echocardiographic findings; progressive symptoms despite optimal medical therapy; and when advanced therapies are planned, especially transplantation or mechanical circulatory support. One major drawback with RHC in this patient population is that the PAWP can be measured inaccurately: the wedge position should be confirmed (with aspiration and assessment of PAWP blood) and the PAWP should be measured at the end of the expiratory phase of normal respiration to minimise respirophasic variations. 30,31 If there are still concerns about the accuracy of the PAWP measurement, then direct measurement of the left ventricular end-diastolic pressure (LVEDP) can be performed. However, it must be remembered that LVEDP is a measure of LV preload and LV diastolic compliance, and is not a true surrogate for PAWP, which is both the best reflection of the total effect of LHD on the pulmonary circulation and has been shown to be a better predictor of outcomes, especially in the HFpEF population. In addition to standard measurements, other procedural techniques may be required in patients with PH-LHD. These patients are frequently on diuretic therapy, which can lead to artificially lower PAWP.

Pulmonary Hypertension-specific Therapy

As PAH and PH-LHD share a number of common pathophysiological pathways and neurohumoral perturbations, there have been a number of studies performed to assess the efficacy of PH-specific therapy in the PH-LHD population. 39 In general, given the lack of positive trial data along with the potential increased risk of pulmonary oedema in the setting of improved trans-pulmonary flow, the use of PH-specific therapy is not recommended. We have summarised these studies in Tables 2 and 3.

Heart Failure with Preserved Ejection Fraction

Given the paucity of other treatment options for heart failure with preserved ejection fraction, several studies have been undertaken in this population. The largest study has been the Phosphodiesterase-5

Heart Failure with Reduced Ejection Fraction

A number of trials have been performed in the broader HFrEF population, but at this stage data are lacking to support the use of PH-specific therapy. Initial clinical trials using bosentan, IV prostacyclins and darusentan (a selective endothelin AT antagonist) were all negative. A major criticism of these studies is that they failed to focus on the PH-LHD population and often used higher doses of these therapies than those used in the PAH population. More focused studies have been performed to assess the potential

However, there is a subgroup that has persistent CpcPH after LVAD implantation and there is no consensus on treatment for this group. There have been several small trials evaluating the role of sildenafil after LVAD placement. In a single-centre study, Tedford et al. showed sildenafil treatment led to a significant reduction in mPAP, improved cardiac output and a reduction in PVR in LVAD patients with residual elevated pulmonary pressures more than 1 month post implant. 63 Other agents, including bosentan, have been evaluated. 64 The Clinical Study to Assess the Efficacy and Safety of Macitentan in Patients With Pulmonary Hypertension After Left Ventricular Assist Device Implantation (SOPRANO; NCT02554903) study is ongoing. Thus, while the data suggest that LVAD therapy is associated with improvements in cardiopulmonary haemodynamics acutely and over time, there are patients who have persistent PH and/or RV failure (early or late) after LVAD implantation. While several smaller trials suggest haemodynamic benefit from the use of PH-specific therapy, and we use such therapy in isolated cases, there is currently a lack of large randomised data to support its use more broadly across this population.

Transplantation

Orthotopic heart transplantation (OHT) is still considered the definitive treatment for end-stage HFrEF. Unfortunately, patients with PH-LHD have worse outcomes post-transplantation; specifically, those patients with a PVR >2.5 WU who do not demonstrate reversibility with vasodilator challenge have a significantly higher risk of mortality due to RV failure at 3 months (33%; 14% related to RV failure versus 6%). 65 This was further shown in an analysis of the United Network for Organ Sharing registry, which showed pre-transplant PVR >2.5 WU was an independent predictor of mortality, although the degree of elevation of PVR modestly increased mortality in a non-linear manner. 66 These studies demonstrate that the evaluation of PH-LHD in the context of OHT must be both dynamic and repeated, and that a stepwise approach to the transplant candidate with an elevated PVR is vital in patients where the PVR remains elevated. Without a viable mechanical support option, as may be the case in the congenital population, selected patients may be eligible for combined heart-lung transplantation. This option, however, is not without significant pitfalls, because this procedure is performed at only a select number of centres and has a high postoperative morbidity and mortality when compared with OHT. 67

Conclusion

PH-LHD is a major problem for patients with both HFrEF and HFpEF and limited targeted treatment options have proven beneficial for this population. Although trials to date have been negative, the combination of more nuanced phenotyping of this patient population combined with novel modalities is providing hope of advances in treatment. |
The former 'Bachelor' star posted bail on Tuesday. He has been charged with leaving the scene of a deadly crash, a felony.
Chris Soules' legal team is speaking out, and asking the public not to "prejudge" the former Bachelor until they know all the facts.
Lawyer Brandon Brown "asked that members of the public do not prejudge this case based on media coverage" in a statement released Thursday, adding: "Soules’ 911 call, released yesterday, proved that the initial knee-jerk coverage of this accident was incorrect."
On Monday night, Soules' pickup rear-ended a tractor near Aurora, a northeast Iowa town located about 15 miles south of Soules' farm in Arlington and 65 miles north of Iowa City. Both vehicles were sent into roadside ditches.
The tractor driver, Kenneth E. Mosher, 66, was taken by ambulance to a local hospital, where he was pronounced dead.
After the crash, Soules, 35, allegedly walked away from the scene and was picked up by an unidentified person. The former TV star was taken into custody a couple of hours later at his home.
Soules has since been charged with leaving the scene of a deadly crash, a felony. He was released after posting a $10,000 bail Tuesday.
Audio released Wednesday shows that before Soules walked away, he called 911 from the scene of the fatal crash and told the dispatcher he had hit a man on a tractor.
"He's not conscious," said Soules, who identified himself on the call when asked by the dispatcher. Asked if the victim was breathing, Soules says he can't tell and later says that he "doesn't appear to be." The dispatcher at one point asks Soules if he knows CPR, and he responds he doesn't.
Soules can be heard asking other people at the scene if they know CPR. Later, a person on the recording can be heard counting out what sounds like CPR.
Authorities said Soules had alcoholic beverages or containers with him, but they said they were still investigating whether he was impaired.
Soules has retained attorneys Alfredo Parrish, Brandon Brown, and Gina Messamer of Des Moines law firm Parrish Kruidenier to represent him.
"While initial reports suggested Soules fled the scene, the 911 call confirms that Soules in fact was the one who contacted law enforcement immediately," his legal team said in a statement.
"During the call, he clearly identified himself and explained his role in the terrible accident. Soules attempted to resuscitate Mr. Mosher and remained on the scene with him until emergency medical personnel arrived. Soules’ attorneys are exploring the possibility of a gag order to prevent further misinformation from prejudicing Soules’ right to a fair trial."
The statement also said Soules' legal team is working to gather all of the evidence and review the facts of the crash. "His attorneys are confident that once all the evidence is made public, it will show Soules acted reasonably and did everything in his power to provide aid to Mr. Mosher."
A preliminary hearing is set for May 2. If convicted of the felony charge against him, Soules could be sentenced to as much as five years in prison under Iowa law.
Soules starred as ABC's The Bachelor in 2015. |
Q:
Selecting colors suitable for color, greyscale, and black/white printing
I'm developing technical graphics for a printed/online scientific publication using Adobe Illustrator CS5. I am trying to select a color palette that is pleasant when viewed in color and whose colors are distinguishable when printed in greyscale.
As an anti-example, suppose I have two colors: #942724 (red) and #265791 (blue). They are pleasant to look at, but when these are converted to greyscale, they are completely indistinguishable.
I see that Illustrator comes with many Swatch Libraries and the Adobe Color Wheel provides six different color rules (e.g. Monochromatic, Analogous, Triad). It's easy to select, say, two colors that look good in color and are distinguishable in greyscale. But I find it much more difficult with 3+ colors. Is there a general technique/rule that can be applied? How does one select a color palette that is versatile enough to look good in color and be distinguishable in greyscale (or even B/W) print?
A:
In general, you can mentally manage this by following these rules:
First, think in terms of Hue/Saturation/Value instead of RGB or CMYK.
Value represents how much black is in the color, or how 'dark' an image is. The lower the Value, the darker the color. Value has by far the most effect on the appearance of a color in the B&W space, and you can generally assume that colors with widely different Values will be very visually distinct.
Hue is the least important setting for black & white, and represents the color being used for the... color.
Saturation is the second most important setting for black & white: the lower the saturation, the more white is mixed into the color, while higher values appear closer to 'pure' colors. For B&W, generally the higher the saturation, the darker the B&W display.
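A quick way to sanity-check a palette against the Value rule is to convert each candidate color to an approximate grey level and make sure the levels are well separated. For example (the third color and the gap threshold here are just placeholders to illustrate):

    # Estimate how distinguishable a palette will be in greyscale by comparing
    # approximate luminance values (Rec. 601 luma weights, 0-255 scale).
    def luminance(hex_color):
        hex_color = hex_color.lstrip("#")
        r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
        return 0.299 * r + 0.587 * g + 0.114 * b

    palette = ["#942724", "#265791", "#E8B430"]  # last color is a made-up example
    greys = sorted(luminance(c) for c in palette)
    MIN_GAP = 40  # arbitrary threshold; tune for your printer and paper
    for darker, lighter in zip(greys, greys[1:]):
        if lighter - darker < MIN_GAP:
            print(f"Too close in greyscale: {darker:.0f} vs {lighter:.0f}")

The two colors from the question land at roughly 71 and 79 on this scale, which is exactly why they merge once the hue is gone.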
Keep in mind, however, that blue hues are often printed darker than yellow hues at the same value/saturation. I generally use the rule that the higher the saturation, the higher the variance between blue and yellow:
Inkjet printer printout of just the color lines from the above image:
Random Aside: If you use the 'Black & White' filter in Photoshop, the default settings take into account the visual variance between different hues. If you set all the settings to the same value and apply it to the above image, the color lines will all be the same. |
// src/app/inventario/inventario.module.ts
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { Ng2SmartTableModule } from 'ng2-smart-table';
import { InventarioRoutingModule } from './inventario-routing.module';
import { ReactiveFormsModule } from '@angular/forms';
import { ComprasComponent } from './pages/compras/compras.component';
import { AlmacenComponent } from './pages/almacen/almacen.component';
import { FinanzasComponent } from './pages/finanzas/finanzas.component';
import { VentasComponent } from './pages/ventas/ventas.component';
import { InicioComponent } from './pages/inicio/inicio.component';
import { AgregarComponent } from './pages/agregar/agregar.component';
import { ProductosComponent } from './pages/productos/productos.component';
import { SalidasComponent } from './pages/salidas/salidas.component';
import { EntradasComponent } from './pages/entradas/entradas.component';
import { LogisticaComponent } from './pages/logistica/logistica.component';
// Modal Component
import { ModalModule } from 'ngx-bootstrap/modal';
@NgModule({
declarations: [
ComprasComponent,
AlmacenComponent,
FinanzasComponent,
VentasComponent,
InicioComponent,
AgregarComponent,
ProductosComponent,
SalidasComponent,
EntradasComponent,
LogisticaComponent
],
imports: [
CommonModule,
ReactiveFormsModule,
InventarioRoutingModule,
Ng2SmartTableModule,
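// forRoot() registers the modal service providers ngx-bootstrap needs for this module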
ModalModule.forRoot()
]
})
export class InventarioModule { }
|
// repository: izhevskoye/oligarchy
use crate::game::{
state_manager::SaveGameEvent,
ui::state::{ConfirmDialogState, MainMenuState, SaveGameList},
AppState,
};
use bevy::prelude::*;
use bevy_egui::{
egui::{self, Align2},
EguiContext,
};
use std::fs::remove_file;
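/// Renders the "Are you sure?" confirmation window and, on "Yes", carries out the
/// pending action: delete or save a save game, return to the main menu, or quit.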
pub fn confirm_dialog(
egui_context: ResMut<EguiContext>,
mut app_state: ResMut<State<AppState>>,
mut menu_state: ResMut<State<MainMenuState>>,
mut save_game_list: ResMut<SaveGameList>,
confirm_dialog: Res<ConfirmDialogState>,
mut save_game: EventWriter<SaveGameEvent>,
) {
egui::Window::new("Confirmation")
.anchor(Align2::CENTER_CENTER, [0.0, 0.0])
.default_width(150.0)
.resizable(false)
.collapsible(false)
.show(egui_context.ctx(), |ui| {
ui.label("Are you sure?");
ui.horizontal(|ui| {
if ui.small_button("No").clicked() {
menu_state.pop().unwrap();
}
if ui.small_button("Yes").clicked() {
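// Close the confirmation dialog first, then carry out whichever action was pending.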
let _ = menu_state.pop();
match confirm_dialog.clone() {
ConfirmDialogState::DeleteFile(file_name) => {
let _ = remove_file(&file_name);
save_game_list.update_list();
}
ConfirmDialogState::SaveFile(file_name) => {
save_game.send(SaveGameEvent { file_name });
let _ = menu_state.pop();
}
ConfirmDialogState::ExitGame => {
let _ = app_state.overwrite_replace(AppState::MainMenu);
}
ConfirmDialogState::ExitProgram => {
std::process::exit(0);
}
}
}
});
});
}
|
BRUSSELS—In one weekend, two different European Union member countries saw people take to the streets in protest for two very different reasons.
In the United Kingdom, over one million people marched through central London, ending at the Houses of Parliament, calling for a second public referendum to decide whether Britain should withdraw from EU membership. Some called for a complete revocation of Article 50, stopping Brexit altogether.
Walls and borders, sounds familiar, doesn’t it?
This past weekend was a visual representation of the complicated phenomenon that is the EU and membership in the bloc. The union is itself the product of capitalist class efforts to create a single economic market after World War II. Today, the EU acts as bank, judicial system, economic enforcer, and watchdog military alliance. Dominated by Germany’s central bank, the EU’s austerity policies have created millions of discontented workers across the continent, particularly in recession-hit countries like Greece, Portugal, Spain, and Italy. The same is true in Britain.
At the same time, a more fluid and cosmopolitan European identity has been forged for just as many millions. They abhor the anti-immigrant rhetoric employed by the EU’s right-wing critics and oppose any return to hard borders or the nationalist sentiments of the past.
The protests in London and Brussels demonstrated the reality that arguments for and against the EU both carry some merit, depending on where you live and how hard EU austerity measures have hit your economy.
And speaking of complications, back in the U.K., British lawmakers wrested control of the Brexit process from Prime Minister Theresa May Monday and will now seek to decide how the U.K. withdraws from the EU—or not.
May admitted early Monday that her Brexit “Plan B” agreement didn’t have the votes needed to carry a win. After seizing control of the agenda from May, Members of Parliament decided 329 to 302 to schedule a series of votes on alternative Brexit strategies, with options including a second referendum, keeping the U.K. in the EU’s single-market, a no-deal divorce, or canceling the whole thing.
— No-Deal Brexit: The U.K. would leave April 12.
— Rewrite the Irish “backstop” agreement: An amendment to the Withdrawal Agreement that gives the U.K. a unilateral right to exit the so-called Irish backstop, or open border trading between Ireland, an EU member state, and the U.K.’s Northern Ireland. No one wants the return to violence that a hard border could bring, but Brexit advocates fear becoming trapped in a customs union with the EU.
— Soft Brexit/Norway style: The U.K. stays in the European Economic Area and rejoins the European Free Trade Association, giving it access to the EU single market.
— Revoke Article 50 if no withdrawal agreement is reached four days before the April 12 deadline.
MPs will begin voting by paper ballot tonight at about 7:00 p.m. GMT (2:00 p.m. CST).
With the calls for May to resign beginning weeks ago, the reality of the situation just seemed to hit. With the House of Commons prepared to make moves on Brexit, May is scheduled to speak with Conservative Party lawmakers before the vote in one final Hail Mary pass.
It is expected she will indicate a date for her resignation following the meeting if she is able to secure the votes for her deal. |
Two years ago, well before the transfer deadline, I thought Arsenal might finish 5th. They bought Brother Mesut and came 4th.
Last season, again well before the transfer deadline, I thought Arsenal would be 4th. They came 3rd.
This year…
Numero de lo Habitual
Shots Taken Rank: 2nd
Shots Conceded Rank: 4th
Shot Dif Rank: 2nd
xGDif Rank: 3rd
My worry from the 13-14 season was that Arsenal's shot differential had been creeping down for a while. They still created superior chances, but their points output seemed a bit lucky compared to the underlying numbers. The big issue was that Shots For was now in Europa League range, which was slightly embarrassing for a perennial Champions League team.
Enter Alexis Sanchez, Danny Welbeck, and an upgrade at right back, and the shot numbers improved. Not enough to overtake Manchester City, but better now than Liverpool, Chelsea, and United at the very least. Last season was progress.
Everyone knows that Alexis was fantastic last season, but to casual observation Danny Welbeck was pretty bad. 4 measly goals is unimpressive for any Arsenal forward, and especially for one that reportedly cost £16m. (Though with Benteke going for twice that, Welbeck looks like a positive bargain nowadays.)
However, dig below the surface and you see a different story. Welbeck was 9th in the league in expected goals per 90 at nearly half a goal per game (players had to play at least 9 full 90s to qualify). Add in some quality passing in the final third, and you get one of the best all-around forwards in the Premier League.
Who unfortunately performed more like Frazier Campbell.
If I were running Arsenal, I’d keep the faith with Welbz. Tell the lad to buck up, keep doing what he’s doing, and the goals will eventually come.
Then again, the numbers thought Balotelli was pretty damned good last season too, so what the hell do they know?
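Quick aside for anyone new to the per-90 numbers above: the maths is nothing fancy, just the season total scaled to 90-minute chunks, with a minimum-minutes cut-off so tiny samples don't sneak in. A rough sketch with made-up figures (not Welbeck's actual numbers):

    # Illustrative per-90 calculation; the inputs below are invented for the example.
    def per_90(total, minutes):
        return total / (minutes / 90)

    minutes_played = 1800   # hypothetical: twenty full matches
    season_xg = 9.5         # hypothetical expected-goals total
    if minutes_played >= 9 * 90:  # the "at least 9 full 90s" cut-off mentioned above
        print(f"xG per 90: {per_90(season_xg, minutes_played):.2f}")  # roughly half a goal per 90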
Transfers In
Petr Cech – GK – £10m
It has been soooooooooooooo long since Arsenal had a reliable, competent goalkeeper between the posts. I suppose the era before Jens became Mad Jens is the last time I felt as a fan that the ball was in safe hands. For this reason, I am strangely excited about Cech. It doesn’t make up for Cesc being in blue or for the Cashley for William Gallas swap, but viewed by itself, it’s a great piece of business.
While on the topic of goalkeepers, David Ospina was underrated and over-abused last year. Cech is an upgrade, but Arsenal also have a backup good enough to start for most of the clubs in the league*.
*This was not a sarcastic evaluation.
Transfers Out
Absolutely no one important. (Yes, this includes Poldi-bear and Chezzer.)
This truly is a different Arsenal era.
Current Needs
Coquelin is a rich man's Flamini, while Flamini at this point in his career is a bum's garbage fire. Therefore a defensive midfielder that can destroy, ping long passes, and stay healthy is a necessity. Preferably one with excellent pace, since counterattacks against Arsenal's center backs are a perennial Achilles heel. Unfortunately, this has been a clear need for the last 3-4 seasons, and I have no idea if Wenger will fill the role.
I also thought they could use a player like Memphis Depay or Raheem Sterling to add depth at wide forward, and extreme pace on both sides of the pitch. Both players instead went to league rivals. However, at this point I am quibbling – the squad as a whole is the strongest it has been in ages.
Can Arsenal win the league?
I will answer a question with a question: Can they stay healthy?
If yes – if Theo, and Sanchez, and Ramsey, and Arteta, and Ozil, and OX can all stay generally healthy – then Arsenal should definitely be in the mix for a title. Nearly every player on the squad is in their prime, the young guys look amazing, and Wenger’s tactical setup has transitioned from possession purist to surprisingly practical over the last couple of seasons.
The other big question is Welbeck. If Welbeck produces goals like the numbers expect, a title is certainly possible for Arsenal this season. Or I guess if Theo plays through the center and scores like he did at the tail end of last season, you could expect the same. That said, I prefer Arsenal set up with Sanchez left, Welbeck central, Theo right and Mesut as a 10 for maximum threat and destruction.
How often that setup will happen is anyone’s guess.
Conclusion
Normally, this is where I summarize how all the other top teams are expected to do and give you where I expect each team to finish. However, that’s James’s job now, so I will leave that to him. For me, the underlying numbers from last season, combined with transfers so far indicate the title could be a true 4-team race between last season’s CL finishers.
I have not been optimistic about Arsenal in a long, long time. The following lines should be viewed in that light.
I absolutely expect Arsenal to make the Champions League this season.
With a little luck and some Shad Forsythe magic, I think we could see Wenger hoist his first league title since 2003-04.
Predicted finish for Arsenal: 2nd
Bonus Section – Q&A with Knutson
Who the hell are you?
I’m Spartacus!
Wait. That’s not right. The byline says Ted Knutson. We’ll go with that.
For those who are recent fans of StatsBomb (recent here meaning in the last year), I co-founded the site and I still pay all the bills. Oh, and I am the designer of those silly radar charts you still see from time to time.
Why don’t you write here any more?
Because a year ago I got a spiffy job working in football. While I technically could continue writing about leagues our teams are not involved in, I decided it would be very difficult not to let my current work seep into what I write about.
Oh, and we had another kid, so any and all free time went right out the window anyway.
What do you actually do?
That one is complicated. Since the restructuring in February, I’ve mostly been focused on player recruitment, as well as building statistical visualizations and metrics to help our teams better understand football. Prior to that was less work in recruitment and more stuff I can’t talk about.
The quick answer that most people seem to understand is that I pretty much get to play Football Manager with real life.
How did you get that job?
People read my work on this here site here. They apparently liked it. We had lunch. They did not hate me. Then we had dinner. Despite spending hours of time with me, Matthew Benham hired me anyway.
That was a year ago this week.
Do you have previous experience in football?
Nope. Just my work here on the site. My prior job experience was working for most of a decade at Pinnaclesports.com, where I did all sorts of stuff including acting as lead trader for the English Premier League.
How do I get a job like yours?
Write.
A lot.
Like, a metric fuckton amount. (Check my StatsBomb archives for proof.)
Constantly ask and examine new questions about football and data. Don’t be afraid to make mistakes, and don’t let perfection be the enemy of progress. Also listen to criticism about your work and improve it.
Understand that visualizations are really, really important to help your work be interesting to a broader audience, but also to help it be understood by everyone. To help with this, read the important books by Tufte, Few, and Alberto Cairo at least.
That’s pretty much all I know how to do to get you noticed: think hard, work hard, write in public, and be positive and social interacting with people about your work.
THEN, once you have a job, work harder.
Why?
There are probably quite a number of you who would love to have a job just like mine, right? And yet I would like to keep my job! Some of you are probably smarter than I am. Many of you probably know more about stats, or programming, or possibly football.
I can’t control any of that.
The only thing I can control is how hard I work to try and make everything successful.
Be smarter. Work harder.
Tragically boring career advice from someone on the inside.
Do you really have such a big influence on transfer business?
This came out of a quote on a Brentford forum, which in turn was an interpretation made from Michiel de Hoog’s piece on the new Brentford head coach Marinus Dijkhuizen.
“If the enlightening Dutch article about Marinus (forget the magazine but Beesotted widely re tweeted it) is anything to go by, Ted Knutson has a very (and possibly overly) significant input on in-comings’ and outgoings. MD even quoted alluding to such. “
So do I have overly significant input on transfer stuff? The short answer is: no.
The long answer is: tons of people are involved and give feedback in the recruitment process. All of the input is taken, analysed, and then choices are made. My work is part of that. However, I am usually sitting at my desk and able to give quick answers, which is useful for busy head coaches and narrative devices.
Why don’t you talk to the media? Or speak at conferences? Or use your Twitter account more than like twice a month?
Because talking about what you actually do for work to try and give football teams an edge is a Bad Idea ™.
Think of it this way – Billy Beane has been the General Manager of the Oakland A’s since 1998. That’s SEVENTEEN YEARS. If he found an edge early and never talked about it, it could potentially be valid the entire time.
How long do I want to work in football? I don’t know, but seventeen years sounds like a good start.
Maybe I’ll be on a panel at a conference next year. Or the year after. Or in a generation.
What kind of stats do you use to scout players?
The usual ones. Some unusual ones as well. And an awful lot of actually watching players play football.
But hey, yeah, carry on with this assumption that the stats guys hate football and never watch games.
Is Brentford going to be good this season?
Man, I fucking hope so.
I love the guys I work with, I love the job, and Matthew, Phil, and Rasmus are three of the best people I could imagine working for. I’d certainly like us to succeed so that all this hard work pays off and I can hopefully stick around for a while.
That said, it’s football. There are no guarantees.
I was lucky enough to get to hold the trophy in Midtjylland this June. Our Danish team has an amazing culture and is filled with good people and players. I’d like to have that chance many more times in my career, which means helping both clubs succeed as often as possible.
Are Brentford going to sign any more players?
Now that would be telling…
What about Midtjylland?
*cursing*
*rude gestures*
In your opinion what’s the best transfer signing in the Premier League this season?
That one is easy. So easy, in fact, that I wrote about it last summer before I got hired. I like to work ahead.
Finally… Why was Konstantin Kerschbaumer’s scouting nickname “Chris Palmer”?
I mentioned this on Twitter about a month ago. Philipp Hofmann’s scouting nickname was Sturm Tank (which – AHEM BRENTFORD FANS – is far superior to The Hoff, but has thus far failed to catch on). I also mentioned KK’s scouting nickname, but it needed more than 140 characters to explain.
Back when we were reviewing Kerschbaumer, I asked Ricardo Larrandart, Master of Agents to get the agent information on Kerschbaumer, so that we could add it to the dossier.
The next morning, I sat down at my desk and Ricky told me that he got what I needed for “Palmer.”
“Who?”
“Chris Palmer. The guy you told me to get information for.”
*I think really hard. Nothing.*
“Who???”
“Chris. Palmer. You told me to get his agent info last night before you left.”
*light goes on for me*
“I asked for Kersch. Baumer. First name Konstantin.”
*Ricky pauses*
“Then who is Chris Palmer?”
“… I do have to say, I am impressed at your ability to get agent info for guys I assume might be fictional players.”
Fin
Article by Ted Knutson |
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.aliyuncs.mpaas.model.v20200710;
import com.aliyuncs.RpcAcsRequest;
import com.aliyuncs.http.MethodType;
import com.aliyuncs.mpaas.Endpoint;
/**
* @author auto create
* @version
*/
public class QueryMasLocusAnalysisRequest extends RpcAcsRequest<QueryMasLocusAnalysisResponse> {
private String behaviorEndDateTime;
private String userId;
private String autoStartDateTime;
private String autoEndDateTime;
private String tenantId;
private Long pageSize;
private String behaviorStartDateTime;
private String utdid;
private Long pageNo;
private String appId;
private String workspaceId;
public QueryMasLocusAnalysisRequest() {
super("mPaaS", "2020-07-10", "QueryMasLocusAnalysis");
setMethod(MethodType.POST);
try {
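// Generated request: wire the mPaaS endpoint data into the base AcsRequest via reflection; any failure is silently ignored.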
com.aliyuncs.AcsRequest.class.getDeclaredField("productEndpointMap").set(this, Endpoint.endpointMap);
com.aliyuncs.AcsRequest.class.getDeclaredField("productEndpointRegional").set(this, Endpoint.endpointRegionalType);
} catch (Exception e) {}
}
public String getBehaviorEndDateTime() {
return this.behaviorEndDateTime;
}
public void setBehaviorEndDateTime(String behaviorEndDateTime) {
this.behaviorEndDateTime = behaviorEndDateTime;
if(behaviorEndDateTime != null){
putBodyParameter("BehaviorEndDateTime", behaviorEndDateTime);
}
}
public String getUserId() {
return this.userId;
}
public void setUserId(String userId) {
this.userId = userId;
if(userId != null){
putBodyParameter("UserId", userId);
}
}
public String getAutoStartDateTime() {
return this.autoStartDateTime;
}
public void setAutoStartDateTime(String autoStartDateTime) {
this.autoStartDateTime = autoStartDateTime;
if(autoStartDateTime != null){
putBodyParameter("AutoStartDateTime", autoStartDateTime);
}
}
public String getAutoEndDateTime() {
return this.autoEndDateTime;
}
public void setAutoEndDateTime(String autoEndDateTime) {
this.autoEndDateTime = autoEndDateTime;
if(autoEndDateTime != null){
putBodyParameter("AutoEndDateTime", autoEndDateTime);
}
}
public String getTenantId() {
return this.tenantId;
}
public void setTenantId(String tenantId) {
this.tenantId = tenantId;
if(tenantId != null){
putBodyParameter("TenantId", tenantId);
}
}
public Long getPageSize() {
return this.pageSize;
}
public void setPageSize(Long pageSize) {
this.pageSize = pageSize;
if(pageSize != null){
putBodyParameter("PageSize", pageSize.toString());
}
}
public String getBehaviorStartDateTime() {
return this.behaviorStartDateTime;
}
public void setBehaviorStartDateTime(String behaviorStartDateTime) {
this.behaviorStartDateTime = behaviorStartDateTime;
if(behaviorStartDateTime != null){
putBodyParameter("BehaviorStartDateTime", behaviorStartDateTime);
}
}
public String getUtdid() {
return this.utdid;
}
public void setUtdid(String utdid) {
this.utdid = utdid;
if(utdid != null){
putBodyParameter("Utdid", utdid);
}
}
public Long getPageNo() {
return this.pageNo;
}
public void setPageNo(Long pageNo) {
this.pageNo = pageNo;
if(pageNo != null){
putBodyParameter("PageNo", pageNo.toString());
}
}
public String getAppId() {
return this.appId;
}
public void setAppId(String appId) {
this.appId = appId;
if(appId != null){
putBodyParameter("AppId", appId);
}
}
public String getWorkspaceId() {
return this.workspaceId;
}
public void setWorkspaceId(String workspaceId) {
this.workspaceId = workspaceId;
if(workspaceId != null){
putBodyParameter("WorkspaceId", workspaceId);
}
}
@Override
public Class<QueryMasLocusAnalysisResponse> getResponseClass() {
return QueryMasLocusAnalysisResponse.class;
}
}
|
Where do we stand on the relationship between tau biomarkers and mild cognitive impairment?

We present an editorial on a recent publication by Amlien et al. in which diffusion tensor imaging (DTI) was used to quantify longitudinal decreases in fractional anisotropy (FA) and increases in radial diffusivity (DR) in patients with mild cognitive impairment (MCI). These longitudinal alterations were found to be greater in MCI patients with high cerebrospinal fluid (CSF) tau levels at baseline, and greater than in healthy controls. Amlien et al. concluded that tau levels were an important early biomarker for predicting the rate of disease progression and outcome. The results of this study are an interesting finding for the possible predictive use of tau levels in MCI. However, in our assessment, the methodology does not support the conclusion that CSF total tau levels are predictive of MCI progression towards a disease state such as Alzheimer's disease (AD). Further longitudinal study, including follow-up neuropsychological assessment and conversion of subjects in the study to AD, is needed to conclude that CSF total tau levels represent a predictive biomarker of MCI progression towards AD. |
Such a brace is disclosed in European patent application 670 152 A1. In the known knee brace, an inflatable padding is used to correct the longitudinal axis of the leg, said padding being disposed directly at the side next to the knee and expanding through inflation such as to exert a pressure directly on the knee. Taking into consideration the position of the knee-distal straps placed around the upper leg and lower leg, the hinged rail, which extends laterally over the knee, exerts a moment on the knee, which, depending on the position of the hinged rail on the inside or outside of the leg, is thus brought towards a knock-knee position or bow-legged position. This results in two effects: firstly, a pressure is exerted directly at the side on the knee and therefore on the pain-sensitive joint capsule; secondly, when the knee is bent, the inflatable padding must slide on the skin surrounding the knee, this, of course, being associated with friction and leading, if the brace is worn for lengthy periods of time and especially during sporting activity, to grazing of the corresponding points on the skin.
A further design of knee brace for correcting the longitudinal axis of the leg is disclosed in U.S. Pat. No. 5,302,169. In this brace, the two arms of a hinged rail extending over upper leg and lower leg are brought by means of adjusting screws into a desired oblique position with respect to a hinged connection consisting in a relatively complex manner of two interconnected ball-joint-like sliding bearings.
With this knee brace, it is inevitable that there will be compressive forces acting on the knee, this having a considerably adverse effect on the wearing of such a brace, particularly when the knee is moved. Furthermore, the screw-type adjustment of the two arms of the hinged rail, accomplished via relatively short lever arms, allows a considerable elastic hysteresis of said arms, with the result that the known knee brace cannot be guaranteed to provide a reliable therapeutically required correction of the leg position.
A further knee brace is disclosed in U.S. Pat. No. 4,796,610, in which attached to the end of the arms of a hinged rail are pads which are in contact with the upper and lower leg. Said knee brace is intended to absorb forces occurring transversely with respect to the leg in the region of the knee joint and thereby to prevent injuries of the kind that repeatedly occur especially during the playing of sports. The knee brace serves, therefore, in particular to prevent sports injuries; it is not designed to bring about a conscious correction of the longitudinal axis of the leg.
In addition, it is known, for example, from EP 0 684 026 A1 to provide inflatable pads at the hinged rails of a knee brace for the purpose of padding the hinged rails against the leg, said inflatable pads being able to be individually inflated by the wearer such that the wearing of the knee brace is as convenient as possible, i.e. the pads serve to adapt the hinged rails individually to the form of the respective leg, particularly also to the change thereof during the wearing of the knee brace over the course of a day, in order in this manner to achieve a particularly good fit between the knee brace and the leg. |