package cp.obd.evdatautility;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import android.location.Location;
import android.os.Environment;
import android.text.format.Time;
import android.util.Log;

import cp.obd.evdatautility.MyLocation.LocationResult;

public class GPSDataStream extends EVDataStream {

    private Time currentTime;
    protected MyLocation myLocation;

    @Override
    public boolean init(String filePrefix) {
        try {
            // dataFile and fos are inherited from EVDataStream.
            File root = Environment.getExternalStorageDirectory();
            dataFile = new File(root, filePrefix + ".csv");
            fos = new BufferedWriter(new FileWriter(dataFile));
            // CSV header row.
            fos.append("Latitude,Longitude,Altitude,Time Stamp");
            fos.append("\r\n");
        } catch (Exception e) {
            Log.e("FILE", "Cannot create temp file for " + filePrefix, e);
            return false;
        }
        return true;
    }

    public void init(BlueToothActivity activity) {
        currentTime = new Time();
        // Open the output file before registering the location callback so
        // that addToFile() never runs against an unopened writer.
        init("Gps");
        LocationResult locationResult = new LocationResult() {
            @Override
            public void gotLocation(Location location) {
                if (location != null) {
                    try {
                        addToFile(location);
                    } catch (Exception e) {
                        Log.e("GPS", "Cannot write location", e);
                    }
                }
            }
        };
        myLocation = new MyLocation();
        myLocation.getLocation(activity, locationResult);
    }

    public void addToFile(Location loc) throws IOException {
        fos.append(String.valueOf(loc.getLatitude()));
        fos.append(',');
        fos.append(String.valueOf(loc.getLongitude()));
        fos.append(',');
        fos.append(String.valueOf(loc.getAltitude()));
        fos.append(',');
        currentTime.setToNow();
        fos.append(currentTime.toString().substring(0, 15));
        fos.append("\r\n");
    }

    @Override
    public File endStream() throws IOException {
        myLocation.cancel();
        return super.endStream();
    }
}
|
Transaction Dependency Graph Construction using Signal Injection Understanding the runtime behavior of and dependencies between components in complex transaction-based enterprise systems enables system administrators to identify performance bottlenecks, allocate resources, and detect failures. This paper introduces a novel method for extracting dependency information between system components at runtime by using delay injection on individual links together with Fast Fourier Transforms. Our proposed method introduces minimal disturbance in the system, and its execution time is independent of the system workload; thus, it can be used at runtime in production systems. Furthermore, it avoids false positives introduced by other methods. We present preliminary experimental results demonstrating that our approach is able to identify dependencies and avoid false positives while ensuring low perturbation of the target system. |
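The delay-injection idea in the abstract above can be illustrated with a toy sketch: inject a periodic delay at a known frequency on one link, then look for a spectral peak at that frequency in a downstream component's latency series. This is a hedged illustration with made-up signal parameters (`fs`, `f_inj`, the noise levels), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0   # sampling rate of the latency probe (illustrative)
f_inj = 5.0  # frequency of the injected periodic delay, in Hz
t = np.arange(0, 10, 1 / fs)

# A component that truly depends on the delayed link: baseline noise plus
# the injected periodic delay leaking through to its response time.
dependent = 0.5 * rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * f_inj * t)
# An independent component: noise only.
independent = 0.5 * rng.standard_normal(t.size)

def peak_frequency(x, fs):
    """Return the frequency of the strongest non-DC spectral component."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    return freqs[np.argmax(spec)]

print(peak_frequency(dependent, fs))  # ~5.0 -> a dependency is indicated
```

A peak at the injection frequency in a component's latency spectrum marks it as downstream of the delayed link; the independent series shows no such peak.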
Procoagulant platelets: generation, function, and therapeutic targeting in thrombosis. Current understanding of how platelets localize coagulation to wound sites has come mainly from studies of a subpopulation of activated platelets. In this review, we summarize data from the last 4 decades that have described these platelets with a range of descriptive titles and attributes. We identify striking overlaps in the reported characteristics of these platelets, which imply a single subpopulation of versatile platelets and thus suggest that their commonality requires unification of their description. We therefore propose the term procoagulant platelet as the unifying terminology. We discuss the agonist requirements and molecular drivers for the dramatic morphological transformation platelets undergo when becoming procoagulant. Finally, we provide perspectives on the biomarker potential of procoagulant platelets for thrombotic events as well as on the possible clinical benefits of inhibitors of carbonic anhydrase enzymes and the water channel Aquaporin-1 for targeting this subpopulation of platelets as antiprocoagulant antithrombotics. |
Dynamics diagnosis of the COVID-19 deaths using the Pearson diagram The COVID-19 pandemic brings with it the need for studies and tools to help those in charge make decisions. Classical time series methods such as ARIMA and SARIMA have shown promising results in early studies of COVID-19. We advance this branch by proposing a risk-factor map, induced by the well-known Pearson diagram and based on multivariate kurtosis and skewness measures, to analyze the dynamics of deaths from COVID-19. In particular, we combine bootstrap for time series with SARIMA modeling in a new paradigm to construct a map on which one can analyze the dynamics of a set of time series. The proposed map allows a risk analysis of multiple countries over the four different periods of the COVID-19 pandemic in 55 countries. Our empirical evidence suggests a direct relationship between multivariate skewness and kurtosis: an increase in the multivariate kurtosis leads to a rise in the multivariate skewness. Our findings reveal that countries at high risk, judged by the behavior of the number of deaths, tend to have pronounced skewness and kurtosis values. Introduction The new coronavirus was discovered in December 2019 and named Severe Acute Respiratory Syndrome Coronavirus 2 ("SARS-CoV-2"); the associated disease was named COVID-19 (Coronavirus Disease 19) by the World Health Organization (WHO). The virus first emerged in Wuhan, then spread worldwide, and remains one of the greatest challenges to be addressed on a global scale. The search for rapid insights into the impact of the infection caused by the virus requires global collaboration among researchers from many disciplines and countries, moving faster than is usually the case. 
World leaders are expected to make life-saving decisions and ensure that unstable patients receive the care they need, to improve the economic, social, and psychological situation of the most vulnerable populations, and to consider many other factors. This paper studies the number of COVID-19 deaths based on skewness and kurtosis measures. A variety of topics related to skewness and kurtosis are explored in the literature. Given their technical and applied importance, Hogg proposed measures of kurtosis and skewness for various non-normal distributions. Luo and Schramm used skewness and kurtosis in the study of cosmological density perturbations. The pattern of negative skewness and excess kurtosis has been sought in stock market returns; see Kim and White. In Lam et al., skewness and kurtosis were used in the context of time series for investors' portfolio optimization. According to Cain et al., skewness and kurtosis were used both univariately and multivariately. This work can help diagnose the situation of deaths from COVID-19, help to understand how the different decisions of political leaders affect the situation of each country, and inform the formulation of public policies to combat COVID-19. This paper is organized as follows. Section 2 presents the data to be analyzed. Section 3 addresses how techniques from Multivariate Analysis, Time Series Analysis and Non-Parametric Statistics are combined to address the problem under analysis. Section 4 shows the main numerical results. Section 5 concludes this paper. Database In this paper, we analyze the complex dynamics evident in the spread of COVID-19 across 55 countries. The data used for this analysis are the time series of daily deaths associated with COVID-19 from 22-01-2020 to 14-07-2021. Data were obtained from the Our World in Data (OWID) website and from https://github.com/owid/covid-19-data/tree/master/public/data/jhu, and we used the R software for all computational and data mining procedures. 
Table 1 shows the descriptive measures (mean, median, minimum, maximum, and coefficient of variation) for daily COVID-19 deaths by country. Brazil and the United States are shown in a darker shade because they were the countries with the highest mean daily COVID-19 deaths, whereas the countries in a lighter shade, Iceland and New Zealand, had the lowest. In addition, in all cases the means are higher than the medians (indicating right skew) and the coefficients of variation are high (indicating large dispersion around the mean). Methodology The main objective of this work is to construct multivariate skewness and kurtosis maps. To that end, we use the Pearson diagram to understand how countries' decisions about daily COVID-19 deaths are reflected in their positions on the maps. In this section, we present an overview of the concepts used in this work. Background: Time series, bootstrap and multivariate analysis First, we begin with some concepts related to time series analysis (TSA). A time series is a collection of data observed consecutively at equal time intervals. The goal of TSA is twofold: to propose a model that describes the dynamics under study and to make predictions. The time series of daily COVID-19 deaths per country is our object of study. Countries with few deaths from COVID-19 require a smooth time series model, while countries with more deaths require a more complex model (e.g., with seasonality and stochastic trends). For this step, we adopt the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. Bootstrap in time series The bootstrap method is used to represent the distribution of an estimator or test statistic numerically by resampling from a database or an estimated data model. There are two main types of bootstrap: parametric and non-parametric. The parametric bootstrap assumes a model, or estimates of its parameters, obtained from the data, while the non-parametric bootstrap requires no such estimates. 
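The summary measures reported in Table 1 (mean, median, coefficient of variation) can be reproduced for any daily-deaths series; a minimal Python sketch, using made-up counts rather than the paper's data:

```python
import statistics

# Illustrative daily-death counts (invented numbers, not the paper's data).
deaths = [10, 12, 8, 30, 25, 200, 15, 9, 11, 14]

mean = statistics.mean(deaths)
median = statistics.median(deaths)
cv = statistics.pstdev(deaths) / mean  # coefficient of variation

# A mean well above the median signals right skew, and a CV above 1 signals
# large dispersion around the mean, mirroring the pattern noted in Table 1.
print(mean > median, cv > 1)
```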
In the case of dependent data such as time series, the procedure becomes more complicated and less direct than in the independence setting of the original proposal (Efron, 1979). In the parametric context, we capture the dependency structure using a model fitted to the data, for example a finite ARMA process. The moving-block bootstrap can be executed by dividing the series into small blocks and resampling the blocks with replacement, so that the dependence structure within each block is preserved. This method is implemented in R by the boot library's tsboot function. The bootstrap on time series is needed to calculate the multivariate skewness and kurtosis discussed in more detail below. The parametric bootstrap provides better estimates of multivariate skewness and kurtosis than the non-parametric approach, which is a central point of this article because these are the measures required to construct the Pearson-diagram-inspired map. The chosen method is therefore the parametric bootstrap, although the non-parametric approach is also possible. SARIMA process A time series \{X_t; t = 1, \ldots, n\} can be understood as the result of three components, X_t = T_t + S_t + Y_t, where T_t, S_t and Y_t represent the trend, seasonal and stationary random components, respectively. According to Brockwell and Davis, taking d and D as nonnegative integers, X_t follows a SARIMA(p, d, q) \times (P, D, Q)_s process with period s if the differenced series Y_t = (1 - B)^d (1 - B^s)^D X_t follows a causal multiplicative seasonal ARMA model defined by \phi(B)\Phi(B^s)(Y_t - \mu) = \theta(B)\Theta(B^s)\varepsilon_t, where \mu is the mean of Y_t, B is the lag operator (B X_t = X_{t-1}), and \varepsilon_t \sim WN(0, \sigma^2) denotes white noise with variance \sigma^2. In this article, the seasonality of the model is taken as weekly, since reported COVID-19 deaths fall on weekends and rise at the beginning of the week. Fig. 1 illustrates the Brazilian death counts from 22-01-2020 to 04-06-2021; the visible cycles justify a seasonal approach. 
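The moving-block resampling described above can be sketched in Python. This is a hedged illustration of the moving-block idea only; the paper itself used R's boot library (tsboot), and the function and parameter names here are ours.

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Resample a series by drawing overlapping blocks with replacement,
    preserving the dependence structure inside each block (a sketch of the
    moving-block bootstrap; not the paper's R implementation)."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    # Concatenate the drawn blocks and truncate to the original length.
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(42)
# A toy "seasonal" series: a sine wave plus noise.
x = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
resampled = moving_block_bootstrap(x, block_len=20, rng=rng)
print(resampled.shape)  # (200,)
```

Each bootstrap replicate has the same length as the original series, and every value in it is copied from the original, so short-range dependence inside each block survives the resampling.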
Multivariate skewness and kurtosis As discussed by Koizumi et al., let X and Y be independent, identically distributed p-dimensional random vectors with mean vector \mu and covariance matrix \Sigma. The population measures of skewness and kurtosis defined by Mardia are, respectively, \beta_{1,p} = E[\{(X - \mu)' \Sigma^{-1} (Y - \mu)\}^3] and \beta_{2,p} = E[\{(X - \mu)' \Sigma^{-1} (X - \mu)\}^2], where E(\cdot) denotes the expected value. For the sample versions, with sample mean vector \bar{x} and sample covariance matrix S, the multivariate sample skewness and kurtosis are defined as b_{1,p} = n^{-2} \sum_{i=1}^{n} \sum_{j=1}^{n} \{(x_i - \bar{x})' S^{-1} (x_j - \bar{x})\}^3 and b_{2,p} = n^{-1} \sum_{i=1}^{n} \{(x_i - \bar{x})' S^{-1} (x_i - \bar{x})\}^2. For large n, the statistic z_1 = n\, b_{1,p}/6 follows a \chi^2 distribution with p(p+1)(p+2)/6 degrees of freedom, and z_2 = \{b_{2,p} - p(p+2)\}/\sqrt{8p(p+2)/n} follows a standard normal distribution. Some decision rules can be formulated: if z_1 is significant, the joint distribution of the set of variables has significant skewness; if z_2 is significant, the joint distribution has significant kurtosis; if at least one of these tests is significant, it is concluded that the underlying population is not multivariate normal. In this paper, having obtained b_{1,p} and b_{2,p}, we obtain features related to the joint distributions of the daily COVID-19 death series for each country and, consequently, construct the Pearson diagram, as detailed next. Pearson diagram The family of Pearson distributions is a class of continuous probability distributions whose density f(x) is any valid solution of the differential equation (Pearson, 1895) f'(x)/f(x) = -(a + x)/(b_0 + b_1 x + b_2 x^2). In practice, it is possible to identify groups of points in these maps that are driven by the same probability distribution. Results In this section, we present an application to real data of daily deaths from COVID-19. 
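Mardia's sample measures can be computed directly from data; a numpy sketch (function and variable names are ours, and the test data are simulated, not the paper's):

```python
import numpy as np

def mardia(X):
    """Mardia's multivariate sample skewness b1p and kurtosis b2p,
    transcribed from the standard formulas (illustrative sketch)."""
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = centered.T @ centered / n                  # biased sample covariance
    D = centered @ np.linalg.inv(S) @ centered.T   # Mahalanobis cross-products g_ij
    b1p = (D ** 3).sum() / n ** 2                  # skewness: mean of g_ij^3
    b2p = (np.diag(D) ** 2).sum() / n              # kurtosis: mean of g_ii^2
    return b1p, b2p

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))  # simulated multivariate normal sample
b1p, b2p = mardia(X)
# For multivariate normal data, b1p is near 0 and b2p near p(p + 2) = 15.
print(round(b1p, 2), round(b2p, 2))
```

These two numbers are exactly the coordinates each country receives on the Pearson-diagram-inspired map.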
Based on the values of multivariate skewness and kurtosis, we construct a dynamic Pearson diagram for four partitions of each considered time series, in order to identify dynamic groups of countries, monitor the spread and prevalence of SARS-CoV-2 (COVID-19), and provide subsidies to support public policy formulation and decision making. We hypothesize that the formation of groups may be an important tool to categorize which countries were socially and economically affected by the pandemic in 2020-2021. It is important to mention that the need for a clustering technique is closely associated with the possibility of analyzing the similarity relationships between countries. Thus, the clusters formed aim to maximize the similarity between elements of the same group (intra-group similarity) and minimize the similarity between elements of different groups (inter-group similarity). Furthermore, we clarify that the number of clusters (three) in our empirical evidence is associated with the four phases of COVID-19. In this way, the discussion is divided into two distinct phases. First, a fit analysis of the series is conducted. Then, the Pearson diagram is constructed and the results are interpreted. The first part deals with the fitting of each of the four parts of the time series under consideration using the SARIMA(p, d, q) \times (P, D, Q)_s process. The specific submodels of this class were selected based on the Akaike information criterion. For illustration, Figs. 3 and 4 show the fitting results for the US and Brazil, respectively; seasonal and trend factors are clearly present. The fitted models were close to the observed series and reflected well the behavior of the whole series, although some specific peaks were not captured. Table 2 shows the model type obtained from SARIMA and the p-values of the Ljung-Box test for the resulting residuals. 
For most of the observed time series, only one non-seasonal factor had to be included in the modeling. Based on the fitted models in the last table, we compute the covariance matrix with the parametric bootstrap in time series, using the estimates of the selected models needed to compute the multivariate skewness and kurtosis. After obtaining these estimates for each country, we created the maps and performed cluster analysis using k-means (Johnson and Wichern, 2007). Kurtosis is a statistical measure of how much the tails of a distribution deviate from the tails of a normal distribution: platykurtic means the observed kurtosis is smaller than that of the normal distribution; mesokurtic, equal to it; leptokurtic, larger than it. Our results suggest a mathematical relationship between kurtosis and skewness: looking at the time evolution of the number of COVID-19 deaths, we observe that an increase in the kurtosis is accompanied by an increase in the modulus of the skewness. For the countries studied, the distributions generally deviate significantly from normality, and the leptokurtic cases include extreme events. In Fig. 5 we see that three groups were formed. The expected value of the skewness is 0 for a multivariate normal distribution, as shown in Meghan et al.; for this reason, group 3 in Fig. 5 is closest to 0 in the skewness values. As for the kurtosis, group 2 stands out, containing Spain, Israel and Switzerland. Looking at the behavior of these three countries in Fig. 6, we see that their numbers of COVID-19 deaths increase on the corresponding scales. The countries in group 1 are moving away from group 3. Table 3 lists the skewness and kurtosis test statistics and their respective p-values; the statistics are significant, so the joint distributions of the countries show significant asymmetry. Fig. 
7 shows that Spain, Switzerland, and Israel are no longer in the extreme group on the map, reflecting actions taken by those countries to combat COVID-19. Note that France, a neighbor of Spain, is among the most extreme countries on the map, along with Mexico and Ukraine. Group 3 shows asymmetry close to 0, while group 2 again contains countries close in asymmetry to group 1, but with some countries, such as Iran, showing increasing kurtosis. In Fig. 8, only Ukraine remained in the red group; the group with the highest absolute kurtosis now has more than three countries, an even larger number of them European. At the beginning of the pandemic, British rulers pursued a policy of denial toward COVID-19, resulting in a large number of deaths that was later brought under control by social isolation measures, testing for COVID-19, and other actions. However, relaxing the policy while case numbers were small led to an even stronger second wave, shown in the map in Fig. 8. Looking at the ellipse of the group 2 confidence region, one can see that there are countries belonging to both confidence regions. Conclusion This article proposes an approach using multivariate skewness and kurtosis maps based on the Pearson diagram to compare COVID-19 mortality across 55 countries. The SARIMA process describes well the series of daily COVID-19 deaths in these countries. When we decompose the series, we are left with a simpler model such as ARIMA; a good fit yields accurate estimates of the model parameters and, consequently, more accurate bootstrap series. By computing the parametric bootstrap, we obtain estimates of multivariate skewness and kurtosis, and in this way we can produce maps based on the Pearson diagram. Using these maps of multivariate kurtosis and skewness, we performed clustering to identify groups of countries and see how their positions on the map reflected their situations. 
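The clustering step on the (skewness, kurtosis) map can be sketched with a minimal Lloyd's-algorithm k-means. The coordinates below are invented for illustration (they are not the paper's country values), and the deterministic initialization is ours; the paper used standard k-means per Johnson and Wichern (2007).

```python
import numpy as np

def kmeans(points, init_centers, iters=50):
    """Minimal k-means (Lloyd's algorithm) with given initial centers,
    used only to illustrate grouping points on a 2-D map."""
    centers = init_centers.astype(float).copy()
    k = len(centers)
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical (skewness, kurtosis) coordinates for six "countries".
pts = np.array([[0.1, 3.0], [0.2, 3.2], [2.5, 9.0],
                [2.7, 9.5], [5.0, 20.0], [5.2, 21.0]])
labels, centers = kmeans(pts, pts[[0, 2, 4]])
print(labels.tolist())  # [0, 0, 1, 1, 2, 2]
```

With three well-separated point clouds, the three recovered groups correspond to the low-, medium-, and high-risk regions of the map.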
For each portion of the series, three groups were formed on the maps. Using the proposed diagram, it was possible to show a direct mathematical relationship between multivariate skewness and kurtosis. Given the temporal evolution of the spread of SARS-CoV-2, countries had great difficulty in preventing this spread and containing the virus, and our graph shows that the increase in daily deaths leads to an increase in multivariate skewness and kurtosis. Dynamic analysis at the country level thus becomes possible: through the multivariate skewness and kurtosis maps, we have a new approach to COVID-19 mortality data that allows the use of time series concepts and does not focus solely on prediction, as was done by Arunkumar et al. and Demir and Kirici. Based on a combination of time series and multivariate analysis techniques, we have developed a method for mapping COVID-19 mortality that takes into account the behaviour of the latent distributions behind the observed series, as well as their extreme events in the regions of excessive peaks and tails. The proposed map is able to support the diagnosis of COVID-19 mortality by considering phase transitions in both a static and a dynamic way. In this sense, we understand our proposal as a promising tool for the dynamic analysis of COVID-19 deaths and hope that it can be used by governments to support the decision-making process. |
// IdeaProjects/untitled/src/main/java/springbook/user/test/UserDaoTest.java
package springbook.user.test;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;
import org.springframework.dao.EmptyResultDataAccessException;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import springbook.user.dao.UserDao;
import springbook.user.domain.User;

import javax.sql.DataSource;
import java.sql.SQLException;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;

// JUnit 4
//@RunWith(SpringJUnit4ClassRunner.class)
// JUnit 5
// Extension point for the Spring TestContext framework's JUnit support
@ExtendWith(SpringExtension.class)
// Specifies the location of the application context the test context framework will create;
// the same application context object is shared by every test class
@ContextConfiguration({"/applicationContext.xml"})
// @TestInstance(TestInstance.Lifecycle.PER_CLASS) is added to the class
// so that @BeforeAll can be used on a non-static method
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
// Tells the test context framework that the test methods change the
// configuration or state of the application context
@DirtiesContext
public class UserDaoTest {
    // Once the test object is created, the Spring test context injects values automatically.
    // A single application context is created and shared by all test methods (same object reference).
    // @Autowired
    // private ApplicationContext context;
    private UserDao dao;
    private User user1;
    private User user2;
    private User user3;
    // private int counter;

    @BeforeAll
    public void setUp() {
        /** Dependency lookup */
        // 1. XML-based
        ApplicationContext context = new GenericXmlApplicationContext("applicationContext.xml");
        dao = context.getBean("userDao", UserDao.class); // bean name
        // 2. Annotation-based
        // ApplicationContext context = new AnnotationConfigApplicationContext(DaoFactory.class);
        // UserDao dao = new DaoFactory().userDao();
        // Connection count
        // DConnectionMaker pcm = context.getBean("connectionMaker", DConnectionMaker.class);
        // this.counter = pcm.getCounter();
        // DataSource dataSource = new SimpleConnectionHandle
        // With @DirtiesContext, the context configuration is changed (production -> local test DB)
        DataSource dataSource = new SingleConnectionDataSource(
                "jdbc:mysql://localhost/testdb", "spring", "book", true);
        dao.setDataSource(dataSource);
        user1 = new User("whiteship", "백기선", "married");
        user2 = new User("abc", "가나다", "123");
        user3 = new User("DEF", "라마바", "456");
    }

    @Test
    public void addAndGet() throws SQLException {
        // Connection count
        // System.out.println("Connection count: " + this.counter);
        dao.deleteAll();
        assertThat(dao.getCount(), is(0));
        dao.add(user1);
        assertThat(dao.getCount(), is(1));
        dao.add(user2);
        assertThat(dao.getCount(), is(2));
        dao.add(user3);
        assertThat(dao.getCount(), is(3));
        User userGet1 = dao.get(user1.getId());
        assertThat(userGet1.getName(), is(user1.getName()));
        assertThat(userGet1.getPassword(), is(user1.getPassword()));
    }

    @Test
    public void getUserFailure() throws SQLException {
        // ApplicationContext context = new GenericXmlApplicationContext("applicationContext.xml");
        // UserDao dao = context.getBean("userDao", UserDao.class);
        dao.deleteAll();
        assertThat(dao.getCount(), is(0));
        assertThrows(EmptyResultDataAccessException.class, () -> {
            dao.get("unknown_id");
        });
    }
}
|
Membranes of Chitosan and Collagen Type I for Biomineralization/Osteogenesis The main objective of this work was to produce membranes of chitosan and collagen type I and check their ability to undergo in vitro calcification. The membranes of chitosan-collagen blends were characterized by TGA, infrared spectroscopy and DSC. Samples of dense and porous membranes were immersed in SBF (Simulated Body Fluid) solution in order to verify their in vitro calcification. The membranes were observed by SEM. The production of chitosan-collagen membranes is possible, in dense and porous versions. We can conclude that the blend is less resistant to high temperatures than the pristine chitosan membranes reported in the literature. Through the initial calcification assays, we observe that it is possible to induce calcium deposition on a chitosan-collagen membrane, as seen by SEM. Microscopy of fracture surfaces showed fibril structures, probably formed by collagen. |
Enhanced expression of constitutive and inducible forms of nitric oxide synthase in autoimmune encephalomyelitis. To elucidate the role of nitric oxide synthase (NOS) in the pathogenesis of experimental autoimmune encephalomyelitis (EAE), we analyzed the expression of constitutive neuronal NOS (nNOS), endothelial NOS (eNOS), and inducible NOS (iNOS) in the spinal cords of rats with EAE. We further examined the structural interaction between apoptotic cells and spinal cord cells including neurons and astrocytes, which are potent cell types of nitric oxide (NO) production in the brain. Western blot analysis showed that three forms of NOS significantly increased in the spinal cords of rats at the peak stage of EAE, while small amounts of these enzymes were identified in the spinal cords of rats without EAE. Immunohistochemical study showed that the expression of either nNOS or eNOS increased in the brain cells including neurons and astrocytes during the peak and recovery stages of EAE, while the expression of iNOS was found mainly in the inflammatory macrophages in the perivascular EAE lesions. Double labeling showed that apoptotic cells had intimate contacts with either neurons or astrocytes, which are major cell types to express nNOS and eNOS constitutively. Our results suggest that the three NOS may play an important role in the recovery of EAE. Experimental autoimmune encephalomyelitis (EAE) is a T cell-mediated autoimmune disease of the CNS, which is designed to study human demyelinating diseases such as multiple sclerosis. The clinical course of EAE is characterized by weight loss, ascending progressive paralysis, and spontaneous recovery. This coincides with an inflammatory response in the CNS that is characterized by infiltration of T cells and macrophages and activation of microglia and astrocytes at the peak stage of EAE, and apoptotic elimination of inflammatory cells leading to recovery. 
Several studies have shown that iNOS is an important mediator of CNS inflammation through the generation of NO in the course of EAE as well as in human multiple sclerosis lesions. Contrary to these previous findings, NO and its relevant enzymes including iNOS have been shown to play a beneficial role in the course of EAE, because iNOS inhibition aggravated EAE progression depending on the stage of inflammation and because EAE was exacerbated in mice lacking the NOS2 gene. Furthermore, animals with EAE with high levels of NO and iNOS recover from paralysis, suggesting that iNOS may have a capacity to protect the immunologically privileged CNS from invading inflammatory cells in EAE. Recently, Gonzalez-Hernandez and Rustioni reported that the three isoforms of NOS, including nNOS, eNOS and iNOS, exert a beneficial effect on peripheral nerve regeneration. In the course of acute EAE in rats, we examined the quantitative changes of the three isoforms of NOS by Western blot analysis and the structural interaction between apoptotic cells and brain cells by immunohistochemistry. Animals Lewis male rats (7-12 weeks old) were obtained from the Korea Research Institute of Bioscience and Biotechnology, KIST (Taejon, Korea) and bred in our animal facility. The animals weighing 160-200 g were used throughout the experiments. EAE induction Each rat was injected in the hind foot pads bilaterally with an emulsion containing an equal part of fresh rat spinal cord homogenates in phosphate buffer (g/ml) and complete Freund's adjuvant (CFA; Mycobacterium tuberculosis H37Ra, 5 mg/ml; Difco). Immunized rats were further given Bordetella pertussis toxin (2 μg each) (Sigma Chemical Co., St. Louis, MO) intraperitoneally and observed daily for clinical signs of EAE. (*Corresponding author. Phone: 82-64-754-3363; Fax: 82-64-756-3354; E-mail: [email protected]) 
The progress of EAE was divided into seven clinical stages (Grade (G) 0, no signs; G1, floppy tail; G2, mild paraparesis; G3, severe paraparesis; G4, tetraparesis; G5, moribund condition or death; R0, recovery stage). Control rats were immunized with CFA only. Five rats were killed under ether anesthesia at the various stages of EAE. Tissue sampling In this study, tissue sampling was performed on days 13 and 21 post-immunization (PI), during the peak and recovery stages of EAE, respectively. Five rats in each group were killed under ether anesthesia. The spinal cords of rats were removed and frozen in a deep freezer (-70°C) for protein analysis. Pieces of the spinal cords were processed for paraffin embedding after fixation in 4% paraformaldehyde in phosphate-buffered saline (PBS, pH 7.4). Western blot analysis Frozen spinal cords with EAE were thawed at room temperature (RT), minced, weighed, placed in PBS (1 : 4 w/v), and homogenized with a Tissue-Tearor (Biospec, USA). The homogenate was sonicated three times for 5 sec at RT and centrifuged at 12,000 g for 10 min. The supernatant was diluted with electrophoretic sample buffer to obtain a protein concentration of 3 μg/μl, and then heated at 100°C for 5 min. The heated samples were electrophoresed under denaturing conditions in sodium dodecyl sulfate-polyacrylamide gels (SDS-PAGE) using a discontinuous procedure. Stacking gels were 4.5% polyacrylamide and separating gels were 7.5% polyacrylamide. Paired mini-gels (Mini-protein II cell, Bio-Rad Laboratories, U.S.A.) were loaded with 30 μg protein per well. The protein concentration was estimated using the method of Bradford. Samples containing standard markers of nNOS (155 kDa), eNOS (140 kDa), and iNOS (130 kDa) (Transduction Laboratories, Lexington, KY) were run at 100 Volts per gel slab. 
After electrophoresis, one mini-gel was routinely stained by the Coomassie blue staining method and the other was equilibrated in a transfer buffer (25 mM Tris, 192 mM glycine, 20% methanol (v/v), pH 7.3). The proteins were then electrotransferred in the transfer buffer to a PROTRAN® nitrocellulose transfer membrane (Schleicher and Schuell, Keene, N.H., USA) overnight at 4°C and 30 Volts. To visualize the transferred proteins, the nitrocellulose membrane was stained with Brilliant Blue R-250 (Sigma, St. Louis, MO) for 10 min and subsequently incubated in TBS (50 mM Tris/HCl, 20 mM NaCl, pH 7.4) containing 5% bovine serum albumin for 2 hrs at RT to block non-specific sites. The blot was then rinsed with TBS-T (TBS with 0.1% Triton X-100). The iNOS, nNOS and eNOS bindings were detected by incubating the membrane in a moist chamber overnight at 4°C with the primary antibody rabbit anti-iNOS, rabbit anti-eNOS, or rabbit anti-nNOS (Transduction Laboratories, Lexington, KY) and rabbit anti-nitrotyrosine (1 : 100 dilution, Upstate Biotechnology Inc., NY). The finding of nitrotyrosine (NT) indicates the generation of peroxynitrite and the potential damage of proteins by nitration. After washing in TBS-T, the membrane was incubated with the secondary antibody (anti-rabbit IgG and anti-mouse IgG peroxidase conjugates, diluted 1 : 3000 in TBS) for 3 hrs at RT. Visualization was achieved using 1% 3,3'-diaminobenzidine-HCl in 0.1% TBS. Immunoblot signals were quantified with a densitometer (GS-700 Imaging Densitometer, Bio-Rad, U.K.). Immunohistochemistry Five-micron sections of the paraffin-embedded spinal cords were deparaffinized and treated with 0.3% hydrogen peroxide in methyl alcohol for 30 min to block endogenous peroxidase. 
After three washes with PBS, the sections were exposed to 10% normal goat serum and then incubated with primary antisera, including rabbit anti-nNOS, rabbit anti-eNOS, or rabbit anti-iNOS antisera (1:200 dilution) (Transduction Laboratories, Lexington, KY), for 60 min at RT. For the identification of astrocytes and macrophages, rabbit anti-glial fibrillary acidic protein (GFAP) (Sigma Chemical Co., St. Louis, MO) and ED1 (Serotec, London, U.K.) were applied. After three washes, the appropriate biotinylated second antibody and the avidin-biotin peroxidase complex Elite kit (Vector, Burlingame, CA) were added sequentially. Peroxidase was developed with a diaminobenzidine-hydrogen peroxide solution (0.001% 3,3'-diaminobenzidine and 0.01% hydrogen peroxide in 0.05 M Tris buffer). Before being mounted, the sections were counterstained with hematoxylin.

Terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick end-labeling (TUNEL)

DNA fragments were detected by in situ nick end-labeling as described in the manufacturer's instructions (Oncor, London, UK). In brief, the paraffin sections were deparaffinized, rehydrated, and washed with PBS. The sections were treated with 0.005% pronase (Dako, Denmark) for 20 min at 37 °C and incubated in a TdT buffer solution (140 mM sodium cacodylate, 1 mM cobalt chloride, 30 mM Tris-HCl, pH 7.2, 0.004 nmol/µl digoxigenin-dUTP) containing 0.15 U/µl TdT for 60 min at 37 °C. After another incubation in TB buffer (300 mM sodium chloride, 30 mM sodium citrate) for 15 min at 37 °C, the sections were reacted with a peroxidase-labeled anti-digoxigenin antibody for 60 min. Positive cells were visualized using a diaminobenzidine substrate kit (Vector) and counterstained with hematoxylin.

Double labeling of TUNEL and either astrocytes or macrophages

In the first step, apoptotic cells were detected by the TUNEL method, with DAB developing a brown color.
After thorough washing, the slides were stained for microglia or astrocytes using an avidin-biotin alkaline phosphatase kit (Vector). Alkaline phosphatase was developed in blue using BCIP/NBT (Sigma). The antisera used for double labeling were rabbit anti-GFAP for astrocytes and ED1 for macrophages/activated microglia.

Clinical observation of EAE

The clinical course of EAE is shown in Fig. 1. EAE rats immunized with the spinal cord homogenates showed a floppy tail (G1) on days 9-10 PI and severe paresis (G3) on days 11-15 PI. All the rats recovered after day 17 PI (Fig. 1). Histological examination showed that a large number of inflammatory cells had infiltrated the perivascular lesions and the parenchyma of the spinal cord of rats with EAE at the peak stage. In normal rats and CFA-immunized control rats, no infiltration of inflammatory cells was found in the spinal cord parenchyma (data not shown).

Western blot analysis of three isoforms of NOS in EAE

The expression of nNOS (Fig. 2A), eNOS (Fig. 2B), and iNOS (Fig. 3) was assessed semiquantitatively by densitometry. Intense immunoreactivity of both nNOS and eNOS was identified at the peak stage (day 13 PI, G3) of EAE (Fig. 2) and remained until the recovery stage of EAE (day 21 PI, R0) (Fig. 2). Although little nNOS and eNOS were identified in the normal spinal cords, their expression was increased in the spinal cords of CFA-treated rats (day 13 PI) compared with normal control rats (Fig. 2). The increase in the expression of nNOS and eNOS was evident by densitometric semiquantitative analysis (Fig. 2, graphs). Unlike nNOS and eNOS, small amounts of iNOS were identified in the normal spinal cords, and its expression slightly increased in the spinal cords of CFA-treated rats compared with normal control rats (Fig. 3). Increased iNOS immunoreactivity was evident during the peak (G3) and recovery stages (R0) of EAE (Fig. 3).
By densitometric semiquantitative analysis (Fig. 3, graph), iNOS immunoreactivity in the spinal cords of rats with EAE was significantly increased compared with that in the spinal cords of normal rats. The increased expression of iNOS persisted through the EAE recovery stage (day 21 PI, R0). These data indicate that the induction of EAE upregulates the three isoforms of NOS. In addition, NT immunoreactivity was detected during the peak and recovery stages of EAE, but not in the normal or CFA-immunized spinal cords (data not shown). The increased expression of NT during the peak stage of EAE suggests that peroxynitrite or NO is generated in the autoimmune spinal cord lesions.

Immunohistochemical localization of nNOS, eNOS, and iNOS in EAE

In the spinal cords of rats with EAE, the expression of nNOS was found in some small neurons and in the spinal cord parenchyma with a granular pattern. In addition, nNOS expression was also found in some inflammatory cells in the EAE lesions of the spinal cord (Fig. 4D). The expression of eNOS was observed in the endothelial cells of blood vessels and in some astrocytes (Fig. 4E). The expression of iNOS (Fig. 4F) was found predominantly in infiltrating cells stained with ED1 and in some astrocytes in the EAE lesions. Meanwhile, the expression of nNOS (Fig. 4A), eNOS (Fig. 4B), and iNOS (Fig. 4C) was rarely identified in the parenchyma of the spinal cords of normal or adjuvant-immunized rats.

Structural interaction between apoptotic cells and brain cells

In rats with EAE, the majority of apoptotic cells were distributed in the parenchyma but were scarcely found in the perivascular cuffings of the spinal cords. Double labeling showed that the apoptotic cells were commonly found in areas adjacent to neurons (Fig. 5A) and to some GFAP-positive processes identical to astrocytes (Fig. 5B). In some cases, the apoptotic cells were co-localized with ED1 (+) cells, suggesting that macrophages undergo apoptosis (Fig. 5C).
Apoptotic cells were barely seen among the neurons and glial cells in the spinal cords of rats with EAE.

Discussion

In this study, the expression of both nNOS and eNOS was significantly increased in hyperacute autoimmune CNS inflammation, suggesting that the constitutive NOS isoforms are stimulated by the inflammatory cells in the pathogenesis of EAE, as is iNOS. However, our study did not support the finding in iNOS knockout mice with EAE, in which neither nNOS nor eNOS was increased [39]. Brain cells, including neurons and some astrocytes, exhibited an increased expression of nNOS in the course of EAE. There was an intimate structural interaction between apoptotic cells and either neurons or astrocytes, which are potent cell types for expressing nNOS and eNOS, respectively. Although the functional role of both nNOS and eNOS in neurons and/or astrocytes in CNS diseases is not fully understood, nNOS may be involved either in the tissue destruction of traumatic brain injury or in the survival of neuronal cells in vesicular stomatitis virus infections. Given these dual effects of nNOS in brain injury, we propose that both nNOS and eNOS might mediate either the stasis of T cell proliferation in the spinal cord parenchyma outside neurons or the survival of neuronal cells in EAE. Our findings are further supported by the observation that brain cells such as oligodendroglia do not undergo apoptosis in the murine EAE model, while homing inflammatory cells are selectively vulnerable to the apoptotic process. A question remains to be explained in EAE: why are few apoptotic figures found in brain cells that are potent cell types for NOS expression? In a recent study using a murine EAE model, brain cells including oligodendroglia and astrocytes were shown to escape apoptosis.
We suppose that additional activation of the caspase family and/or Fas-Fas ligand interaction would be necessary to induce the apoptosis of T cells in EAE, although endogenously generated NO, via either eNOS or iNOS, may be involved in the process of apoptosis. In conclusion, our results showed that the three isoforms of NOS, including nNOS, eNOS, and iNOS, were increased in the initiation of EAE and suggested that brain cells, including neurons and astrocytes, are possible sources of either nNOS or eNOS in the course of EAE. We postulate that NO, produced via both constitutive nNOS and eNOS from the brain cells, has a beneficial role by removing inflammatory cells through the stasis of T cell proliferation and, eventually, the apoptosis of inflammatory cells in EAE.

Fig. 5. Double labeling of the TUNEL method with either astrocytes or macrophages in EAE lesions on day 13 PI. Apoptotic cells (brown) were commonly detected around neurons (5A) and some GFAP (+) processes (blue) identical to astrocytes (5B). Some apoptotic cells were co-localized with ED1 (+) cells (5C, blue). TUNEL and ABC-alkaline phosphatase reaction. Original magnification: A, ×33; B and C, ×132. A: TUNEL and hematoxylin; B: TUNEL and rabbit anti-GFAP; C: TUNEL and ED1.
Antioxidant and Anticancer Properties in Human Prostate and Breast Cancer Cell Lines of a Chemically Characterized Methanol Extract from Berberis hispanica Boiss.

The current research was conducted to investigate the chemical profile, antiproliferative, and antioxidant activities of methanol extracts obtained by two different methods, maceration and Soxhlet, from Berberis hispanica Boiss. & Reut. Antiproliferative activities were assessed in human prostate and breast cancer cell lines. Berberis hispanica Boiss. & Reut. can serve society as it provides potentially bioactive compounds that may find application in the medical sector to control such diseases.

Introduction

Since ancient times, humans have developed natural products from plants, marine organisms, and microorganisms for various applications. Interest in natural resources goes back over 1000 years. Medicinal plants have been used for centuries as remedies and continue to provide alternative agents to fight various devastating diseases. Numerous drugs are derived from natural sources, including medicinal plants, which can be available in the form of food supplements, nutraceuticals, and complementary alternative medicine. Plants synthesize secondary metabolites with various chemical structures, including tannins, terpenoids, alkaloids, and flavonoids, which are involved in several therapeutic pharmacological properties such as antimicrobial, antioxidant, anticancer, and anti-inflammatory activities. Medicinal plants are prioritized as an exhaustive source of bioactive compounds used in drug development. The genus Berberis belongs to the family Berberidaceae, with about 500 species. Berberis possesses many medicinal properties, since it has been used in the treatment of diseases including leishmaniasis, heart disease, cholecystitis, hypertension, colds, cholelithiasis, dysentery, gallstones, digestive ailments, jaundice, malaria, ischemic heart disease, cardiomyopathies, and urinary tract problems.
The Spanish barberry is used in traditional medicine to cure gastrointestinal stones, inflammation, and liver and biliary disorders. The genus Berberis is rich in compounds with bioactivity potential, including tamarixetin, rutin, and caffeic and chlorogenic acids. However, the available results on its chemical composition are limited. These compounds are responsible for the biological activities of Berberis, such as antidepressant, antinociceptive, and immunomodulatory effects. Several works have reported that reactive oxygen species (ROS) are involved in cancer, while antioxidant agents are used to counteract them. Plants have been found to possess significant ROS-scavenging and antiproliferative activities toward cancer cells. Several studies have shown that plants serve as anticancer agents through apoptosis in numerous cancer cell lines. The goal of this work was to study the phytochemical composition and the antioxidant and antiproliferative activities of Berberis hispanica Boiss. & Reut. (B. hispanica). These goals may open new approaches to valorize this species as a source of promising agents to fight such diseases.

Preparation of Plant Extract

The plant was harvested in September from the region of Errachidia, Morocco. The botanical authentication was done by Dr. Fennan, and the voucher specimen #LHE.11 was deposited at the herbarium. The bark of B. hispanica roots was removed, washed, and dried in the shade at room temperature before being ground into a fine powder. Next, a total of 20 g of plant powder was extracted with 100 mL of methanol using two methods, Soxhlet and maceration. The Soxhlet extraction was run at 46 °C for 6 h and the maceration was done at room temperature for 24 h. Afterward, the mixture was meticulously filtered using Whatman filter paper before being concentrated using a rotary evaporator. The extract obtained was then stored at 4 °C until further use.
Cell Cultures

Four cancer cell lines were selected for testing, including prostate (LnCap and 22RV1) and breast (MDA-MB-231 and MCF-7), which were grown in RPMI (Roswell Park Memorial Institute) and DMEM (Dulbecco's Modified Eagle's) medium, respectively. The RPMI and DMEM media contained 10% heat-inactivated fetal calf serum and 1% each of antibiotics and glutamine. Cell cultures were incubated at 37 °C and 5% CO2. Next, cells were washed with PBS and then detached with trypsin (Gibco, 0.25%).

In Vitro Antiproliferative Activity Assay

The viability of cells was estimated based on cell metabolic activity using the MTT assay. Briefly, MDA-MB-231, MCF-7, LnCap, and 22RV1 cells were seeded at a density of around 8000 cells per well in plates. After 24 h, the culture medium was replaced with plant extract concentrations ranging from 4.68 to 150 µg/mL. Afterward, the plates were reincubated for 72 h. Next, 100 µL of culture medium was replaced with 10 µL of MTT reagent before the plates were incubated for a further 4 h. Cell viability was assessed by measuring the absorbance at 450 nm. Mitomycin was used as a drug reference (positive control) and untreated cells were used as a negative control. The results were expressed as percentages of cell inhibition.

Determination of Phenolic Contents

Total phenolic contents (TPCs) were determined using a method based on the Folin-Ciocalteu reagent, following Spanos with limited modifications. Briefly, 2.5 mL of 10% (v/v) Folin-Ciocalteu reagent was added to 0.5 mL of the sample solution. Next, the reaction was conducted at 45 °C for 30 min before 4 mL of 7.5% (w/v) Na2CO3 was added. The absorbance of the sample was read at 765 nm. TPCs were expressed as mg GAEs/g extract.

Determination of Total Flavonoid Content

The total flavonoid content (TFC) of the B. hispanica extracts was assessed according to the method of Dewanto et al. with limited modifications.
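Quantification in this kind of colorimetric assay comes down to reading the sample absorbance against a standard calibration curve. A minimal sketch of that arithmetic for the TPC readout; all standard concentrations and absorbances below are illustrative, not the paper's data:

```python
# Sketch of the calibration-curve arithmetic behind a Folin-Ciocalteu TPC assay.
# The gallic acid standards and absorbances are hypothetical examples.

def linear_fit(xs, ys):
    """Least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def tpc_mg_gae_per_g(abs_sample, slope, intercept, extract_mg_per_ml):
    """Convert a sample absorbance at 765 nm to mg GAE per g extract.

    extract_mg_per_ml: concentration of the extract in the assayed solution.
    """
    gae_mg_per_ml = (abs_sample - intercept) / slope   # gallic acid equivalents
    return gae_mg_per_ml / extract_mg_per_ml * 1000    # normalize per g extract

# Hypothetical gallic acid standards (mg/mL) and their absorbances at 765 nm
standards = [0.02, 0.04, 0.06, 0.08, 0.10]
absorbances = [0.15, 0.29, 0.44, 0.58, 0.73]
m, b = linear_fit(standards, absorbances)
print(round(tpc_mg_gae_per_g(0.50, m, b, extract_mg_per_ml=0.2), 1))
```

The same pattern applies to the TFC (rutin standards) and FRAP (ascorbic acid standards) readouts, only the reference compound and wavelength change.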
Briefly, 1 mL of plant extract was mixed with 0.3 mL of NaNO2 (5%) and 0.3 mL of 1% (w/v) AlCl3. Next, 2 mL of 1 M NaOH was added to the whole solution before it was stirred and allowed to stand. The absorbance of the sample was read at 510 nm. TFCs were expressed as mg RE/g extract.

2.6. Evaluation of Antioxidant Activity

2.6.1. DPPH Radical Scavenging Assay

The radical scavenging activity of the plant extracts was evaluated using the DPPH assay as described by Sayah et al. Briefly, 2.5 mL of different plant extract concentrations was mixed with a DPPH solution (0.2 mM). The mixture was vigorously vortexed before being kept in the dark at room temperature for 30 min. Afterward, the absorbance of the sample was read at 517 nm. The antioxidant activity was expressed as a percentage of DPPH inhibition using the following formula:

DPPH inhibition (%) = [(Abs DPPH − Abs sample) / Abs DPPH] × 100

where Abs DPPH is the absorbance of the DPPH solution used for testing and Abs sample is the absorbance of DPPH in the presence of the plant extract tested. The scavenging results were expressed as IC50 (the concentration required to inhibit 50% of free radicals).

Ferric Reducing Antioxidant Power (FRAP) Assay

The ferric-reducing capacity of B. hispanica was evaluated using the potassium ferricyanide-ferric chloride method with limited changes. Briefly, 1 mL of plant extract was added to a mixture of 2.5 mL of phosphate buffer (0.2 M, pH 6.6) and 2.5 mL of potassium ferricyanide (1%) and then incubated at 50 °C for 20 min. Next, 2.5 mL of 10% trichloroacetic acid was added to the final solution before it was centrifuged at 3000 rpm for 1 min. Finally, 2.5 mL of the supernatant was added to 2.5 mL of distilled water with 0.5 mL of FeCl3 (0.1%, w/v). The absorbance of the sample was read at 700 nm and the findings were expressed as mg AAE/g extract.

Trolox Equivalent Antioxidant Capacity (TEAC) Assay

TEAC was studied according to a previously reported protocol.
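The percentage-inhibition formula above, together with the IC50 values reported from it, can be sketched as follows. The dose-response numbers are hypothetical, and IC50 is taken here by linear interpolation between the two concentrations bracketing 50% inhibition (curve fitting would be used in practice):

```python
# Sketch of the % inhibition formula and an IC50 readout for DPPH/ABTS-style
# assays. All concentrations and absorbances are illustrative.

def inhibition_percent(abs_control, abs_sample):
    """(A_control - A_sample) / A_control * 100, as in the DPPH/ABTS equations."""
    return (abs_control - abs_sample) / abs_control * 100.0

def ic50(concs, inhibitions):
    """Concentration giving 50% inhibition, by linear interpolation
    between the two dose-response points bracketing 50%."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the tested concentrations")

abs_control = 0.80
# Hypothetical dose-response: extract concentration (mg/mL) vs. A517
concs = [0.05, 0.10, 0.20, 0.40]
samples = [0.72, 0.58, 0.34, 0.12]
inh = [inhibition_percent(abs_control, a) for a in samples]
print(round(ic50(concs, inh), 3))
```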
Briefly, the ABTS radical solution was obtained by mixing 10 mL of 2 mM ABTS and 100 µL of 70 mM potassium persulfate at ambient temperature for 16 h. Next, the ABTS•+ solution was diluted in methanol to an absorbance value of about 0.70 at 734 nm. The absorbance was read at 734 nm. The capability to scavenge the ABTS radical was assessed using the following equation:

ABTS scavenging (%) = [(A0 − A1) / A0] × 100

where A0 is the absorbance of the control solution and A1 is the absorbance of the sample solution. Scavenging activity was expressed as IC50.

Gas Chromatography-Mass Spectrometry Analysis

The gas chromatography-mass spectrometry (GC-MS) technique was used in this study to identify the phytocomponents present in the extract obtained by maceration. GC-MS characterization of the plant extract was conducted after methylation using a PerkinElmer Clarus 580 gas chromatograph equipped with a capillary column (5% phenyl, 95% methylpolysiloxane; 30.0 m × 250 µm). Helium was used as the carrier gas at 1 mL/min. The split ratio was 1/75 and the injection volume of the sample was 1 µL. The temperatures of injection and detection were set to 250 °C and 280 °C, respectively. The oven temperature was programmed as follows: from 50 °C to 200 °C at a rate of 11 °C/min, then from 200 °C to 240 °C at a rate of 6 °C/min. The identification of phytocomponents was done by comparing the retention times with those of references obtained from the instrument's database.

Statistical Analysis

Data were expressed as means ± standard deviation of triplicate assays using analysis of variance (ANOVA). Statistical analysis was conducted using GraphPad Prism 6. Values were considered statistically significant at a p-value ≤ 0.05.

Total Phenolic and Flavonoid Contents

The phenolic and flavonoid contents were quantified using gallic acid and quercetin calibration curves (Figures 1 and 2). The total phenol and total flavonoid contents are presented in Table 1.
Even though the methanol extract obtained by maceration was noticeably higher in TPC (321.56 ± 3.05 mg GAEs/g extract) and TFC (118.4 ± 2.24 mg REs/g extract), no statistically significant difference between the two extraction methods was found (p > 0.05). The extract obtained by the Soxhlet method was slightly lower in polyphenols (289.02 ± 2.32 mg GAEs/g) and flavonoids (98.4 ± 2.56 mg REs/g). The results obtained showed that B. hispanica is potentially rich in polyphenols and flavonoids, with some differences resulting from the two extraction methods used (maceration or Soxhlet).

Gas Chromatography-Mass Spectrometry (GC-MS) Analysis

The chemical components in the methanol extract of B. hispanica obtained by maceration were identified by GC-MS after methylation. The results showed that forty-five chemical compounds were identified in the extract (Figure 3; Table 2), among which 2-heptenal, (Z); 2,4-decadiena; 2,4-decadienal; heptadecane; 2,6,10,14-tetramethyl; hexadecane; and 11,14-eicosadienoic acid methyl ester were the chief chemical compounds in the plant extract. The findings of the chemical analysis obtained with GC-MS displayed in Table 2 agreed with the results of the total polyphenolic and flavonoid contents presented in Table 1, since both affirm the presence of common chemical classes.
It has been reported in different studies that the pharmacological activities of B. hispanica are related to its chemical composition, especially the alkaloid and polyphenol classes. The chemical characterization of the B. hispanica extracts investigated in this work showed the presence of various compounds belonging to these families in the methanolic extracts, which are probably responsible for the antioxidant and antiproliferative activities of the studied plant. The chemical analysis affirmed the presence of many phenolic compounds in the extract, such as N-(1-hydroxy-4-oxo-1-phenylperhydroquinolizin-3-yl)carbamic acid, benzyl ester; benzoic acid 3-methyl-4-(1,3,3,3-tetrafluoro-2-methoxycarbonyl-propenylsulfanyl)-phenyl ester; and decan-2-yl trimethylsilyl phthalate (1,2-benzenedicarboxylic acid). Therefore, we can confirm that the chemical content of the studied plant is strongly correlated with the biological outcomes.
It has been demonstrated that the rutin and tamarixetin contained in Berberis were responsible for the inhibition of cancer cell lines in a concentration- or time-dependent manner by inducing apoptosis and blocking cell cycle progression at the G2-M phase. Antiproliferative activity induced by the genus Berberis in breast (MCF-7), colon (Caco-2), and pancreas (BxPC-3) cancer cell lines has been well reported elsewhere. Moreover, rutin exhibited a dose- or time-dependent inhibitory effect on U-937, HT-60, and glioma human cancer cell lines [12]. Fernández-Poyatos et al. reported that the most abundant compounds found in B. hispanica are phenolic acids, primarily chlorogenic acid and other caffeoylquinic acids. These compounds have been reported to have antioxidant effects. Moreover, they have also been shown to possess in vivo and in vitro anticancer activity in various cancerous cell lines, including MCF-7, HCT-116, Hep-G2, and PC-3. Some caffeoylquinic acid compounds were reported to have an inhibitory effect on stomach (Kato III), colon (DLD-1), and promyelocytic leukemia (HL-60) cancer cell lines. The results obtained in this work showed that B. hispanica possesses interesting antioxidant and antiproliferative activities, which can be explained by its richness in phenolics, alkaloids, and other potentially bioactive compounds. This species appears to be an interesting source of various compounds that can be applied in medicines to fight such diseases.

Antioxidant Activity

In the current work, the plant extracts were tested for their antioxidant capacity using different tests, including the DPPH and ABTS radical scavenging assays and FRAP (Table 3). The results of the DPPH test showed that the extract tested had scavenging activity in a dose-dependent manner.
The extracts showed potent antioxidant activity, especially the one obtained by maceration (IC50 = 0.180 ± 0.020 mg/mL) compared to the one obtained by Soxhlet extraction (IC50 = 0.210 ± 0.017 mg/mL); however, no significant difference was observed between the two methods (p > 0.05). Regarding the FRAP bioassay, the highest reducing power was observed with the maceration extract (80.066 ± 3.28 mg AAE/g extract) compared to the one obtained by the Soxhlet method (79.4 ± 0.45 mg AAE/g extract). In these tests, B. hispanica revealed interesting antioxidant activity with insignificant differences between the two methods of extraction. For the antioxidant activity measured by the ABTS method, we noted that both extracts exhibited potent antioxidant activity in a dose-dependent manner, as in the DPPH test. The ability of the extracts to scavenge the ABTS cation is presented in Table 3. The B. hispanica extract obtained by maceration showed the highest antioxidant ability (60.203 ± 0.76 mg TE/g extract) compared with the Soxhlet extract (56.564 ± 1.63 mg TE/g extract). In the present work, we used different methods to evaluate the antioxidant potential of the B. hispanica methanolic extracts; in the ABTS bioassay, the maceration and Soxhlet extracts showed values of 60.203 ± 0.76 and 56.564 ± 1.63 mg TE/g extract, respectively. The FRAP assay showed that the maceration and Soxhlet extracts scored 80.066 ± 3.28 and 79.4 ± 0.45 mg AAE/g extract, respectively.
These results are in close accordance with those reported by Fernández-Poyatos et al., who revealed that methanolic and aqueous extracts from the Spanish species exhibited antioxidant activity of 648 and 212 mmol Trolox equivalents g−1 dried extract, respectively. These three methods differ from each other in characteristics such as substrate type, reaction conditions, and data quantitation methods. A complete picture of the total antioxidant capacity of the methanol extracts from B. hispanica was obtained via analysis of FRAP, DPPH, and ABTS, which showed that the studied plant extracts can display different antioxidant power.

In Vitro Antiproliferative Activity Assay

The MTT assay revealed that the B. hispanica extracts have a potent antiproliferative effect on both breast and prostate cancer cells in a dose-dependent manner after 72 h of treatment (Figures 4 and 5; Table 4). The methanolic extract showed high antiproliferative activity in both breast and prostate cancer cell lines (Figures 4 and 5; Table 4). Indeed, in the breast cancer cell lines MDA-MB-231 and MCF-7, the IC50 values obtained with the maceration extract were 16.55 ± 0.58 µg/mL and 17.95 ± 0.58 µg/mL, respectively. These values were slightly lower than the IC50 values obtained with the Soxhlet extract (MDA-MB-231: 19.93 ± 0.74 µg/mL; MCF-7: 20.22 ± 0.89 µg/mL). Regarding the prostate cancer cells 22RV1 and LnCap, the IC50 values obtained with the maceration extract (22RV1: 11.75 ± 0.35 µg/mL; LnCap: 11.91 ± 0.54 µg/mL) were also lower than those obtained with Soxhlet (22RV1: 13.47 ± 0.52 µg/mL; LnCap: 19.64 ± 1.05 µg/mL). Even though the maceration extract was better at reducing cell viability in the cancer cell lines, the two extracts (maceration and Soxhlet) did not differ significantly (p > 0.05).
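As a side note on the numbers behind these IC50 values: a tested range of 4.68-150 µg/mL corresponds to a two-fold serial dilution series, and the MTT viability at each dose is the treated/untreated absorbance ratio. A sketch with hypothetical optical densities (not the paper's measurements):

```python
# Sketch of the MTT readout arithmetic. The OD450 values below are
# hypothetical examples, not measured data from this study.

def twofold_series(top, n):
    """n concentrations obtained by serial two-fold dilution from `top`."""
    return [top / 2 ** i for i in range(n)]

def viability_percent(od_treated, od_untreated, od_blank=0.0):
    """Treated/untreated OD ratio, blank-corrected, as a percentage."""
    return (od_treated - od_blank) / (od_untreated - od_blank) * 100.0

concs = twofold_series(150.0, 6)           # 150, 75, ..., 4.6875 ug/mL
print([round(c, 2) for c in concs])

od_untreated = 1.60
od_treated = [0.20, 0.45, 0.80, 1.10, 1.35, 1.50]  # hypothetical, high to low dose
viab = [viability_percent(t, od_untreated) for t in od_treated]
print([round(v, 1) for v in viab])
```

The IC50 is then the concentration at which viability crosses 50% (here between the two highest doses), usually obtained by interpolation or curve fitting.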
The results obtained demonstrated that the studied extract possessed antiproliferative potential against the various cell lines, resulting in the inhibition of cell proliferation in a dose-dependent manner. Therefore, we can confirm that the extract of B. hispanica possesses an important antiproliferative activity against different cancerous cell lines.
These results agree with those reported in the literature, which showed that the genus Berberis contains tamarixetin, rutin, and caffeic acid, all known for their cytotoxic effects against cancer cell lines, including HeLa cells, with IC50 > 100 µg/mL. Moreover, an alkaloid extract from a species indigenous to Algeria induced cell death and morphological changes.

Conclusions

B. hispanica is a medicinal plant of the family Berberidaceae that has been found to have pharmacological potential, as reported in several previous studies. This work was intended to study the chemical profile and the antiproliferative and antioxidant activities of the organic extracts obtained with two different extraction methods, maceration and Soxhlet. Based on the findings obtained, the studied plant exhibited potent antioxidant and antiproliferative activity toward human prostate and breast cancer cell lines. Moreover, our results showed that both extraction methods were closely similar in terms of the activities studied and, therefore, did not differ significantly. B. hispanica can serve society as it provides potentially active compounds that may find applications in the medical sector to control such diseases.
Data Availability Statement: The data used to support the findings of this study are available from the corresponding author M.B. |
Microwave permittivity, permeability, and absorption of Ni nanoplatelet composites. Ni nanoplatelets with an average diameter of 75 nm and an average thickness of 10 nm were coated with MnO2 by a simple solution-phase chemical method. The MnO2-coated Ni nanoplatelets were dispersed in paraffin wax to form composite samples of a magnetic filler dispersed in a nonmagnetic insulating matrix. The effect of the Ni nanoplatelet volume fraction on the complex permittivity, complex permeability, and microwave absorption of the composites was studied in the frequency range of 0.1-10 GHz. The complex permittivity of the composites with different volume fractions of Ni nanoplatelets is almost constant over the 0.1-10 GHz frequency range. The complex permeability of the composites shows several resonance peaks: besides the natural resonance peak, exchange resonance peaks are observed. The composite with a 17% volume fraction of Ni nanoplatelets has excellent microwave absorption properties, with a minimum reflection loss of −31 dB at 9.1 GHz for a thickness of 2 mm and a broad absorption bandwidth of 2.3-10 GHz (RL < −10 dB). The Ni nanoplatelets are a possible candidate as a high-performance microwave absorption filler. For the Ni nanoplatelet composites, magnetic loss is the dominant term for microwave absorption.
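Reflection loss figures like the −31 dB at 9.1 GHz quoted here are conventionally computed from the measured complex permittivity and permeability using the single-layer, metal-backed transmission-line model. A sketch of that standard formula; the material parameters below are placeholders, not the measured values of these composites:

```python
# Standard metal-backed single-layer absorber model:
#   Zin = sqrt(mu_r/eps_r) * tanh(j*2*pi*f*d*sqrt(mu_r*eps_r)/c)   (normalized)
#   RL(dB) = 20*log10 |(Zin - 1)/(Zin + 1)|
import cmath
import math

C = 2.99792458e8  # speed of light in vacuum, m/s

def reflection_loss_db(eps_r, mu_r, freq_hz, thickness_m):
    """Reflection loss in dB for a single absorber layer backed by metal."""
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * freq_hz * thickness_m * cmath.sqrt(mu_r * eps_r) / C)
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Placeholder complex parameters at 9.1 GHz (losses as negative imaginary parts)
rl = reflection_loss_db(10 - 2j, 1.5 - 0.8j, 9.1e9, 2e-3)
print(round(rl, 1))
```

Sweeping `freq_hz` with the measured dispersive eps_r(f) and mu_r(f) reproduces the RL-versus-frequency curves from which the minimum RL and the RL < −10 dB bandwidth are read off.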
package org.drools.modelcompiler.benchmark;
import java.util.concurrent.TimeUnit;
import org.drools.compiler.kie.builder.impl.ZipKieModule;
import org.drools.modelcompiler.CanonicalKieModule;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.builder.KieModule;
import org.kie.api.builder.KieRepository;
import org.kie.api.builder.ReleaseId;
import org.kie.api.builder.model.KieModuleModel;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;
@Fork(1)
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 5, time = 10, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class BuildFromKJarBenchmark {
public enum BenchmarkType {
DRL(false, false), MODEL(true, false), MODEL_WITH_EXPR_ID(true, true);
public final boolean useRuleModel;
public final boolean generateExprId;
BenchmarkType( boolean useRuleModel, boolean generateExprId ) {
this.useRuleModel = useRuleModel;
this.generateExprId = generateExprId;
}
}
@Param({"10000"})
private int numberOfRules;
@Param("50")
private int numberOfRulesPerFile;
// @Param({"DRL", "MODEL", "MODEL_WITH_EXPR_ID"})
@Param({"DRL"})
private BenchmarkType type;
public BuildFromKJarBenchmark() { }
public BuildFromKJarBenchmark( int numberOfRules, int numberOfRulesPerFile, BenchmarkType type ) {
this.numberOfRules = numberOfRules;
this.numberOfRulesPerFile = numberOfRulesPerFile;
this.type = type;
}
private KieServices kieServices;
private KieRepository kieRepository;
private ReleaseId releaseId;
private KJarWithKnowledgeFiles kjarFiles;
private KieModuleModel kieModuleModel;
@Setup(Level.Trial)
public void setUpKJar() {
kieServices = KieServices.get();
kieRepository = kieServices.getRepository();
releaseId = kieServices.newReleaseId("org.kie", "kjar-test", "1.0");
kjarFiles = BenchmarkUtil.createJarFile( kieServices, releaseId, numberOfRules, numberOfRulesPerFile, type );
kieModuleModel = BenchmarkUtil.getDefaultKieModuleModel( kieServices );
}
@Setup(Level.Invocation)
public void cleanUpRepo() {
kieRepository.removeKieModule(releaseId);
}
@TearDown(Level.Invocation)
public void tearDown() {
System.gc();
}
@Benchmark
public KieBase buildKnowledge(final Blackhole eater) {
final KieModule zipKieModule = type.useRuleModel ?
new CanonicalKieModule( releaseId, kieModuleModel, kjarFiles.getJarFile(), kjarFiles.getKnowledgeFiles()) :
new ZipKieModule( releaseId, kieModuleModel, kjarFiles.getJarFile());
kieRepository.addKieModule(zipKieModule);
if (eater != null) {
eater.consume( zipKieModule );
}
return kieServices.newKieContainer(releaseId).getKieBase();
}
}
Adult Prostitution Recidivism: Risk Factors and Impact of a Diversion Program The purpose of this study was to explore the risk factors for prostitution recidivism and the impact of a prostitution diversion program on it. Risk factors and recidivism were explored using chi-square tests, t tests, and survival analysis. Participants were 448 individuals who were arrested for prostitution and attended a prostitution-focused diversion program. Of the sample, 65 (14.5%) were rearrested for prostitution within the first 12 months after the arrest leading to their involvement in the diversion program. Prior arrest for prostitution, addiction to drugs and/or alcohol, and childhood physical abuse were found to be risk factors for prostitution rearrest. The relationship between program completion and recidivism was significant, with participants who completed all program requirements being less likely to have been rearrested. Future studies on risk factors for recidivism and program impact should analyze males and females separately, and should compare those who began sex work before age 18 with those who began after age 18. The program components could also be provided to women while incarcerated to compare risk factors and the impact on recidivism.
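The program-completion result above is a 2×2 association of the kind a chi-square test evaluates. As a self-contained sketch, here is the Pearson statistic for such a table; the counts are hypothetical, chosen only so the margins match the study's 448 participants and 65 rearrests:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]],
    plus a check that every expected cell count is at least 5."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    expected_ok = all(row * col / n >= 5
                      for row in (a + b, c + d) for col in (a + c, b + d))
    return chi2, expected_ok

# Hypothetical counts: rows = completers / non-completers,
# columns = rearrested / not rearrested (margins match the study: 448, 65)
chi2, ok = chi_square_2x2(10, 190, 55, 193)
# A chi2 above the 3.84 critical value (1 df, p = .05) indicates association.
```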
package com.adonis.ui.print;
import com.adonis.data.renta.RentaHistory;
import com.adonis.ui.MainUI;
import com.adonis.ui.converters.DatesConverter;
import com.vaadin.annotations.Widgetset;
import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.JavaScript;
import com.vaadin.ui.UI;
import com.vaadin.v7.ui.Table;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import java.io.File;
import java.io.FileOutputStream;
import java.util.List;
/**
* Created by oksdud on 13.04.2017.
*/
@Widgetset("com.vaadin.v7.Vaadin7WidgetSet")
public class PrintRentaUI extends UI {
@Override
protected void init(VaadinRequest request) {
Table table = new Table();
table.setContainerDataSource(MainUI.rentaHistoryCrudView.container);
table.setVisibleColumns("person", "vehicle", "fromDate", "toDate", "price", "priceDay", "priceWeek","priceMonth","summa", "paid");
// Have some content to print
setContent(table);
// createXLS("renta.xls", MainUI.rentaHistoryCrudView.objects);
// Print automatically when the window opens
JavaScript.getCurrent().execute(
"setTimeout(function() {" +
" print(); self.close();}, 0);");
}
public static File createXLSRenta(String fileName, List<RentaHistory> rents) {
try {
File file = new File(fileName);
if(file.exists()) file.delete();
HSSFWorkbook wb = new HSSFWorkbook();
HSSFSheet sheet = wb.createSheet("Excel Sheet");
HSSFRow rowhead = sheet.createRow((short) 0);
rowhead.createCell((short) 0).setCellValue("person");
rowhead.createCell((short) 1).setCellValue("vehicle");
rowhead.createCell((short) 2).setCellValue("fromDate");
rowhead.createCell((short) 3).setCellValue("toDate");
rowhead.createCell((short) 4).setCellValue("price");
rowhead.createCell((short) 5).setCellValue("summa");
// rowhead.createCell((short) 6).setCellValue("paid");
// Local row counter: the previous static field was never reset, so a second
// call to createXLSRenta would have started writing below the first run's rows.
int[] rowIndex = {1};
rents.forEach(rentaHistory -> {
HSSFRow row = sheet.createRow((short) rowIndex[0]++);
row.createCell((short) 0).setCellValue(rentaHistory.getPerson());
row.createCell((short) 1).setCellValue(rentaHistory.getVehicle());
row.createCell((short) 2).setCellValue(DatesConverter.timestampToString(rentaHistory.getFromDate()));
row.createCell((short) 3).setCellValue(DatesConverter.timestampToString(rentaHistory.getToDate()));
row.createCell((short) 4).setCellValue(rentaHistory.getPrice());
row.createCell((short) 5).setCellValue(rentaHistory.getSumma());
// row.createCell((short) 6).setCellValue(rentaHistory.getPaid());
}
);
FileOutputStream fileOut = new FileOutputStream(fileName);
wb.write(fileOut);
fileOut.close();
System.out.println("Data is saved in excel file.");
} catch (Exception ex) {
ex.printStackTrace();
}
return new File(fileName);
}
}
Cauchy Problem And Boundary-Value Problems For Multicomponent Cross-Diffusion Systems The paper considers the Cauchy problem and boundary-value problems for multicomponent cross-diffusion systems. Studying the global solvability of these problems, obtaining estimates for solutions, and analyzing the asymptotics of solutions for large times and the behavior of the free boundary, all without linearizing the equations, is an important problem. Obtaining approximate or asymptotic estimates for various types of solutions and for the free boundary makes it possible to simulate the processes under study numerically.
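As a toy illustration of the numerical simulation such estimates enable, one explicit Euler step for a constant-coefficient two-component cross-diffusion system can be written in a few lines (a sketch only: the systems treated in the paper are nonlinear and degenerate, and an explicit scheme is chosen here purely for brevity):

```python
def cross_diffusion_step(u, v, D, dt, dx):
    """One explicit Euler step for the coupled system
        u_t = D[0][0]*u_xx + D[0][1]*v_xx,
        v_t = D[1][0]*u_xx + D[1][1]*v_xx
    on a 1D grid; boundary nodes are held fixed (their Laplacian is set to 0)."""
    def lap(w):
        inner = [(w[i - 1] - 2 * w[i] + w[i + 1]) / dx ** 2
                 for i in range(1, len(w) - 1)]
        return [0.0] + inner + [0.0]
    lu, lv = lap(u), lap(v)
    u_new = [ui + dt * (D[0][0] * a + D[0][1] * b) for ui, a, b in zip(u, lu, lv)]
    v_new = [vi + dt * (D[1][0] * a + D[1][1] * b) for vi, a, b in zip(v, lu, lv)]
    return u_new, v_new

# A spatially constant state is an equilibrium of the scheme:
u0, v0 = [1.0] * 5, [2.0] * 5
u1, v1 = cross_diffusion_step(u0, v0, [[1.0, 0.2], [0.3, 1.0]], dt=1e-4, dx=0.1)
```

The off-diagonal entries of `D` are what make the system "cross"-diffusive: each component's flux depends on the other's gradient.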
Laminated materials such as composites are widely utilized to increase structural rigidity in a wide variety of products. For example, composites are generally utilized by the airplane construction industry to build structural members of airframes. In some of the most advanced aircraft, where high strength and rigidity and low weight are extremely important, composites may account for a significant portion of the airframe as well as the external surface or skin. Typically, these composites are constructed from a plurality of layers placed over a form. These layers are often referred to as partial or full plies. For structures exceeding the available material width, each layer is typically made up of a series of strips or courses of material placed edge to edge next to each other. Each ply may be in the form of woven fibers in a fabric, unidirectional fiber material or a variety of other conformations. Unidirectional fiber material is often termed “tape.” The fibers may be made from any of a multitude of natural and/or “man-made” materials such as fiberglass, graphite, Kevlar®, and the like.
While these plies may simply include the above described fibers, generally the plies are pre-impregnated with a resin. Resins are typically formulated to allow the ply to adhere to the form as well as to previously applied plies. If some plies do not adequately adhere to their respective substrate, such as the previously applied plies or the form, internal and/or external surface imperfections may result. Accordingly, in order to facilitate proper adhesion, pressure is typically applied to the plies during and/or after ply placement.
For relatively small items, a press may be employed. For example, some known presses utilize a vacuum debulking table. In such arrangements, following placement of the plies, the part, referred to as a layup, is placed on the debulking table, a membrane is placed over the layup, and a pump is employed to remove the air from the layup. As the layup is depressurized, a compressive force is applied by the atmospheric pressure and air within the layup is removed. However, as the size of the layup increases and/or permeability of the layup decreases, the use of debulking tables tends to become undesirably expensive and cumbersome.
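The compaction a vacuum debulking table can deliver is bounded by atmospheric pressure, which is why it suits modestly sized layups; a back-of-the-envelope sketch (the area and vacuum level are illustrative assumptions):

```python
ATM_PA = 101325  # standard atmospheric pressure, N/m^2

def max_compaction_force(area_m2, vacuum_fraction=0.9):
    """Upper bound on the compressive force on a layup of the given area when
    the pressure under the membrane is reduced by `vacuum_fraction` of an
    atmosphere; the surrounding air supplies the compaction."""
    return ATM_PA * vacuum_fraction * area_m2

# A 0.5 m x 0.5 m layup under a 90% vacuum:
force_n = max_compaction_force(0.25)  # about 22.8 kN, roughly 2.3 tonnes-force
```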
For relatively larger items, a rolling press may be employed. For example, in some known rolling presses, tape is dispensed from a dispensing head and then pressed onto the substrate surface with a compaction roller. While the exact amount of force exerted by the roller depends upon a variety of factors, a force equivalent to 100 kg or more is often utilized in certain applications. In order to exert this relatively large force while accurately placing plies, substantial support and guidance structures are generally required. Another disadvantage of such known rolling presses is that a correspondingly substantial support is required for the form in order to withstand the force exerted by the roller. These and other disadvantages associated with the relatively large forces employed by rolling press systems greatly increase the costs of producing composite items.
Accordingly, it is desirable to provide a method and apparatus capable of overcoming the disadvantages described herein at least to some extent.
package cn.itcast.connectors;
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;
import org.apache.flink.util.Collector;
/**
* Author itcast
* Desc Demonstrates the third-party RedisSink from Flink Connectors
*/
public class RedisDemo {
public static void main(String[] args) throws Exception {
//TODO 0.env
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
//TODO 1.source
DataStream<String> lines = env.socketTextStream("node1", 9999);
//TODO 2.transformation
SingleOutputStreamOperator<Tuple2<String, Integer>> result = lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
String[] arr = value.split(" ");
for (String word : arr) {
out.collect(Tuple2.of(word, 1));
}
}
}).keyBy(t -> t.f0).sum(1);
//TODO 3.sink
result.print();
FlinkJedisPoolConfig conf= new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build();
RedisSink<Tuple2<String, Integer>> redisSink = new RedisSink<Tuple2<String, Integer>>(conf,new MyRedisMapper());
result.addSink(redisSink);
//TODO 4.execute
env.execute();
}
public static class MyRedisMapper implements RedisMapper<Tuple2<String, Integer>>{
@Override
public RedisCommandDescription getCommandDescription() {
//The chosen data structure is key: String("wcresult"), value: Hash(word, count); the command is HSET
return new RedisCommandDescription(RedisCommand.HSET,"wcresult");
}
@Override
public String getKeyFromData(Tuple2<String, Integer> t) {
return t.f0;
}
@Override
public String getValueFromData(Tuple2<String, Integer> t) {
return t.f1.toString();
}
}
}
def visit(self, tree):
    """Collect variable references from every subtree, preserving the order
    in which they are first seen and dropping duplicates."""
    var_refs = []
    for subtree in tree.iter_subtrees():
        refs = self._call_userfunc(subtree)
        if refs is None:
            continue  # the user callback reported nothing for this node
        for ref in refs:
            if ref not in var_refs:
                var_refs.append(ref)
    return var_refs
/*
Given a 2D array, find the maximum sum rectangle in it. In other words find maximum sum over all rectangles in the matrix.
Input
First line contains 2 numbers n and m denoting number of rows and number of columns. Next n lines contain m space separated integers denoting elements of matrix nxm.
Output
Output a single integer, maximum sum rectangle.
Constraints
1<=n,m<=100
Sample Input
4 5
1 2 -1 -4 -20
-8 -3 4 2 1
3 8 10 1 3
-4 -1 1 7 -6
Sample Output
29
*/
#include <bits/stdc++.h>
using namespace std;
// O(n^4) solution - TLE
int max_sum(int n,int m,int **mat){
// find sum from each index to the end
int **storage=new int*[n];
for(int i=0;i<n;i++){
storage[i]=new int[m]{};
}
// O(n^4)
for(int i=0;i<n;i++){
for(int j=0;j<m;j++){
// find sum for each ~ [i][j] till end ~ [n-1][m-1]
for(int r=i;r<n;r++){
for(int c=j;c<m;c++){
storage[i][j]+=mat[r][c];
}
}
}
}
int overall_max=INT_MIN;
// select one start and 1 end
for(int i=0;i<n;i++){
for(int j=0;j<m;j++){
// find sum from that [i][j] till [r][c]
for(int r=i;r<n;r++){
for(int c=j;c<m;c++){
int curr_sum=0;
curr_sum+=storage[i][j];
if(r<n-1){
curr_sum-=storage[r+1][j];
}
if(c<m-1){
curr_sum-=storage[i][c+1];
}
if(r<n-1&&c<m-1){
curr_sum+=storage[r+1][c+1];
}
if(curr_sum>overall_max){
overall_max=curr_sum;
}
}
}
}
}
// arrays allocated with new[] must be released with delete[]
for(int i=0;i<n;i++){
delete[] storage[i];
}
delete[] storage;
return overall_max;
}
int max_sum_optimized(int n,int m, int **mat){
// select start and end col
// int *sum_storage = new int[n]{};
int overall_max=INT_MIN;
for(int s=0;s<m;s++){
int *sum_storage = new int[n]{};
for(int e=s;e<m;e++){
// find sum of all rows in between these start and end columns
// using previously stored sum to find next sum
for(int i=0;i<n;i++){
sum_storage[i]+=mat[i][e];
}
// now i have a storage of sums of rows between start and end columns
// use kadane's algo to find the max sum consecutive subarray and their indices in this sums of rows
int max_sum_kadane=INT_MIN;
int curr_sum_kadane=0;
for(int i=0;i<n;i++){
curr_sum_kadane+=sum_storage[i];
max_sum_kadane=max(max_sum_kadane,curr_sum_kadane);
if(curr_sum_kadane<0){
curr_sum_kadane=0;
}
}
overall_max=max(overall_max,max_sum_kadane);
}
delete[] sum_storage; // release the per-start-column row sums (was leaked)
}
return overall_max;
}
int main()
{
int n,m;
cin>>n>>m;
int **mat=new int*[n];
for(int i=0;i<n;i++){
mat[i]=new int[m];
}
for(int i=0;i<n;i++){
for(int j=0;j<m;j++){
cin>>mat[i][j];
}
}
// O(n^4)
// cout<<max_sum(n,m,mat)<<endl;
// O(n^3)
cout<<max_sum_optimized(n,m,mat)<<endl;
for(int i=0;i<n;i++){
delete[] mat[i];
}
delete[] mat;
return 0;
}
Orbital Zeeman effect: Signature of charge in the carriers of ferromagnetism By deriving quantum hydrodynamic equations for an isotropic single-band ferromagnet in a finite magnetic field, we find that a recently predicted massive mode splits under the action of the field. The splitting is a peculiarity of charged fermions and is linear in the field to leading order in q, bearing a resemblance to the Zeeman effect in this limit and providing a way to determine whether ferromagnetism arises from charged objects or not.
The U.S. is the world's largest holder of emergency crude stockpiles, but now it's selling off reserves, while China, the second-largest, is taking advantage of low crude oil prices to fill storage.
And earlier this month, for the first time in history, China bought crude oil from the U.S. Strategic Petroleum Reserves (SPR), scooping up 550,000 barrels for US$28.8 million. It's not a massive purchase, but it is unprecedented.
So while China is stockpiling emergency reserves at a time when crude prices are low, the U.S. appears to have come to the view that its SPR is no longer a critical part of energy security, or a critical element in the case of disruptions. It is, after all, expensive to keep up this storage, and part of the reason for the U.S. sell-off is to finance the upkeep.
Back in 1973-74, the Arab oil embargo, coming at a time of declining U.S. production, greatly affected the U.S. Hence, in 1975, the SPR was launched to safeguard the U.S. against any future supply disruptions. As of March 17, the SPR inventory held 693.4 million barrels of oil, below the all-time record of 727 million barrels held in 2009.
Ideally, with oil prices trading below $50 a barrel, it's a good time to buy, as China is doing. It is not only filling its storage tanks at these low prices, new storage capacity is also being added, thereby increasing the SPR.
Though China's push to store crude oil started relatively late in 2007, it has quickly ramped up its capacity. However, unlike the U.S., which regularly reports its data, the Chinese like to keep their SPR details a secret. Hence, most of the information available about China's SPR is only an estimate, and different agencies have arrived at different figures.
According to a June 29, 2016, research note by JPMorgan Chase & Co. analysts, including Ying Wang, the Chinese had built around 400 million barrels of capacity by the middle of last year, compared to its target of 511 million barrels.
Japan, the third largest SPR holder at about 324 million barrels, has neither added nor sold its stockpile aggressively in this oil crisis. It probably considers the current levels sufficient to tide through any temporary disruption, hence, has maintained status quo.
Though South Korea's SPR capacity is the fourth largest in the world, at 146 million barrels of oil, it doesn't use its complete capacity for SPR purposes. 92.6 million barrels is used for SPR; 26.6 million barrels of foreign oil is stored under various agreements; Korea National Oil Corp. trades use up 5.9 million barrels, and 800,000 barrels are for other commercial uses.
South Korea, however, has also taken advantage of low crude oil prices and increased its allocation for SPR purchases from Won 54.9 billion in 2015 to Won 90 billion in 2016.
Next in line is Spain, which has a capacity of 120 million barrels. It has also maintained status quo, holding 90 days of average domestic consumption in SPR, according to the EU policy.
Although it is not among the top five, India also plans to quickly ramp up its SPR to meet 90 days of net import coverage. Currently, Indian reserves hold 39.1 million barrels of oil, and the government plans to add another 91 million barrels of capacity by 2020. Once complete, India's SPR will be among the top 5.
An analysis of the above figures shows that, barring the U.S., all other nations are either maintaining the status quo or increasing their crude stockpiles.
But why is the U.S. selling?
Back in 2005, there was a call to increase the capacity of the U.S. SPR to 1 billion barrels of oil, however, as the shale boom took hold, many felt that the addition was not needed.
A U.S. Department of Energy report to Congress titled "Long-Term Strategic Review of the U.S. Strategic Petroleum Reserve" released in August of last year has suggested reducing the SPR from the current levels to a range of 530-600 million barrels, which is equal to about 60 days of supply.
The aging infrastructure of the SPR needs an upgrade at a likely cost of $2 billion, without which the facility might not be of any use in an emergency. Hence, Congress had passed a temporary bill in December to sell $375 million worth of oil from the SPR. The sale has been completed, and the major buyers are BP, which has purchased 5.4 million barrels of oil priced at $278 million, and Valero Marketing and Supply Co., which has purchased 1.6 million barrels priced at $83 million.
China has also purchased oil from the SPR sale through PetroChina International, the overseas trading arm of state-owned oil giant PetroChina, reports S&P Global Platts.
Another reason for the sale is that back in 2007, the U.S. imported about 6 million barrels from OPEC. But by 2015, imports from the oil cartel dipped to 2.9 million barrels. In a recent report, the EIA said that the U.S. could become energy independent by 2026.
"Yes, the U.S. could be completely, I think the phrase used at one time was energy independent," said EIA Administrator Adam Sieminski in a press conference announcing the report, Time reports.
The U.S. is shielded to a large extent from supply disruptions compared with the 1970s, and hence it is reducing its SPR. The remaining top SPR holders, on the other hand, are not yet energy independent, and are therefore either maintaining the status quo or adding oil to their SPRs at current levels.
Partial crosstalk cancellation in a multi-user xDSL environment In modern DSL systems, crosstalk is a major source of performance degradation. Crosstalk cancellation schemes have been proposed to mitigate the effect of crosstalk. However, the complexity of crosstalk cancellation grows with the square of the number of lines in the binder. Fortunately, most of the crosstalk originates from a limited number of lines on a limited number of tones. As a result, a fraction of the complexity of full crosstalk cancellation suffices to cancel most of the crosstalk. The challenge is then to determine which crosstalk to cancel on which tones, given a certain complexity constraint. This paper presents an algorithm based on a dual decomposition that solves this problem optimally. The proposed algorithm naturally incorporates rate constraints, and its complexity compares favourably to that of a known resource allocation algorithm, to which a multi-user extension is made to incorporate the rate constraints.
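The selection problem described here — given a complexity budget, choose which crosstalkers to cancel on which tones — is solved in the paper by dual decomposition; the naive baseline it improves on can be sketched as a greedy pick of the strongest couplings (the crosstalk powers below are made up):

```python
def select_cancellations(xtalk_power, budget):
    """Greedy baseline: xtalk_power maps (disturber_line, tone) -> received
    crosstalk power. Cancel the strongest pairs until the tap budget is spent."""
    ranked = sorted(xtalk_power.items(), key=lambda kv: kv[1], reverse=True)
    return {pair for pair, _ in ranked[:budget]}

# Two disturber lines, three tones, illustrative powers:
xtalk = {("line1", 1): 0.9, ("line1", 2): 0.1, ("line1", 3): 0.5,
         ("line2", 1): 0.05, ("line2", 2): 0.7, ("line2", 3): 0.02}
chosen = select_cancellations(xtalk, budget=3)
# Picks the three strongest couplings: line1/tone1, line2/tone2, line1/tone3
```

Unlike the paper's method, this heuristic ignores per-user rate constraints; it only illustrates why a small fraction of the full canceller captures most of the crosstalk.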
#ifndef _SCENEMANAGER_H_
#define _SCENEMANAGER_H_
#include "../render/d3d.h"
#include "../render/camera.h"
#include "../scene/model/celmodel.h"
#include "../render/light.h"
#include "../scene/model/bitmap.h"
#include "../render/materials/celshader.h"
#include "../render/text/text.h"
#include "../core/input.h"
#include "../scene/player.h"
#include "../scene/cube.h"
#include "../scene/sphere.h"
#include "sceneloader.h"
#include "scene.h"
#include <iostream>
using namespace std;
const int OBST_AMM = 8;
const bool FULL_SCREEN = false; //don't turn on if resolution is different from 800x600
const bool VSYNC_ENABLED = true; //false == no fps cap, true == 60fps cap // don't turn on if resolution isn't 800 x 600
const float SCREEN_DEPTH = 1000.0f;
const float SCREEN_NEAR = 0.1f;
class SceneManager
{
public:
SceneManager();
SceneManager(const SceneManager&);
~SceneManager();
bool Initialize(int, int, HWND, Input&);
void Shutdown();
bool Frame(int, int, float);
void Update(float);
void DetectInput(double time);
bool Render(float);
void projectObjects();
void restoreObjects();
void loadLevel(int level);
private:
//float camX, camY, camZ;
//float camXrot, camYrot, camZrot;
void ProjectScene();
void MovePlayerInScene();
void MoveCameraInScene();
void ResetCameraToIsometricView();
private:
D3D* m_Direct3D;
Camera* m_Camera;
CelShader* m_celShader;
Light* m_Light;
Input* m_Input;
Player* m_player;
Cube* m_Walls[3];
vector<Cube*> m_obstacles;
Sphere* m_sphere;
Scene* UKRlevel;
Scene* RUSlevel;
Scene* CurrentLevel;
HWND m_hwnd;
DIMOUSESTATE mouseLastState;
Text* m_Text;
//isometric view position and rotations for the camera.
const XMFLOAT3 fixedIsometricCameraPosition = XMFLOAT3(16, 21, -16);
const float fixedIsometricPitch = 0.6108f, fixedIsometrixYaw = -0.7853f;
bool isProjected;
unsigned long m_startTime;
bool CameraControl;
};
#endif
He had just played what was very likely his last Olympic match, with no events lined up for the next few months.
But for someone who knows a step back from elite-level badminton is on the cards, Derek Wong cut a content, collected figure yesterday.
Following the 18-21, 8-21 defeat by world No. 1 Lee Chong Wei that ended his second Olympic campaign, the 27-year-old Singaporean dropped the biggest hint yet that he could soon call time on his career.
The world No. 55, a silver medallist at the 2014 Commonwealth Games, ruled out a run for a third straight Games at the 2020 Tokyo edition.
"I'm going to slow down, or stop - it all depends," said Wong.
He later explained to The Straits Times that while his decision to take a step back may be unexpected, his body is unable to sustain the training that high-level badminton demands. He also felt his style of play is gradually becoming irrelevant for the game today.
He said: "The game is more about rallies and playing a patient game overall but my game has all along been about aggression, speed, power play, so changing has always been a big hurdle for me.
"I don't want to (block) the younger generation from coming up and participating in the Olympics. The Singapore Badminton Association has a plan for the next batch of juniors to come up, so I hope they can increase their level (of play) at a faster pace and go for it."
Wong took heart from the fact that his two Olympics have included matches against some of the best players, including world No. 13 Jan O Jorgensen of Denmark in 2012 and Lee yesterday.
Despite a fourth defeat in as many encounters, his strategy to move his Malaysian counterpart around the court yielded results for Wong, who posted a creditable effort in the first game and strung four straight points against the double Olympic silver medallist twice.
Said Wong, who tipped Lee to at least make the final: "It's not easy to sustain momentum like that against someone like Chong Wei.
"You can see the focus in his eyes for every point. He played a perfect second game, made very few mistakes and it was very difficult to get him out of position.
"I've managed to have quite a few good matches at the Olympics. It's not a bad run."
Team-mate Liang Xiaoyu also ended her maiden Olympic campaign yesterday after going down 21-17, 21-11 to South Korea's Sung Ji Hyun.
For the 20-year-old, the Olympics have proven to be a stage like no other. She said: "The nerves, the excitement you feel... you don't get it anywhere else. I wasn't as nervous today as I was yesterday, but was more anxious in trying to win points."
Updated. If you’re a FreedomPop customer wondering where in the hell that iPhone WiMAX sleeve you ordered is, CEO Stephen Stokols has an answer for you: talk to the Federal Communications Commission. Stokols told FierceWireless that the feds are holding its first 5,000 sleeves at customs while the FCC goes over an untested design element.
That iPhone shell is of course FreedomPop’s signature product. It’s a WiMAX modem that fits over the iPhone 4 and 4S, substituting its 4G connectivity for the 3G radios embedded within. Ultimately, FreedomPop’s plan is to turn the iPhone into a full-on softphone providing IP voice and messaging services through textPlus. The mobile virtual network operator’s (MVNO) key point of differentiation is that it provides its baseline services for free: 500 MB of free data each month, plus the opportunity to earn more.
It’s a bit hard to execute that plan, though, if the device providing that connectivity isn’t anywhere near its customers. FreedomPop started taking pre-orders for the device back in May, and it officially launched its beta service without the sleeve in September, shipping mobile hotspots to customers. Since then we’ve gotten several reports from GigaOM readers that they had pre-ordered the sleeve but had heard no word from FreedomPop on when it should arrive.
Stokols told Fierce he expects the sleeves to pass FCC approval in the next few weeks, allowing the virtual operator to ship its first batch of 5,000 iPhone modems, but he added that the delay has cost FreedomPop and its overseas manufacturing partner $550,000 so far. Unless Stokols is somehow valuing that shipment at $110 per sleeve, FreedomPop probably has a lot more than 5,000 sleeves waiting to come into the country.
Stokols added that FreedomPop plans to make it up to customers waiting for the iPhone shell by offering them a free 1 GB of data once they’re activated. And while the iPhone sleeve isn’t available, FreedomPop has begun selling (and shipping) its iPod sleeve, which fits over the iPod touch, effectively turning it into an all-IP smartphone.
How _long_ has this been held up in customs? I question their latest claim of “a few more weeks”, since they have been using that phrase since summer. Over and over, with definite dates. “We are about to ship.” “It will be in your hands by the end of November”, etc. It is like fusion power: it is 50 years off and always will be.
Since they took pre-payments, I also wonder how much money they have lost.
Thank you for at least _asking_ this question, though. I have been very frustrated that bloggers have continued treating all FreedomPop press releases about upcoming unreleased products as gospel, without mentioning the missing iPhone sleeves and the lack of accuracy of their predicted shipment.
Because you're worth it In the early days of commissioning mental health services, I tried to explain the value of positive outcome measures, only for purchasers to sneer 'It's not our job to make people happy'.1 Twenty years of scientific research since then has identified key dimensions of wellbeing,2 and the public health benefits of improving mental wellbeing are finally being recognised.3 I felt proud to contribute to two national initiatives: No Health Without Mental Health (aimed at public services)4 and Making the Case for Mental Wellbeing (aimed at policy makers).5 The strategy No Health Without Mental Health included the epiphany that the "Prime Minister, David Cameron, and the Deputy Prime Minister, Nick Clegg, have made it clear that the Coalition Government's success will be measured by the nation's wellbeing, not just by the state of the economy".4 Two declared aims of that strategy were to enable more people to 'have better mental wellbeing' and to implement 'a new national measure of wellbeing'.4 Much of the UK literature and early attempts to measure wellbeing were egocentric, focusing on an individual's feelings or circumstances. However, when the strategy was first being drafted, the Department of Health recognised that individual 'wellbeing cannot be seen in isolation from wider society'.6 Both mental wellness and misery can be observed at the family, neighbourhood, workplace or school levels, and resilience in the face of adversity may be related to a past history of wellbeing, current social solidarity or future hopes.
Embracing these shared understandings may be crucial for preserving our environment7 or adapting to our changing climate.8 This year, the Royal Statistical Society hosted a discussion of UK Measures of National Wellbeing (7th April 2014) where it was clear that a valid measure of wellbeing would need multiple perspectives. Compared to other high income countries, a worrying percentage of children and families in the United Kingdom share miserable experiences.9 For example, trajectories throughout development are influenced by parenting, and four out of ten children are now missing out on 'good' parenting.10 For some groups, the challenges of austerity bring insecurity and unhappiness: now 36% of children say their families have been affected by the economic crisis.11 The good news is that many young people are finding their voice: on 12th July 2014, the National Health Service (NHS) Youth Forum held its first national conference, Celebrating Positive Young Mental Health. There is plenty of room for improvement in adult wellbeing; for example, about 4.7 million people in the United Kingdom do not have a single close friend.12 The Liberal Democrat think tank CentreForum has taken a comprehensive approach to The pursuit of happiness: a focus on mental health and wellbeing should be embedded across the work of government, including in the formulation of policies affecting housing, education, employment, planning, welfare and policing.13 The All-Party Parliamentary Group on wellbeing economics subsequently made recommendations about specific areas of policy for the United Kingdom and for local government. This Group proposes that 'Government should publish a Wellbeing Strategy setting out the ultimate wellbeing objectives of policy and how it plans to deliver them', while local authorities should use 'wellbeing in outcomes-based commissioning'. |
Effect of chronic kidney disease on progression of clinical attachment loss in older adults: A 4-year cohort study. BACKGROUND People with chronic kidney disease (CKD) may have an increased risk of periodontal disease, but longitudinal evidence is sparse. METHODS This 4-year cohort study assessed the association between CKD and changes in periodontal health status, defined by attachment loss (AL) progression, among older adults. Participants were 388 community-dwelling Japanese adults who were 70 years old at baseline with 7053 teeth. Estimated glomerular filtration rate (eGFR) was calculated by using baseline serum creatinine concentration. AL at six sites for every tooth was recorded at baseline and follow-up examinations. Multilevel logistic regression models estimated the tooth-specific risk of AL progression (≥1 site exhibiting a ≥3 mm increase in AL) with baseline CKD (eGFR < 60 mL/min/1.73 m2 ) as the principal exposure. RESULTS At baseline, 27.8% of the study population (108/388 participants) had CKD. After 4 years, 21.8% of the studied teeth (1537/7053 teeth) exhibited AL progression. After applying inverse probability weighting and adjusting for potential confounders, including sex, use of devices for interdental cleaning, smoking, diabetes, tooth location, abutment for a removable denture, and highest AL, CKD was associated with significantly higher odds of AL progression (adjusted odds ratio: 1.73; 95% confidence interval: 1.15-2.60). CONCLUSIONS The results suggest that CKD increases the risk of periodontal disease progression in older community-dwelling Japanese adults. Additional studies with more complete information, as well as in other geographic areas and age groups, are necessary to further generalize the findings. |
Mechanism of activation of the ret proto-oncogene by multiple endocrine neoplasia 2A mutations Transforming activity of the c-ret proto-oncogene with multiple endocrine neoplasia (MEN) 2A mutations was investigated by transfection of NIH 3T3 cells. Mutant c-ret genes driven by the simian virus 40 or cytomegalovirus promoter induced transformation with high efficiencies. The 170-kDa Ret protein present on the cell surface of transformed cells was highly phosphorylated on tyrosine and formed disulfide-linked homodimers. This result indicated that MEN 2A mutations induced ligand-independent dimerization of the c-Ret protein on the cell surface, leading to activation of its intrinsic tyrosine kinase. In addition to the MEN 2A mutations, we further introduced a mutation (lysine for aspartic acid at codon 300) in a putative Ca(2+)-binding site of the cadherin-like domain. When c-ret cDNA with both MEN 2A and D300K mutations was transfected into NIH 3T3 cells, transforming activity drastically decreased. Western blot (immunoblot) analysis revealed that very little of the 170-kDa Ret protein with the D300K mutation was expressed in transfectants while expression of the 150-kDa Ret protein retained in the endoplasmic reticulum was not affected. This result also demonstrated that transport of the Ret protein to the plasma membrane is required for its transforming activity. |
#include <stdio.h>
#include <stdlib.h>
typedef struct node Node;
typedef struct node* tree_iterator;
struct node
{
int key;
tree_iterator parent, left, right;
};
int totalNodesInTree(tree_iterator root)
{
if(root == NULL) return 0;
return totalNodesInTree(root->left) + totalNodesInTree(root->right) + 1;
}
tree_iterator create(int key)
{
tree_iterator root = (tree_iterator)malloc(sizeof(Node));
root->key = key;
root->left = 0;
root->right = 0;
root->parent = 0;
return root;
}
tree_iterator insert(int key, tree_iterator root)
{
if(root==0) return create(key); /* empty tree: the new node becomes the root */
tree_iterator it = root;
while(1)
{
if(key == it->key)
{
printf("Node with same value exists!\n\nTerminating Program...\n\n");
exit(1);
}
else if(key < it->key){
if(it->left == 0) break;
it = it->left;
}
else{
if(it->right == 0) break;
it = it->right;
}
}
tree_iterator newNode = (tree_iterator)malloc(sizeof(Node));
newNode->key = key;
newNode->parent = it;
newNode->left = 0;
newNode->right = 0;
if(key<it->key) it->left = newNode;
else it->right = newNode;
return root;
}
tree_iterator minValueRoot(tree_iterator root)
{
if(0 == root) return root;
if(0 == root->left) return root;
return minValueRoot(root->left);
}
tree_iterator delete(int key, tree_iterator root)
{
if(0 == root) return root;
if(key < root->key) //the key is in the left branch
{
if(0 == root->left)
{
printf("Node with %d value does not exist!\n\nTerminating Program...\n\n", key);
exit(1);
}
root->left = delete(key, root->left);
return root;
}
else if(key > root->key) //the key is in the right branch
{
if(0 == root->right)
{
printf("Node with %d value does not exist!\n\nTerminating Program...\n\n", key);
exit(1);
}
root->right = delete(key, root->right);
return root;
}
else //the root has key
{
if(0 == root->left && 0 == root->right) //has no child
{
free(root);
return 0;
}
else if(0 == root->right) //has one child(left)
{
tree_iterator temp_node = root->left;
temp_node->parent = root->parent;
free(root);
return temp_node;
}
else if(0 == root->left) //has one child(right)
{
tree_iterator temp_node = root->right;
temp_node->parent = root->parent;
free(root);
return temp_node;
}
else //has two child
{
tree_iterator mvr = minValueRoot(root->right);
root->key = mvr->key;
root->right = delete(mvr->key, root->right);
}
}
return root;
}
void inorder(tree_iterator root)
{
if(root==0) return;
inorder(root->left);
printf("%d ", root->key);
inorder(root->right);
}
void postorder(tree_iterator root)
{
if(root==0) return;
postorder(root->left);
postorder(root->right);
printf("%d ", root->key);
}
void preorder(tree_iterator root)
{
if(root==0) return;
printf("%d ", root->key);
preorder(root->left);
preorder(root->right);
}
void printTree(tree_iterator root, int space)
{
if(0 == root) return;
printTree(root->right, space+4); // each level of depth adds 4 spaces of indentation
printf("\n");
for(int s=0; s<space; s++)
{
printf(" ");
}
printf("%d", root->key);
printTree(root->left, space+4);
}
int main()
{
tree_iterator root = create(10);
root = insert(20,root);
root = insert(5,root);
root = insert(1,root);
root = insert(8,root);
root = insert(6,root);
root = delete(5,root);
printf("Tree shape:\n");
printTree(root, 0);
printf("\nPreorder: ");
preorder(root);
printf("\nInorder: ");
inorder(root);
printf("\nPostorder: ");
postorder(root);
int nb_nodes = totalNodesInTree(root);
printf("\nTotal nodes: %d\n", nb_nodes);
}
|
Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective. PURPOSE This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. METHODS Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. RESULTS The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. 
NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images with 100% of standard dose across the pediatric phantom age spectrum, similar noise levels were obtained in the images at a dose reduction of 48% with 40% ASiR™ and a dose reduction of 82% with 100% ASiR™. CONCLUSIONS The authors' work was conducted to identify the dose reduction limits of ASiR™ for a pediatric oncology population using automatic tube current modulation. Improvements in noise levels from ASiR™ reconstruction were adapted to provide lower radiation exposure (i.e., lower mA) instead of improved image quality. We have demonstrated for the image quality standards required at our institution, a maximum dose reduction of 82% can be achieved using 100% ASiR™; however, to negate changes in the appearance of reconstructed images using ASiR™ with a medium to low frequency noise preserving reconstruction filter (i.e., standard), 40% ASiR™ was implemented in our clinic for 42%-48% dose reduction at all pediatric ages without a visually perceptible change in image quality or image noise. |
from telegram import Update, ParseMode
from telegram.ext import CallbackContext
from datetime import datetime
import os
from settings import SQLITE_PATH
from ..models import User
from ..constants import Message, States
def statistics_callback(update: Update, context: CallbackContext):
    """Replace the triggering message with total and active user counts."""
    total_users = User.select().count()
    active_users = User.select().where(User.active == True).count()
    response = Message.statistics.format(total_users=total_users, active_users=active_users)
    context.bot.edit_message_text(
        chat_id=update.effective_chat.id,
        message_id=update.effective_message.message_id,
        text=response, parse_mode=ParseMode.HTML
    )


def mailing_callback(update: Update, context: CallbackContext):
    """Prompt for the mailing text and switch the conversation to the mailing state."""
    context.bot.edit_message_text(
        chat_id=update.effective_chat.id,
        message_id=update.effective_message.message_id,
        text=Message.mailing
    )
    return States.prepare_mailing


def backup_callback(update: Update, context: CallbackContext):
    """Send the SQLite database file as a document, or report that it is missing."""
    date = datetime.today().strftime('%d.%m.%Y')
    context.bot.delete_message(
        chat_id=update.effective_chat.id,
        message_id=update.effective_message.message_id
    )
    if os.path.isfile(SQLITE_PATH):
        with open(SQLITE_PATH, 'rb') as file:
            context.bot.send_document(
                chat_id=update.effective_chat.id,
                caption=Message.backup.format(date),
                document=file
            )
    else:
        context.bot.send_message(
            chat_id=update.effective_chat.id,
            text=Message.database_not_found
        )
|
Numerical Modelling of Plate Load Tests on Unsaturated Silt Loams in Laboratory Stands The paper summarizes the results of the numerical modelling of plate load tests on unsaturated silt loams in laboratory conditions using two different constitutive models and different software packages. The first part of the paper presents laboratory measurements on large specimens of compacted unsaturated silt loams with a constant degree of saturation during the experiments. It comments on the issues of the classical interpretations of static plate load tests, including the influence zone theory, and further issues with the calibration of numerical models using the parameters gained from the classical approach. The second part of the paper, dedicated to the problems of numerical modelling of these soils, presents results of the Cam-Clay constitutive model modified for capturing the influence of the moisture content implemented in the SIFEL software package and results of the Hypoplastic constitutive model for clays implemented in the GEO 5 software package. The paper presents a comprehensive analysis of the advantages and drawbacks of both constitutive models in detail and comments on the possible issues when using them for more topologically complex tasks. Introduction Interpretation of the results of numerical modelling of unsaturated soils for complex tasks presents an enormous challenge to engineers to avoid neglecting known inaccuracies of the approximations due to the used constitutive models and to minimise the potential impacts of uncertainties. 
While the results of numerical modelling of saturated soils (fine, cohesive as well as coarse, cohesionless) can be interpreted with a relatively high degree of certainty, unsaturated soil mechanics and the numerical modelling of unsaturated soils still leave room for further improvement; however, significant improvement in the field of coupled heat and moisture transport modelling has been achieved using both phenomenological models and complex micromechanically based models. The presented work focused on the challenges linked with the numerical modelling of unsaturated soils subjected to moisture content changes, i.e. the modelling of a three-phase medium. The paper is founded on the results of full scale laboratory experiments on reconstituted silt loams of Czech Republic origin, which were used for calibration and confirmation and are presented in the second chapter. The third chapter of the paper introduces the problems resulting from the interpretation of static plate load tests and the acquisition of material parameters from such widely used and accepted experiments, which are usually carried out in-situ without sufficient information about the reach of the applied loading and the changes related to the moisture content variations. The second part of the paper is devoted to a brief introduction of the used constitutive models and the results obtained. Full scale laboratory experiments The governing idea of the experiments was to provide experimental results easily comparable with in-situ measured values. With similar loading/unloading behaviour measured on unsaturated soils in-situ and in the laboratory, an appropriate analogy could be used to predict the effect of wetting, suction cancellation or groundwater table variations. The obtained results served for calibration and confirmation of the numerical models, as shown in chapter 4 of this paper. The stand for the tested soil sample was a massive reinforced concrete box without the top covering part, see figure 1. 
The bottom part contains a system of pipes 12.5 mm in diameter and is connected to a large water storage tank. The side walls are 200 mm thick and the box is constrained by steel beams at two levels. A steel frame is attached to the box to take the reaction force, and an additional small frame serves as an inertial body against which the deformations are measured. The internal dimensions of the box are 1.0 x 1.0 x 1.0 m. The stand was designed and constructed strictly for this purpose while taking into account the effect of vibrations during soil sample compaction as well as the impact of the load and water. Although the stand served well with respect to the needs of this thesis, recommended adjustments regarding water tightness and pore pressure measurements are presented at the end of this chapter. The load was applied through a hydraulic jack to a steel plate 20 mm thick and 300 mm in diameter (70.685×10^3 mm^2 surface area). As the maximum admissible load for the reaction frame was set at 80 kN, the plate was regarded as rigid within the load interval for analytical and numerical purposes. The applied load was measured in the hydraulic system (calibrated manometer, with confirmation using a pressure cell). Settlement of the plate was measured by two dial/digital gauges installed on the plate with guaranteed accuracies of 0.01 mm and 0.001 mm, respectively. Settlement of the shear zone was measured by two dial gauges at 50 mm and 100 mm distance from the edge of the rigid plate, see figure 2. Figure 2. Experiment setup - settlement gauges and lateral pressure probe, constraining frame and reaction frame Silt loams - description of the tested soils The experiments were carried out on silt loam samples reconstituted in the laboratory from excavated soils. The soil was inserted into the model stand in layers 200 mm thick. The first layer was placed on a nonwoven geotextile (200 g/m^2) protecting the outlets from the pipelines. 
Each layer was compacted by a vibrating plate. The time over which the vibrating plate acted on each layer was estimated by a compacting experiment carried out in advance. The soil contained a natural amount of moisture, as it was kept in plastic covers after being removed from the site. The time of storage was kept as short as possible, and the moisture content was monitored at selected intervals during the time of storage. The soil specimens were classified as saSi according to EN ISO 14688, or F5 - F, i.e. silt loam, according to the old Czech standard CSN 731001. The grain distribution of the three soil samples is shown in figure 3, from which it is also clear that the portion of fine particles is very close to the F3 - FS class. The optimal moisture content and maximal dry soil density were evaluated using a standard Proctor test. This particular soil was obtained from an excavation in the Prague city district near Prague Castle (Kings Park) and built into the model with a natural moisture content close to 7%. The compacting experiment prescribed a 19-minute duration of compaction with the vibrating plate for each 200 mm thick layer, achieving a soil density of approximately 1550 kg.m-3. This soil is considered highly collapsible and moisture sensitive, with high volumetric deformations due to swelling and shrinkage. Interpretation of results from the static plate load test It was experimentally confirmed that different results for the same soil will be obtained when using plates with different diameters and loading forces and employing the generally used Boussinesq formula to obtain the secant modulus of the subsoil, where r is the radius of the plate, stot represents the final settlement, fz is the load magnitude and ν is Poisson's ratio. As the formula was derived assuming an infinite half-space, it is very useful as it allows for explicit estimation of the secant modulus, but it is also limiting as it neglects the phenomenon of hysteresis of the soil's load/unload memory. 
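The secant-modulus interpretation above can be sketched numerically. The paper's equation (1) is not reproduced in this text, so the sketch below assumes the classical Boussinesq rigid-plate relation E0 = π·fz·r·(1−ν²)/(2·stot); the constant factor of the paper's actual formula may differ (the paper reports E0 = 73 MPa for the same inputs, while this assumed form gives about 63 MPa), so treat the function as illustrative only.

```python
import math

def secant_modulus(load_kpa, settlement_m, radius_m, poisson=0.4):
    """Secant modulus E0 [kPa] of the subsoil from a rigid circular plate
    load test on an elastic half-space (classical Boussinesq rigid-plate
    form; an assumption here, not the paper's exact Eq. 1)."""
    return math.pi * load_kpa * radius_m * (1.0 - poisson ** 2) / (2.0 * settlement_m)

# 300 mm plate (r = 0.15 m), 400 kPa load, 1.25 mm settlement, nu = 0.4
e0_kpa = secant_modulus(400.0, 0.00125, 0.15, poisson=0.4)
print(round(e0_kpa / 1000.0, 1), "MPa")  # about 63.3 MPa with this assumed form
```

Because the half-space derivation behind any such formula assumes a deep influence zone, a shallow influence zone makes the resulting E0 an overestimate, which is exactly the issue the influence-zone correction addresses.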
Therefore, it should only be used when the depth of the influence zone exceeds approximately 2 times the diameter. For shallow influence zones, the resulting secant moduli will be overestimated, i.e. the subsoil would seem to be stiffer than it is. Influence zone depth for circular loads The governing idea for estimating the depth of the influence zone is the pre-consolidation of the soil, which is generally caused by the excavation, and the soil's ability to memorize the highest load it was subjected to. In this particular case, the pre-consolidation was achieved by the compaction process. The problem of estimating the influence zone depth H, according to the above-mentioned assumptions, can be substituted by the problem of calculating the vertical stress for complex geotechnical problems using, for instance, an elastic layer solution in the Westergaard manner. The following formula describes how to calculate the depth of the influence zone, where r represents the radius of the plate. The Fr() function can be introduced as follows and is plotted in figure 4: When applying the elastic layer theory presented for the circular foundation, we obtain an influence zone depth of H = 0.36 m, which does not satisfy the twice-the-diameter assumption, leading to overestimation of E. The differences in the secant moduli are significant. While equation 1 yields E0 = 73 MPa for 400 kPa loading and 1.25 mm settlement (for ν = 0.4), for a 0.36 m deep influence zone and 300 kPa pre-consolidation pressure we obtain E0 = 37 MPa, which was used in the numerical modelling as a starting point for the calibration process; as shown in the next chapter, it was also approximately the mean value for both approaches. Modified Cam Clay (SIFEL code) The implementation of the Modified Cam Clay (MCC) model prepared in SIFEL was described in detail in and the following description introduces it in a more general manner. 
The model follows Lewis and Schrefler's approach to coupled heat and moisture transfer while employing Darcy's and Fick's laws for moisture transfer, Fourier's law for heat transfer, the standard mass and energy balance equations and a modified concept of effective stress according to, i.e. one stress variable, as shown in the following equation, where σ′ is the effective stress, nSw is the volume fraction of water and nSg is the volume fraction of gas. In order to describe the deformation of the porous skeleton, or actually the rearrangement of grains, the standard constitutive equation is written in the rate form, where D_sk is the tangential matrix of the porous skeleton, ε̇ represents the strain rate and ε̇0 represents strains indirectly associated with stress changes, such as shrinkage and swelling, creep, etc., and also involves the strain of the bulk material due to pore pressure changes. Combining equations 4 and 5 while assuming negligible shear stress in the fluid, we obtain an expression where α represents Biot's constant and the suction s is defined in agreement with the volume fractions as follows. Material parameters and results Two different approaches were employed to simulate the plate load test. The first approach focuses on a good approximation of the loading path, while the unloading path is not considered for the calibration. The second approach neglects the initial loading cycle and focuses on the calibration of the rest of the experiment. A regular mesh consisting of quadrilateral finite elements using linear interpolation functions was employed. The loading plate is assumed to be rigid and the problem is simplified due to axial symmetry. The unchanging initial tangential stiffness matrix limits the possibilities of the code but also stabilizes and accelerates the computations. 
Hypoplasticity for clays (GEO 5 FEM code) The constitutive model used for the simulation of unsaturated silt loams with a steady no-flow boundary condition was developed by Masin and is based on a combination of classical critical state models and generalized hypoplasticity principles. The non-linear behaviour of the soil in this model is governed by generalized hypoplasticity, while the Matsuoka-Nakai failure surface was selected as the limit stress criterion. The normal compression line for isotropic compression is similar to the normal consolidation line (NCL) from the Modified Cam Clay model presented in chapter 4. The intergranular strain concept was not involved in the calculations carried out. In figure 9 above, the quantity pcr is defined as the mean stress at the critical state line at the current void ratio and pe* is the equivalent pressure at the isotropic normal compression line. The calibration procedure is described in detail in. Material parameters and results As in the case of the MCC model, the behaviour of the specimen before the first loading cycle was found to be difficult to simulate, as the overall pre-consolidation should be distributed in layers (due to compaction). This time only the second approach was applied, i.e. the initial loading cycle was neglected and the calibration procedure focused only on the rest of the experiment (Table 2). Figure 10. Horizontal (left) and vertical (right) displacement on the deformed specimen for the load 500 kPa In contrast to the MCC model, the calculated horizontal stress and the measured lateral pressure are very close (Figures 10, 11). Figure 11. Comparison of the measured and calculated settlement of the rigid plate using the hypoplastic constitutive model Conclusions The presented results demonstrate the capabilities of the used constitutive models to approximate the static plate load test, which was performed on silty loams in laboratory conditions. 
Despite the demanding calibration process, both of the models failed to successfully approximate the entire experiment, which includes two load/unload cycles, not to mention the follow-up experiments, which involved significant moisture changes and suction cancellation processes. |
package nl.wilbrink.password.generator;
import org.junit.Test;
import static org.assertj.core.api.Assertions.assertThat;
public class PasswordGeneratorTest {
private PasswordGenerator sut;
@Test
public void itShouldGenerateALowercasePassword() {
sut = new PasswordGenerator(8, 16, false, false, false);
String generatedPassword = sut.generatePassword();
assertThat(generatedPassword).containsPattern("[a-z]+");
assertThat(generatedPassword.length()).isBetween(8, 16);
}
@Test
public void itShouldGenerateALowerAndUppercasePassword() {
sut = new PasswordGenerator(8, 16, false, true, false);
String generatedPassword = sut.generatePassword();
assertThat(generatedPassword).containsPattern("[a-zA-Z]+");
assertThat(generatedPassword.length()).isBetween(8, 16);
}
@Test
public void itShouldGenerateSpecialCharacterPassword() {
sut = new PasswordGenerator(8, 16, true, true, true);
String generatedPassword = sut.generatePassword();
assertThat(generatedPassword).containsPattern("[a-zA-Z0-9]+");
assertThat(generatedPassword.length()).isBetween(8, 16);
}
}
|
#ifndef CHUNK_HPP
#define CHUNK_HPP
#include "Core/Common.hpp"
#include "Core/Graphics/Drawable.hpp"
#include "Core/GameObjects/Terrain/Terrain.hpp"
#include "Core/GameObjects/Grass/GrassBlade.h"
class Chunk {
public:
enum class PLACEMENT: int {
POSX = 0, NEGX = 1, POSZ = 2, NEGZ = 3,
};
float placement_x, placement_z;
float chunk_width, chunk_height;
Terrain *terrain;
float landHeight;
float seaLevel;
float grassDensity;
imap2d grassmap;
vector<GrassBlade> grass;
Chunk(float x, float z, float width, float height);
void setEdge(PLACEMENT placement, Chunk* neighbor_chunk);
void setGrassMatrix(glm::mat4 projection, glm::mat4 view, glm::vec3 viewpos, glm::vec3 lightpos);
void initialize_terrain(texture_vector const &other_textures, float tess_level, float terrain_h, int octave);
void initialize_grass(const Shader &grass_shader);
void initialize_tile();
void destroy();
void draw_terrain(const Shader &terrain_shader) const;
void draw_grass();
void draw_tile(const Shader &tile_shader) const;
};
#endif |
The four executive challenges of project-based strategy Purpose As a great deal of strategy execution takes the form of strategic projects, how you align these projects ultimately determines the success or failure of your strategy. Here, we discuss four executive challenges executives need to tackle to successfully manage a strategy in a project-based world. Design/methodology/approach Conceptual approach entailing illustrative case-examples Findings We find four executive challenges to tackle in order to successfully manage a strategy in a project-based world. Research limitations/implications As the study draws upon conceptual arguments, future studies need to assess the verisimilitude and boundary conditions of the challenges. Practical implications By thinking of a strategy through a project-based lens, and understanding the challenges thereof, executives should be better able to bridge strategy formulation and execution. Social implications A project-based approach to strategy is not necessarily limited to a for-profit sector; NGOs and governmental organizations may similarly learn from and draw upon a project-based approach to strategy. Originality/value As little research within strategy has explicitly conveyed a project-based lens, the study emphasizes a novel approach to strategy. |
# Keyboard shift decoder: the typist's hands were shifted one key to the
# left ('L') or right ('R'); recover the intended text by taking the
# neighbouring key on the keyboard row string for every typed character.
s1 = "qwertyuiop[asdfghjkl;'zxcvbnm,./"
lr = raw_input()  # shift direction: 'L' or 'R'
s = raw_input()   # the typed (shifted) text
resp = ""
for a in s:
    i = s1.index(a)
    # hands shifted right -> intended key is one to the left, and vice versa
    resp = resp + s1[i + (-1 if lr == 'R' else 1)]
print resp |
Fc receptors for IgG on human neutrophils: analysis of structure and function by using monoclonal antibody probes. Structural and functional characteristics of Fc receptors for IgG (Fc gamma) on human neutrophils were examined with two monoclonal antibody probes specific for the Fc gamma receptors, Leu 11b and 3G8. To determine the distribution, density, and membrane mobility of the Fc gamma receptor, we used immunogold staining techniques, flow cytometry analysis, and fluorescence microscopy. Both 3G8 and Leu 11b inhibited several cell functions, thereby depicting the regulatory role of the Fc gamma receptor in mediating neutrophil activities. Among the functions studied were release of lysosomal enzymes, release of superoxide anion (O2-), and Fc-dependent rosette formation and phagocytosis. The densities of Fc gamma determinants recognized by Leu 11b and 3G8 on cells from a patient with chronic myelogenous leukemia were less than the density of epitopes on neutrophils from a normal individual. Taken together, the detailed analysis of physical and functional aspects of the Fc gamma receptor on neutrophils described in this study serve as a model for further assessment of the use of Fc gamma phenotyping of cells as a diagnostic tool. |
/**********************************************************************
* Copyright (c) 2008-2014, Alliance for Sustainable Energy.
* All rights reserved.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
**********************************************************************/
#ifndef MODEL_SETPOINTMANAGERMIXEDAIR_IMPL_HPP
#define MODEL_SETPOINTMANAGERMIXEDAIR_IMPL_HPP
#include <model/HVACComponent_Impl.hpp>
#include <boost/optional.hpp>
namespace openstudio {
namespace model {
namespace detail {
class MODEL_API SetpointManagerMixedAir_Impl : public HVACComponent_Impl {
Q_OBJECT;
Q_PROPERTY(std::string controlVariable READ controlVariable WRITE setControlVariable);
public:
SetpointManagerMixedAir_Impl(const IdfObject& idfObject, Model_Impl* model, bool keepHandle);
SetpointManagerMixedAir_Impl(const openstudio::detail::WorkspaceObject_Impl& other,
Model_Impl* model,
bool keepHandle);
SetpointManagerMixedAir_Impl(const SetpointManagerMixedAir_Impl& other, Model_Impl* model,bool keepHandles);
virtual ~SetpointManagerMixedAir_Impl();
virtual const std::vector<std::string>& outputVariableNames() const;
virtual IddObjectType iddObjectType() const;
virtual boost::optional<ParentObject> parent() const;
virtual std::vector<ModelObject> children() const;
std::string controlVariable();
void setControlVariable( std::string value );
boost::optional<Node> referenceSetpointNode();
void setReferenceSetpointNode(Node& node );
boost::optional<Node> fanInletNode();
void setFanInletNode(Node& node );
boost::optional<Node> fanOutletNode();
void setFanOutletNode(Node& node );
boost::optional<Node> setpointNode();
void setSetpointNode( Node & node );
bool addToNode(Node & node);
std::vector<openstudio::IdfObject> remove();
ModelObject clone(Model model) const;
private:
REGISTER_LOGGER("openstudio.model.SetpointManagerMixedAir");
};
} // detail
} // model
} // openstudio
#endif // MODEL_SETPOINTMANAGERMIXEDAIR_IMPL_HPP
|
UPDATE (November 18, 2016) MIT Libraries Stand Committed To Diversity, Inclusion, Equity & Social Justice
UPDATE (November 16, 2016) ARL Issues Statement, “Research Libraries and Archives Stand Committed to Diversity, Inclusion, Equity, Social Justice”
UPDATE (November 16, 2016) Association for Library Service to Children (ALSC) Announces Open Forum on Diversity, Inclusion, and Our Work, Post-Election
UPDATE: Public Library Association President Felton Thomas Jr. has released a statement on public libraries and inclusiveness.
—
Here’s the full text of the statement, which can also be accessed on the ALA website
Today American Library Association President Julie Todaro released the following statement regarding the invaluable role libraries and librarians will play within their communities as many search for common ground after the election.
“After a contentious campaign season filled with divisive rhetoric, we are now hearing from our members and in the news media about incidents of bigotry and harassment within our communities. From children acting out in schools to adults participating in violent acts, it is clear that our nation is struggling in the wake of this election.
“During times like these, our nation’s 120,000 public, academic, school and special libraries are invaluable allies inspiring understanding and community healing. Libraries provide a safe place for individuals of all ages and backgrounds and for difficult discussions on social issues. Our nation’s libraries serve all community members, including people of color, immigrants, people with disabilities, and the most vulnerable in our communities, offering services and educational resources that transform communities, open minds and promote inclusion and diversity.
“As an association representing these libraries, librarians and library workers, ALA believes that the struggle against racism, prejudice, stereotyping, and discrimination is central to our mission. As we have throughout our 140 year-long history, we will continue to support efforts to abolish intolerance and cultural invisibility, stand up for all the members of the communities we serve, and promote understanding and inclusion through our work.” |
// Build the header of the HTTP response for the given request
static int build_response(const http_request_t *request, http_response_t *response)
{
    char buffer[MAX_HEADER_VALUE_LENGTH];
    time_t now = 0;
    struct tm *t;

    /* Resolve the requested resource on disk and check it is servable */
    strcat(response->resource_path, request->uri);
    set_index(response->resource_path);
    response->status = check_response_status(request->method, response->resource_path);
    handle_error(response->status, response->resource_path);

    /* Mirror the HTTP version of the request */
    response->major_version = request->major_version;
    response->minor_version = request->minor_version;

    /* "Date" header, always expressed in GMT */
    now = time(NULL);
    t = gmtime(&now);
    strftime(buffer, sizeof(buffer), "%a, %d %b %Y %H:%M:%S %Z", t);
    set_response_field_name_and_value(response, "Date", buffer);

    set_response_field_name_and_value(response, "Server", "PUT HTTP");
    set_response_field_name_and_value(response, "Content-Type", get_extension(response->resource_path));

    /* "Content-Length" header from the size of the file on disk */
    sprintf(buffer, "%d", file_size(response->resource_path));
    add_to_total_size(atoi(buffer));
    set_response_field_name_and_value(response, "Content-Length", buffer);

    return 1;
}
Buffer Analysis for a Data Sharing Environment with Skewed Data Access Examines the effect of skewed database access on the transaction response time in a multisystem data sharing environment, where each computing node has access to shared data on disks, and has a local buffer of recently accessed granules. Skewness in data access can increase data contention since most accesses go to few data items. For the same reason, it can also increase the buffer hit probability. We quantify the resultant effect on the transaction response time, which depends not only on the various system parameters but also on the concurrency control (CC) protocol. Furthermore, the CC protocol can give rise to rerun transactions that have different buffer hit probabilities. In a multisystem environment, when a data block gets updated by a system, any copies of that block in other systems' local buffers are invalidated. Combining these effects, we find that higher skew does not necessarily lead to worse performance, and that with skewed access, optimistic CC is more robust than pessimistic CC. Examining the buffer hit probability as a function of the buffer size, we find that the effectiveness of additional buffer allocation can be broken down into multiple regions that depend on the access frequency distribution.
Maximizing Oviposition Efficiency when Mass Rearing the Coccinellid, Sasajiscymnus tsugae, a Predator of the Hemlock Woolly Adelgid, Adelges tsugae Sasajiscymnus tsugae Sasaji and McClure (Coleoptera: Coccinellidae) is a biological control agent imported for management of hemlock woolly adelgid, Adelges tsugae Annand. In mass rearing S. tsugae, accurate estimation of egg numbers is important because larvae are cannibalistic, especially at higher densities. To determine the most accurate means of estimating egg production, three brands of gauze were compared as oviposition substrates. Curad® gauze provided the most accurate estimate of egg production, and was the most cost effective brand. When eggs were collected from oviposition jars, similar adult yields of S. tsugae occurred between rearing cages infested with 1,650 eggs from gauze compared to eggs on the twigs from within these jars. Additionally, orientation of oviposition jars impacted S. tsugae egg production as significantly more eggs were produced in horizontally oriented oviposition jars. Introduction The hemlock woolly adelgid, Adelges tsugae Annand, (Hemiptera: Adelgidae) was accidentally introduced to the eastern US from Japan in the early 1950s (). A. tsugae is now considered a significant threat to both eastern hemlock, Tsuga canadensis L. (Pinales: Pinaceae) and Carolina hemlock (T. caroliniana) in the eastern United States (). A. tsugae feeding causes both needle loss and bud death which can result in tree mortality in as little as four years (Cheah and McClure 1998;). In a forest environment, biological control can be an environmentally and economically effective method of managing A. tsugae. In the United States, many native predators occasionally feed on A. tsugae, but none significantly impact this pest which multiplies rapidly via parthenogenesis (Montgomery and Lyon 1996;Wallace and Hain 2000). The first non-native biological control agent released for A.
tsugae management was Sasajiscymnus tsugae Sasaji and McClure (Coleoptera: Coccinellidae), which was introduced from Japan (Cheah and McClure 1998). Since 1997, S. tsugae has been mass reared and released as an important component of A. tsugae management programs (). When mass rearing S. tsugae, Palmer and Sheppard found that within oviposition containers, females would oviposit on both hemlock twigs and squares of gauze placed within A. tsugae infested hemlock bouquets. Sasajiscymnus tsugae eggs oviposited on gauze are easier to locate under microscopy than those on hemlock twigs, and provide a good estimate of total egg production. Accurate estimates of egg production are important because S. tsugae larvae cannibalize (Blumenthal 2002), and adult production decreases when >1,650 eggs are placed in a 61 cm x 61 cm x 49 cm larval rearing chamber (). The objectives of this study were to determine the type of gauze that provided the most accurate and cost effective estimation of oviposition when mass rearing S. tsugae, and to examine whether oviposition jar orientation had an impact on the number of eggs produced by S. tsugae. Oviposition jar protocol In these studies, oviposition jars (Figure 1) consisted of a 3.8 L glass jar with an 8 cm diameter hole cut in the plastic lid and covered with Noseeum netting (97 holes/cm²) (Equinox, www.equinoxltd.com). Each jar contained a bouquet of A. tsugae infested hemlock twigs (Figure 2). Bouquets were prepared by placing a water-soaked Wet Foam (FloraCraft, www.floracraft.com) cylinder into a 6 cm tall by 4 cm diameter plastic vial. The open end of the vial was then covered by Cling Wrap (www.glad.com) which was secured with a rubber band. Six 20-25 cm long A. tsugae infested hemlock twigs were inserted through the Cling Wrap into the Wet Foam. Three 5 x 5 cm pieces of gauze were placed within each bouquet. After bouquets were placed in jars, 10 female and 5 male sexually mature S. tsugae were added to each jar.
In the rearing program bouquets were removed from oviposition jars weekly. Eggs on both gauze and twigs were transferred to larval rearing boxes, and all adults were returned to a jar containing a new bouquet. Missing or dead adults were replaced with beetles of the appropriate gender that were at least 30 days old. Cheah and McClure have shown that at 25°C, S. tsugae males and females reached maturity at approximately 19 and 22 days, respectively. All oviposition jars were maintained in a controlled-environment (25 ± 1°C, 60 ± 5% humidity, and 16:8 L:D) room. Effect of gauze type Studies to determine whether S. tsugae exhibited a preference for oviposition on different brands of gauze under insectary conditions were conducted from 19-30 January (41 oviposition jars), 9-13 February (45 jars), and 16-20 February (58 jars). Curad® and Kling® (Johnson & Johnson Companies, Inc., www.jnj.com) were cotton, while First Aid Gauze Pads (Johnson & Johnson) were rayon/cellulose. The three pieces of gauze placed within each hemlock bouquet consisted of one piece of each brand. To control for position effect on oviposition the location of each brand was randomly assigned to the base, middle, or tip of each bouquet. After 1 week hemlock bouquets were removed from oviposition jars and the numbers of viable S. tsugae eggs on each gauze brand were recorded for each container. Standard ANOVA procedures with Tukey-Kramer means comparisons for all pairs were used to determine oviposition differences among gauze brands (SAS 2006). Gauze cost analysis Because cost of gauze varied among the three brands listed above, we examined the average cost of a 5 x 5 cm piece of gauze for each of these brands. Prices were obtained on 10 August 2004 at five local merchants (Wal-Mart, Kmart, Target, CVS Pharmacy, Eckerd Pharmacy) and three online retail sources (Amazon.com, Medico-school.com, and Westburypharmacy.com).
Standard ANOVA procedures with Tukey-Kramer means comparisons were used to determine cost differences among gauze brands (SAS 2006). Effect of eggs on gauze vs eggs on twigs Additional studies were conducted from 30 January to 23 June 2004 to determine the relationship between the number of S. tsugae eggs deposited on Curad® Basic Care gauze and the number of eggs deposited on hemlock twigs within individual oviposition jars. To do this 100 pairs of larval rearing chambers were established as in Conway et al.. S. tsugae eggs were counted on each gauze pad from a group of oviposition jars until a total of 1,650 eggs was reached. These gauze pads were then placed into one larval rearing chamber (Figure 3) of a pair creating a known-number egg cohort. All hemlock twigs from those same oviposition jars were then placed into the other larval rearing chamber of the pair creating an unknown-number egg cohort. Larval rearing chambers were then maintained in a controlled-environment (25 ± 1°C; 60 ± 5% humidity; 16:8 L:D) room. After a 35 ± 3 day developmental period, the number of adult S. tsugae emerging within each chamber was recorded. The numbers of adults emerging from known-number (gauze) egg cohorts were compared to the numbers emerging from unknown-number (twig) egg cohorts using Matched Pairs with Wilcoxon Sign-Rank (SAS 2006). Effect of jar orientation Because oviposition jars require less shelf space if standing vertically (0.027 m²/jar) than lying horizontally (0.093 m²/jar), a study was conducted from 6 December 2004 to 14 January 2005 to determine if jar orientation had an impact on egg production. On each day of this study equal numbers of oviposition jars were prepared as described above, then randomly assigned to either the standard horizontal position or a vertical position (Figure 4). A total of 60 jars were assigned to each group per replication, three replications were conducted, and only Curad® gauze pads were used in this study.
Hemlock bouquets were removed weekly from oviposition jars, and the numbers of viable S. tsugae eggs deposited on the Curad® gauze pads were recorded for each of the two orientations. Standard ANOVA procedures with Tukey-Kramer means comparisons for all pairs were used to determine oviposition differences between jar orientations (SAS 2006). Results and Discussion Effect of gauze type S. tsugae females oviposited significantly more eggs on Curad® than First Aid Gauze* Pads (Table 1). Although not significant, egg numbers were higher on Curad® than Kling® in all replicates (Table 1). Egg numbers did not differ significantly between Kling® and First Aid Gauze* Pads (Table 1). The mean number of eggs laid per individual gauze pad increased as the experiment progressed through time with highest numbers occurring in the last trial. Although it was not quantified, both Curad® (cotton) and Kling® (cotton) were lighter in texture and had thinner thread diameter than the First Aid Gauze* Pads (rayon/cellulose). Seagraves found that coccinellids prefer to lay their eggs close to a food source, and in this study the white cotton gauze closely mimics the color of adelgid ovisacs. Additionally, it was observed that the loose weave on the cotton gauze allowed females to attach eggs in and among the cotton threads. On the more tightly woven rayon/cellulose pads eggs were only found to be attached to the edges of the pads. In most cases, eggs were attached primarily on the top and bottom edges of these pads. Gauze cost analysis A single 5 x 5 cm piece of First Aid Gauze* Pad cost significantly more than both Kling® and Curad®, while Kling® cost significantly more than Curad® (Table 1). The higher number of eggs laid on Curad® gauze reduces handling time when counting eggs on gauze.
Additionally, Curad® gauze cost $5.10 per day less than Kling® gauze when 100 rearing jars are in production, providing a substantial cost savings of over $700.00 per year compared to other readily available gauze materials. The mean number of adults emerging per larval rearing box varied throughout the rearing season with highest adult production (> 825 adults per box) occurring in larval rearing boxes established between 2 February and 25 March (adults harvested between 12 March and 29 April). Palmer and Sheppard have shown that S. tsugae development is maximized when feeding on host eggs, and McClure reported that this is when A. tsugae sistens egg production is greatest. Effect of eggs on gauze vs eggs on twigs The largest number of S. tsugae adults produced from eggs on twigs in a single rearing box was 1207 adults on 2 February and from eggs on Curad® gauze was 1308 adults on 24 Feb. There was no significant difference between the mean numbers of S. tsugae adults emerging per rearing chamber across the season when starting with eggs on Curad® gauze (x̄ = 399.7 ± 14.7 (SEM)) compared to eggs on hemlock twigs (x̄ = 395.2 ± 14.7 (SEM)) (t 1, 99 = -0.30, P = 0.76, Correlation 0.77). Estimating that there is a 1:1 ratio of eggs on gauze to eggs on twigs allows rearing facility technicians to easily and accurately approximate the total number of eggs placed in larval rearing chambers. Palmer and Sheppard reported an approximate 1:1 ratio of S. tsugae eggs deposited on gauze and twigs in 3.8L glass jars, but did not report on the type of gauze used. The data from this study proved that use of Curad® gauze as an oviposition substrate is an effective means to efficiently gather approximately 1650 eggs to place in a larval rearing chamber. Adult adelgids aestivate from August to December in forest situations, and are poor quality food for S. tsugae (Cheah and McClure 2000).
To ensure use of the highest quality host material possible we began collecting at relatively low elevations in South Carolina and Georgia in December as A. tsugae broke aestivation, and progressed to higher elevations in North Carolina by the end of the rearing cycle in June as they entered aestivation. Slightly higher numbers of S. tsugae emerged from larval rearing containers containing A. tsugae eggs on twigs when host quality was poorest and slightly more S. tsugae emerged from containers with A. tsugae eggs on gauze when host quality was highest. Effect of jar orientation The number of S. tsugae eggs on gauze in oviposition jars maintained in a horizontal position was found to be significantly greater than in jars maintained in a vertical position in the second and third replications as well as in the overall data (Table 2). The orientation of hemlock twigs in horizontal oviposition jars was similar to the twig arrangement found naturally on hemlock trees. In all replications more eggs were laid in jars held in the horizontal position than the vertical position. When S. tsugae adults were first placed into rearing jars they tended to move around the sides and lid of the container. However, no eggs were found on the noseeum netting on the jar lids. Once adults had settled onto the twigs with A. tsugae, the predators did little wandering. In the laboratory, 100 horizontally positioned oviposition jars were typically maintained each week. Based on oviposition data, it would require an average of 117 vertical jars to equal the egg production of 100 horizontal jars. Although vertical jar placement would require approximately 1/3 the total shelf space (9.3 m² for horizontal versus 3.16 m² for vertical), vertical placement would also require 17% more staff time to recover eggs from the additional jars. Conclusions Successful establishment of S.
tsugae as a biological control agent for HWA relies on development of economically efficient rearing techniques that maximize production. When using quality A. tsugae as the food source, slight modifications to the initial S. tsugae rearing techniques presented by Palmer and Sheppard can increase the number of adult S. tsugae produced while reducing production costs. Techniques developed for mass rearing S. tsugae should provide a solid base for rearing similar coleopteran biological control agents for use in HWA management. |
for _ in range(int(input())):
    t = input()
    n = len(t)
    if len(set(t)) == 1:
        # A string made of a single repeated character is printed unchanged.
        print(t)
    else:
        d = []
        for i in range(n):
            d.append(t[i])
            # Between two equal adjacent bits, insert the opposite bit.
            if t[i] == '1' and i + 1 < n and t[i + 1] != '0':
                d.append('0')
            elif t[i] == '0' and i + 1 < n and t[i + 1] != '1':
                d.append('1')
        print("".join(d))
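The insertion rule in the loop above is easier to reason about when packaged as a function. A small sketch (the function name is mine, and a binary input string is assumed, which is what makes `!= '0'` equivalent to `== '1'`):

```python
def break_equal_pairs(bits: str) -> str:
    """Mirror the loop above: insert the opposite bit between every
    pair of equal adjacent bits; a string made of a single repeated
    character is returned unchanged."""
    if len(set(bits)) == 1:
        return bits
    out = []
    for i, cur in enumerate(bits):
        out.append(cur)
        if i + 1 < len(bits) and bits[i + 1] == cur:
            out.append("0" if cur == "1" else "1")
    return "".join(out)

print(break_equal_pairs("1100"))  # 101010
```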
|
Adaptive Relaying Method Selection for Multi-Rate Wireless Networks with Network Coding To maximize the throughput of multi-rate wireless networks, in this letter, we propose a scheme to adaptively select the relaying method among analog network coding (ANC), conventional network coding (CNC), no relaying (i.e. direct transmission without relaying), and plain routing. We first discuss the achievable data rates with different relaying methods under specific channel conditions, and then propose an algorithm with polynomial-time complexity that provides a sub-optimal solution to the relaying method selection problem. Simulation results show that the proposed scheme can effectively improve the network throughput compared with existing schemes, and its performance is near to the optimal performance. The results in this letter also provide some insights for the design of routing protocols in the future. |
At IFA 2018, Acer took the wraps off its latest Chromebook effort in the form of the Chromebook 514. The company’s newest Chromebook features a metallic chassis, a backlit keyboard, and a touchpad that uses Corning’s Gorilla Glass. According to Acer, the use of Gorilla Glass will help create a “slicker feel” compared to plastic, which is more commonly used.
If you’ve ever used a MacBook (which also uses a glass-covered trackpad) then you probably know Acer is right. The laptop will also sport a narrow bezel that will help make the laptop look smaller, but still manage to pack a 14-inch Full HD IPS display under the hood. Acer is also boasting that the Chromebook 514 will be able to squeeze out 12 hours of battery life, although admittedly your mileage may vary depending on what you do with it.
Acer will also be including USB-C ports which means that for those who are still using devices and accessories with USB-A connections, you’ll need to get some adapters or dongles. In terms of pricing, Acer has priced the Chromebook 514 at $350 and is expected to go on sale this coming October.
Filed in Computers. Read more about Acer, Chrome Os, Chromebook, IFA, IFA 2018 and Laptops. |
/**
bwtb3m
Copyright (C) 2009-2015 German Tischler
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
**/
#include <libmaus2/huffman/RLDecoder.hpp>
#include <libmaus2/util/ArgParser.hpp>
#include <libmaus2/util/OutputFileNameTools.hpp>
#include <libmaus2/fm/SampledISA.hpp>
#include <libmaus2/lf/LF.hpp>
#include <libmaus2/lf/MultiRankCacheLF.hpp>
#include <libmaus2/random/Random.hpp>
#include <libmaus2/timing/RealTimeClock.hpp>
#include <cstdlib>
#include <iostream>
int bwttestdecodespeed(::libmaus2::util::ArgParser const & arg)
{
std::string const infn = arg[0];
uint64_t const n = libmaus2::huffman::RLDecoder::getLength(infn,1/* numthreads */);
libmaus2::huffman::RLDecoder dec(std::vector<std::string>(1,infn),0/*offset */,1/* numthreads */);
std::string const inputtype = arg.uniqueArgPresent("T") ? arg["T"] : "bytestream";
libmaus2::autoarray::AutoArray<uint8_t> BB(n);
std::string const isaname = libmaus2::util::OutputFileNameTools::clipOff(infn,".bwt") + ".isa";
if ( inputtype == "bytestream" )
{
// read inverse sampled suffix array
libmaus2::aio::InputStreamInstance::unique_ptr_type isaISI(new libmaus2::aio::InputStreamInstance(isaname));
libmaus2::fm::SampledISA<libmaus2::lf::LF>::readUnsignedInt(*isaISI); // isa rate
libmaus2::autoarray::AutoArray<uint64_t> Aisa = libmaus2::fm::SampledISA<libmaus2::lf::LF>::readArray64(*isaISI);
isaISI.reset();
// decode BWT from run length encoding to byte array
std::pair<int64_t,uint64_t> P;
uint64_t z = 0;
while ( (P = dec.decodeRun()).first >= 0 )
{
if ( P.first >= 256 )
{
::libmaus2::exception::LibMausException se;
se.getStream() << "unsupported character value " << P.first << " for inputtype=bytestream" << std::endl;
se.finish();
throw se;
}
for ( uint64_t i = 0; i < P.second; ++i )
BB[z++] = P.first;
}
// compute maximum symbol
int64_t maxsym = -1;
for ( uint64_t i = 0; i < BB.size(); ++i )
maxsym = std::max(maxsym,static_cast<int64_t>(BB[i]));
// compute alphabet size
uint64_t const alsize = static_cast<uint64_t>(maxsym + 1);
for ( uint64_t tpar = 1 ; tpar <= 8; ++tpar )
{
// clone BWT
libmaus2::autoarray::AutoArray<uint8_t> B = BB.clone();
// compute rank dictionary (one bit vector per symbol)
libmaus2::lf::MultiRankCacheLF LF(B.begin(),B.size(),alsize);
// generate random data and read it back to remove B and LF from cache
{
libmaus2::autoarray::AutoArray<uint8_t> RANDin(128*1024*1024);
for ( uint64_t i = 0; i < RANDin.size(); ++i )
RANDin[i] = libmaus2::random::Random::rand8();
libmaus2::autoarray::AutoArray<uint8_t> RANDout = RANDin.clone();
}
uint64_t const step = (Aisa.size() + tpar - 1)/tpar;
uint64_t const par = (Aisa.size() + step - 1)/step;
uint64_t const tsteps = std::min((n + par - 1)/par,static_cast<uint64_t>(128ull*1024ull*1024ull));
libmaus2::autoarray::AutoArray<uint64_t> R(par);
for ( uint64_t i = 0; i < par; ++i )
R[i] = Aisa[i * step];
libmaus2::timing::RealTimeClock rtc; rtc.start();
for ( uint64_t i = 0; i < tsteps; ++i )
for ( unsigned int j = 0; j < par; ++j )
R[j] = LF.step(B[R[j]],R[j]);
double const t = rtc.getElapsedSeconds();
std::cout << tpar << "\t" << t/par << " " << (par * tsteps)/t << std::endl;
}
return EXIT_SUCCESS;
}
else
{
::libmaus2::exception::LibMausException se;
se.getStream() << "input type " << inputtype << " is currently not supported" << std::endl;
se.finish();
throw se;
}
}
int main(int argc, char * argv[])
{
try
{
::libmaus2::util::ArgParser const arg(argc,argv);
if ( arg.size() < 1 )
{
std::cerr << "usage: " << arg.progname << " <in.bwt>" << std::endl;
return EXIT_FAILURE;
}
return bwttestdecodespeed(arg);
}
catch(std::exception const & ex)
{
std::cerr << ex.what() << std::endl;
return EXIT_FAILURE;
}
}
|
In the wake of this week's U.S. election, the symbol of Star Wars' Rebellion had been adopted by many fans protesting the victory of Donald Trump - and now, two of the writers of next month's Rogue One: A Star Wars Story have referenced the relationship between that movie and the current political reality on social media.
Chris Weitz tweeted the following Friday morning -
Please note that the Empire is a white supremacist (human) organization
- Chris Weitz (@chrisweitz) November 11, 2016
- with Gary Whitta, the original writer on the project, responding in kind:
Opposed by a multi-cultural group led by brave women. https://t.co/UUcjwflMWG
- Gary Whitta (@garywhitta) November 11, 2016
Weitz's tweet followed his praise for this op-ed piece from CBR.com, which explicitly connects Rogue One to this week's U.S. elections, with writer Brett White calling the movie "the most relevant movie of 2016," explaining, "When I look at the 'Rogue One' trailers, I see what I want from America. I see a multicultural group standing strong together led by a rebellious and courageous woman. That's what we are working towards, and what we will continue to work towards no matter what. That's what America - a land created as a haven for the persecuted, to be able to realize their limitless dreams - was created to be."
As if to cement the connection, both Weitz and Whitta have changed their Twitter avatars to an image of the Rebel insignia with a safety pin through it, a reference to the symbol of solidarity with persecuted minorities that has gained currency in the U.S. following the election. (It came from the U.K., post Brexit vote, where minorities faced similar prejudice and attacks.)
Rogue One: A Star Wars Story opens in the U.S. Dec. 16.
Read more: 'Rogue One' Trailer: The Biggest 'Star Wars' Clue Was Easy to Miss |
Maximizing Bali Village Tourism Potential Using Penta-Helix Model The rapid growth of the tourism sector has made the government committed to implementing the principle of sustainable tourism development. To develop regional tourism, directed, integrated, cross-sectoral, and sustainable programs are necessary so that the economic benefits of tourism are felt evenly across the community. Bali, one of the most popular tourist destinations in the world, is used as an example for national destination development by the Indonesian government, which planned to create 10 new Bali destinations, but this scheme has been postponed due to the covid-19 outbreak that hit Indonesia. This study aims to explore the sustainable tourism strategy in Bali's village tourism sector using a qualitative approach based on content analysis of secondary data, since Bali has the largest number of tourism villages in Indonesia, applying the Penta-helix model (synergy between academia, business, community, government, and media). With the synergy and collaboration of these five central sectors (especially during the downturn in the tourism sector caused by the coronavirus outbreak), it is expected to raise public awareness of and maximize the potential for sustainable tourism in Bali.
package grpc
import (
"strconv"
"github.com/pkg/errors"
"google.golang.org/grpc/metadata"
)
const (
ApplicationIdHeader = "x-application-identity"
UserIdHeader = "x-user-identity"
DeviceIdHeader = "x-device-identity"
ServiceIdHeader = "x-service-identity"
DomainIdHeader = "x-domain-identity"
SystemIdHeader = "x-system-identity"
UserTokenHeader = "x-user-token"
DeviceTokenHeader = "x-device-token"
)
type AuthData metadata.MD
func (i AuthData) SystemId() (int, error) {
return intFromMd(SystemIdHeader, metadata.MD(i))
}
func (i AuthData) DomainId() (int, error) {
return intFromMd(DomainIdHeader, metadata.MD(i))
}
func (i AuthData) ServiceId() (int, error) {
return intFromMd(ServiceIdHeader, metadata.MD(i))
}
func (i AuthData) ApplicationId() (int, error) {
return intFromMd(ApplicationIdHeader, metadata.MD(i))
}
func (i AuthData) UserId() (int, error) {
return intFromMd(UserIdHeader, metadata.MD(i))
}
func (i AuthData) DeviceId() (int, error) {
return intFromMd(DeviceIdHeader, metadata.MD(i))
}
func (i AuthData) UserToken() (string, error) {
return stringFromMd(UserTokenHeader, metadata.MD(i))
}
func (i AuthData) DeviceToken() (string, error) {
return stringFromMd(DeviceTokenHeader, metadata.MD(i))
}
func stringFromMd(key string, md metadata.MD) (string, error) {
values := md[key]
if len(values) == 0 {
return "", errors.Errorf("'%s' is expected in metadata", key)
}
return values[0], nil
}
func intFromMd(key string, md metadata.MD) (int, error) {
value, err := stringFromMd(key, md)
if err != nil {
return 0, err
}
intValue, err := strconv.Atoi(value)
if err != nil {
return 0, errors.WithMessagef(err, "parse '%s' to int", key)
}
return intValue, nil
}
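Since metadata.MD is essentially a map[string][]string, the extraction logic above can be exercised in a standalone sketch without the gRPC dependency. The local `md` type below stands in for metadata.MD, and `intFromMd` reproduces the helper in this file:

```go
package main

import (
	"fmt"
	"strconv"
)

// md mirrors google.golang.org/grpc/metadata.MD, which is just a map
// from lower-cased header names to slices of values.
type md map[string][]string

// intFromMd reproduces the helper above: take the first value stored
// under key and parse it as an int.
func intFromMd(key string, m md) (int, error) {
	values := m[key]
	if len(values) == 0 {
		return 0, fmt.Errorf("'%s' is expected in metadata", key)
	}
	return strconv.Atoi(values[0])
}

func main() {
	m := md{
		"x-user-identity":   {"42"},
		"x-system-identity": {"7"},
	}
	userID, _ := intFromMd("x-user-identity", m)
	_, err := intFromMd("x-device-identity", m) // header was never set
	fmt.Println(userID, err != nil)
}
```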
|
WASHINGTON (Reuters) - A possible deal over Iran’s nuclear program that would phase out economic sanctions against Tehran is unlikely to flood world markets with more oil any time soon, despite Iran’s declared intention to claw back market share lost because of the curbs.
Malta-flagged Iranian crude oil supertanker "Delvar" is seen anchored off Singapore March 1, 2012. REUTERS/Tim Chong
Negotiators are still working out details of the deal they aim to seal by the end of June, but it would almost certainly lift sanctions only in stages, deferring even a partial return of Iranian crude exports until at least 2016, according to market experts, former U.S. officials, and Western diplomats.
Progress in talks in Switzerland this month has contributed to a more than 10 percent slide in oil prices over the past week as some traders and analysts brace for up to 1 million barrels per day (bpd) of Iranian crude hitting markets, potentially doubling the estimated global supply surplus.
Many focus on how quickly Iran can technically resume pumping oil to pre-sanctions levels, assuming shipments could follow quickly and brushing off concerns about a diminished customer base and potentially neglected oil fields.
What oil bears may underestimate is the hurdles on the diplomatic path to Iran’s return to world energy markets.
“Don’t expect to open the tap on oil,” one Gulf-based Western diplomat told Reuters. It is much easier to lift financial sanctions because so many components of Iran’s oil trade have been targeted, the diplomat said.
To be sure, a nuclear deal could allow some Iranian oil to return to the market quickly. A Reuters analysis of industry data shows Iran has up to 12 million barrels of oil in floating storage off its shores, and it has leased a storage facility in China to ship crude to India and South Korea.
Some energy experts estimate that Iran could raise its exports by 500,000-800,000 barrels per day (bpd) within six months of sanctions being lifted, but that is likely to be a result of a gradual build-up.
“The initial market response (to a deal) is likely to be quite bearish,” said Richard Mallinson, an analyst at consultancy Energy Aspects in London. “The attention is still on oversupply and there are enough people out there saying this could drive a rapid increase in Iranian volumes.”
Yet whatever emerges this year “may not be the kind of flood of oil...that some in the market are worried about,” Mallinson said.
FIGHTING BACK
Tehran is keen to recover market share lost under the U.S.-led sanctions that curbed the nation’s oil exports to just 1 million bpd from 2.5 million bpd in 2012.
“Under no circumstance will we reduce our global market share, even by one barrel,” Iranian oil minister Bijan Zanganeh said in November.
But for Iran to sell significantly more crude and repatriate hard currency earnings, many U.S. and European restrictions on its shipping, insurance, ports, banking, and oil trade would have to be lifted or waived.
Yet because they represent the bulk of world powers’ leverage over Iran, initial relief would probably be modest, said Zachary Goldman, a former policy advisor at the U.S. Treasury Department’s Office of Terrorism and Financial Intelligence, where he helped develop Iran sanctions policy.
Goldman predicted the first step would be to allow Tehran to use more of its foreign currency reserves abroad, now limited to specific bilateral trade.
“It’s discrete, and it doesn’t involve dismantling the architecture of sanctions that has been built up painstakingly over the last five years,” said Goldman, who now heads the Center on Law and Security at New York University.
Even with a nuclear deal, oil sanctions would probably effectively stay in place until early 2016, said Bob McNally, a former White House adviser under George W. Bush and now president of the Rapidan Group energy consultancy.
“I don’t see why a political deal in March or April or even technical implementation in July would, from a sanctions compliance perspective, enable Iran to ‘unload’ its stored oil any time soon, if doing so significantly increased exports above current levels,” McNally said in an email to Reuters.
Low oil prices may also limit how much Iran will want to ship abroad. Zanganeh has said that the country’s oil industry could survive prices as low as $25 per barrel.
The latest price slump, however, may cause Tehran to think twice about flooding the market even if sanctions are lifted, said David Goldwyn, who served as the U.S. State Department’s Special Envoy and Coordinator for International Energy Affairs from 2009 to 2011 and who now chairs the Atlantic Council’s Energy Advisory Board.
An Iranian national flag flutters during the opening ceremony of the 16th International Oil, Gas & Petrochemical Exhibition (IOGPE) in Tehran April 15, 2011. REUTERS/Morteza Nikoubazl
“It may depend on how desperate they are for cash,” he said.
Iran will also face stiff competition for its main Asian markets with fellow members of the Organization of Petroleum Exporting Countries (OPEC) such as Saudi Arabia, Kuwait and Iraq.
Kuwait and other Arab OPEC members have raised doubts whether Iran will manage to ramp up production quickly given that some of its oil fields have stayed idle because of the sanctions. |
A Case Study: Management of Ankylosing Spondylitis in Ayurveda Ankylosing spondylitis (AS) is a chronic inflammatory disorder of unknown cause that primarily affects the axial skeleton (predominantly the sacroiliac joints and spine); peripheral joints and extra-articular structures may also be involved in an asymmetrical pattern. The disease usually begins in the second or third decade; the male-to-female prevalence is approximately 3:1. More than 95% of patients with AS are HLA-B27 positive. NSAIDs are the first line of management and effectively relieve the symptoms. A few Ayurvedic medicines have been found to be effective in the management of AS. Here, a case study of AS managed by Ayurvedic treatment approaches is presented. A 21-year-old male patient came to the Kayachikitsa OPD (Room No. 9) of GACH, Patna. He complained of pain in both ankles (left > right), pain in both knee joints, and low back pain for 6 months. He was diagnosed with AS on the basis of its signs and symptoms, with HLA-B27 positive. He was managed with Ayurvedic medicines such as Panchatikta Ghruta Guggulu, Ekangveer Ras, Tab. Shallaki, Cap. Stresscom, Jrumax oil, Vaishwanar Churna, and Laxarid for 7 months and obtained relief in his signs and symptoms.
/*-
* #%L
* anchor-plugin-mpp-sgmn
* %%
* Copyright (C) 2010 - 2020 <NAME>, ETH Zurich, University of Zurich, Hoffmann-<NAME>
* %%
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
* #L%
*/
package org.anchoranalysis.plugin.mpp.segment.bean.optimization.statereporter;
import java.util.Optional;
import lombok.Getter;
import lombok.Setter;
import org.anchoranalysis.bean.annotation.BeanField;
import org.anchoranalysis.bean.annotation.OptionalBean;
import org.anchoranalysis.mpp.feature.energy.marks.VoxelizedMarksWithEnergy;
import org.anchoranalysis.mpp.segment.bean.optimization.feedback.StateReporter;
import org.anchoranalysis.mpp.segment.transformer.Compose;
import org.anchoranalysis.mpp.segment.transformer.StateTransformer;
import org.anchoranalysis.mpp.segment.transformer.StateTransformerBean;
import org.anchoranalysis.plugin.mpp.segment.bean.marks.voxelized.RetrieveDestinationFromVoxelized;
import org.anchoranalysis.plugin.mpp.segment.bean.marks.voxelized.RetrieveSourceFromVoxelized;
import org.anchoranalysis.plugin.mpp.segment.optimization.ToVoxelized;
/**
* Assumes we are interested in {@link VoxelizedMarksWithEnergy} as reporting type, and our
 * optimization-state is in the form {@code ToVoxelized<T>}
*
* @author <NAME>
* @param <T>
*/
public class StateReporterToPixelized<T>
extends StateReporter<ToVoxelized<T>, VoxelizedMarksWithEnergy> {
// START BEAN PROPERTIES
@BeanField @OptionalBean @Getter @Setter
private StateTransformerBean<T, VoxelizedMarksWithEnergy> secondary;
// END BEAN PROPERTIES
@Override
public StateTransformer<ToVoxelized<T>, VoxelizedMarksWithEnergy> primaryReport() {
return new RetrieveDestinationFromVoxelized<>();
}
@Override
public Optional<StateTransformer<ToVoxelized<T>, VoxelizedMarksWithEnergy>> secondaryReport() {
StateTransformer<ToVoxelized<T>, VoxelizedMarksWithEnergy> compose =
new Compose<>(new RetrieveSourceFromVoxelized<>(), secondary);
return Optional.of(compose);
}
}
|
package commands
import (
"fmt"
"net"
"strings"
"time"
)
// ReportStats summarizes a validation run: how many inputs passed,
// how many were rejected, and how long the run took.
type ReportStats struct {
Passed uint64 `json:"passed"`
Rejected uint64 `json:"rejected"`
Duration int64 `json:"run_duration_ms"`
}
// CheckResultFull describes the outcome of all checks for a single input.
type CheckResultFull struct {
Input string `json:"input"`
Valid bool `json:"valid"`
Checks []string `json:"checks_run"`
Passed []string `json:"checks_passed"`
Version uint `json:"version"`
}
// String renders the result as a fixed-width, single-line summary.
func (c CheckResultFull) String() string {
var result = new(strings.Builder)
var err error
// f writes formatted output, remembering the first write error and
// turning all subsequent writes into no-ops.
f := func(format string, arg ...interface{}) {
if err != nil {
return
}
_, err = fmt.Fprintf(result, format, arg...)
}
var valid = "invalid"
if c.Valid {
valid = "valid"
}
f("%-7s ", valid)
f("Checks:%-27s ", fmt.Sprintf("%+v", c.Checks))
f("Passed:%-27s ", fmt.Sprintf("%+v", c.Passed))
f("Version:%d ", c.Version)
f("%s", c.Input)
return result.String()
}
type CheckSettings struct {
Format string
CSV csvOptions
Check checkOptions
Workers uint64
}
type checkOptions struct {
Resolver net.IP
TTL time.Duration
InputIsEmailAddress bool
}
type csvOptions struct {
skipRows uint64
column uint64
}
|
/**
* Dozer-specific implementation of FieldMapping.
*/
public class DozerFieldMapping extends BaseDozerMapping implements FieldMapping {
private Model source;
private Model target;
/**
* Create a new FieldMapping.
*
* @param source source model field
* @param target target model field
* @param mapping the Dozer mapping this field mapping belongs to
* @param field the underlying Dozer field
*/
public DozerFieldMapping(Model source, Model target, Mapping mapping, Field field) {
super(mapping, field);
this.source = source;
this.target = target;
}
@Override
public Model getSource() {
return source;
}
@Override
public Model getTarget() {
return target;
}
@Override
public MappingType getType() {
return MappingType.FIELD;
}
} |
Planetfall: New Solar System Visions is a new book by Michael Benson featuring space images (Abrams).
Image Science and Analysis Laboratory, NASA JSC/Michael Benson/Kinetikon Pictures. © All rights reserved.
Sunset on the Pacific as seen from the International Space Station at an altitude of 235 miles. ISS 007 crew, July 21, 2003.
NASA GSFC/Michael Benson/Kinetikon Pictures. © All rights reserved.
View of the solar corona and magnetic loops during an eclipse of the Sun by the Earth. In this image, the outer plasma atmosphere of the Sun, 200 times hotter than the Sun's surface, is occulted by our planet. The graduated reduction in our view is due to the variable density of Earth's atmosphere, which blocks ultraviolet light. Solar Dynamics Observatory, April 2, 2011.
Avalanche in the northern polar region of Mars. On the left the avalanche creates a 180-foot-high dust cloud after falling nearly 2,000 feet from the scarp edge. To the right, black markings on frozen dunes give indications of a partial defrost. Photographed from an altitude of about 200 miles, the clarity of this view gives a clear indication of the spacecraft telescope's power. Mars Reconnaissance Orbiter, January 27, 2010.
NASA/JPL-Caltech/Michael Benson/Kinetikon Pictures. © All rights reserved.
Transit of Io across Jupiter. South is up in this view. Mosaic composite photograph. Cassini, January 1, 2001.
NASA/JPL-Caltech Michael Benson/Kinetikon Pictures. © All rights reserved.
Mimas against shadows cast by Saturn's rings on its northern hemisphere. In the lower third of the picture, we see the lit side of the rings from an oblique angle. North is up. Cassini, November 7, 2004.
Enceladus vents water into space from its south polar region. The moon is lit by the Sun on the left, and backlit by the vast reflecting surface of its parent planet to the right. Icy crystals from these plumes are likely the source of Saturn's nebulous E ring, within which Enceladus orbits. Mosaic composite photograph. Cassini, December 25, 2009. |
/**
* Sets up all dependencies before a test runs.
*/
@Before
public final void before() throws Exception {
File tempFolder = mTestFolder.newFolder();
mMetaManager = TieredBlockStoreTestUtils.defaultMetadataManager(tempFolder.getAbsolutePath());
mManagerView =
new BlockMetadataManagerView(mMetaManager, Collections.<Long>emptySet(),
Collections.<Long>emptySet());
Configuration.set(PropertyKey.WORKER_EVICTOR_CLASS, LRFUEvictor.class.getName());
Configuration.set(PropertyKey.WORKER_ALLOCATOR_CLASS, MaxFreeAllocator.class.getName());
Allocator allocator = Allocator.Factory.create(mManagerView);
mStepFactor = Configuration.getDouble(PropertyKey.WORKER_EVICTOR_LRFU_STEP_FACTOR);
mAttenuationFactor =
Configuration.getDouble(PropertyKey.WORKER_EVICTOR_LRFU_ATTENUATION_FACTOR);
mEvictor = Evictor.Factory.create(mManagerView, allocator);
} |
Scaling Behavior of Polymers at Liquid/Liquid Interfaces. The dynamics of a polymer chain confined in a soft 2D slit formed by two immiscible liquids is studied by means of molecular dynamics simulations. We show that the scaling behavior of a polymer confined between two liquids does not follow that predicted for polymers adsorbed on solid or soft surfaces such as lipid bilayers. Indeed, our results show that in the diffusive regime the polymer behaves like in bulk solution, following the Zimm model, and with the hydrodynamic interactions dominating its dynamics. Although the presence of the interface does not affect the long-time diffusion properties, it has an influence on the dynamics at short time scale, where for low molecular weight polymers the subdiffusive regime almost disappears. Simulations carried out when the liquid interface is sandwiched between two solid walls show that, when the confinement is a few times larger than the blob size, the Rouse dynamics is recovered. |
Can key interest rates decrease output gaps? The difference in GDP levels is crucial for macroeconomic forecasting to develop adequate and supportive fiscal and monetary policies. Most mismeasurements under current geoeconomic challenges can be explained by the difficulty in predicting recessions and the overestimation of the economy's potential capacity. The research aims to consider the GDP gap's effectiveness for the possible forecasting of monetary policy, particularly the central bank's interest rate. The study uses quantitative methods, particularly VAR modeling. The VAR model is chosen as a proven useful tool for describing the dynamic behavior of economic time series and for forecasting. The data sample covers the Eurozone, the United States, and Japan. A similarity is detected in output-gap behavior in the considered states; however, a variety in the responses to the financial crisis is revealed. This difference is due to the different sensitivity of the economies to the impact of monetary instruments. In particular, the Japanese economy has a relatively low level of sensitivity to changes in monetary instruments. As for the reactions of central banks to the current economic crisis caused by COVID-19, due to the global lockdown and the incredible decline in economic activity, almost all countries are in a situation of a negative GDP gap according to the paper's approach. However, the measures to mitigate it will vary in different states.
INTRODUCTION

Contemporary economic theory considers governments' and central banks' ability to regulate economic growth. An important part is played by fiscal and monetary policy. For a long time, the principal tools have been under debate regarding their importance and effectiveness (Chen & Górnicka, 2020). The final answer is still demanded. Both policies are based on the assumption that regulation's task is to achieve a general equilibrium, in which aggregate demand should equal not only aggregate supply but also the potential output of the state. Any deviation from the potential output level is considered a problem that requires government or central bank regulation. In particular, if the market equilibrium is less than the potential output, it indicates underemployment, inefficient attraction of resources, and inflation. On the contrary, if the economy produces more than the potential level, even more damaging deflationary processes begin. The use of fiscal and monetary instruments allows stimulating or discouraging economic activity. This shifts the equilibrium point towards the potential output level. This response is particularly important in an economic crisis, when large-scale stimulus measures are taken. The level of such measures largely depends on the magnitude of the deviation from the potential output level. Obviously, the larger the gap between actual and potential output, the greater the public policy measures needed to keep the economic situation under control. The magnitude of such measures, their role, and their timing are vital right now, when the world is experiencing one of the largest economic crises, caused by the coronavirus pandemic. Global lockdown, quarantine, restrictions on travel, social contacts, etc. have led to a significant industrial collapse in almost all countries, a significant reduction in energy consumption, and disruption of logistics and interstate trade.
The response of the leading central banks and governments of almost all countries in 2020 was almost synchronous: a significant loosening of monetary policy and the introduction of fiscal incentives for the economy. At the same time, this response only increases government debt and devalues the purchasing power of currencies, without addressing the global attempt to reach sustainable economic development. Thus, the question of the effectiveness of fiscal and monetary policy measures arises. The challenges highlighted are quite broad. Thus, this paper focuses on only one set of tools: monetary tools (particularly key interest rates, which are currently the main instrument of inflation-targeting central banks' monetary policy) used to stimulate the economy in response to the GDP gap.

LITERATURE REVIEW

An important aspect of the analysis is the correct definition of the GDP gap. As a rule, the GDP gap (output gap) means the difference between actual GDP and potential GDP. Scholars widely discuss gaps as a tool to forecast cycles and crises; e.g., Schüler provides empirical evidence suggesting that the credit-to-GDP gap is subject to spurious medium-term cycles, i.e., artificial boom-bust cycles with a maximum duration of around 40 years. However, most papers are devoted to regional combating of the GDP gap; e.g., Farrell tested, from a South African perspective, how the credit-to-GDP gap can be used as a guide to making decisions regarding the countercyclical capital buffer. This study confirmed that the mechanical application of the credit-to-GDP guide for the region is not advisable. In the same vein, Kauko and Tölö considered the trend deviation of the credit-to-GDP ratio ("Basel gap") as an early warning indicator of banking crises. They concluded that the 2008 crisis does not dominate the results, while the long sample almost eliminates filter initialization problems.
Analysis of fiscal instruments to combat GDP gaps is given, e.g., by Kharlamova, who discusses the criticism and presents some evidence that many of the criticisms are focused to an excessive degree on the role of potential output in EU fiscal surveillance, with the practice of surveillance being much more flexible and less rigid than many commentators tended to suggest. The dispute is still ongoing (Heimberger & Kapeller, 2020). Note that there are differences in the GDP gap calculation due to different understandings of the potential GDP level. On the one hand, the potential level of GDP is the one achieved under the conditions of complete and efficient involvement of resources and of technological and demographic development. However, not all of these factors can be fully taken into account, and, therefore, some researchers identify several approaches to potential GDP calculation:

1) institutional approach. Based on the Cobb-Douglas production function, the state's output is estimated under the condition of complete utilization of capital and labor, taking into account demographic changes and a constant level of technological improvement (e.g., Gazda & Godziszewski, 2011);

2) regression approach. It is used to calculate a certain long-term trend of actual GDP growth, which is extended to the following periods, with the trend coefficient determined by regression;

3) Okun's law approach. It relates the output gap to the deviation of unemployment from its natural rate and can be written as

(Y − q) / q = −β (u − u*),

where Y is actual output, q is potential output, u is actual unemployment, u* is the natural rate of unemployment, and β is a constant derived from the regression to show the relationship between deviations from natural output and natural unemployment.

Each of these approaches has its advantages and disadvantages. In particular, the first approach can generate the most accurate estimates but is associated with the difficult task of maintaining the statistical base.
Although relatively easy, the second approach does not indicate which interval to take to assess the long-term trend. The behavior of the potential GDP assessment will significantly depend on the choice of the initial sample. It is worth mentioning here that in early 2008, the US Federal Reserve estimated potential GDP in this way, expecting continued economic growth. However, the crisis of 2008-2009 showed that the economy had not yet reached the level of potential GDP, as shown by Coibion, Gorodnichenko, and Ulate. Moreover, another disadvantage of this method can be noted (Drehmann & Tsatsaronis, 2014). For example, at the end of the year the central bank calculates the level of potential GDP to shape policy, but after one quarter such a forecast becomes inaccurate due to the requirement to recalculate the potential GDP level caused by changes in the sample, which already contains data for the first quarter. As a result, the GDP gap changes for each subsequent quarter of the year, which leads to a change in monetary policy, respectively (Bank of England, 2014). However, these cases are quite technical and do not significantly affect the accuracy of forecasting. Finally, the third approach has the right to exist under a constant understanding of who works in the economy. However, the trends of recent decades show that more and more people choose the path of freelance or informal employment, which makes it impossible to determine not only real unemployment but also, accordingly, its natural level.

METHODS

Thus, the GDP gap plays an important role in shaping the fiscal and monetary policy of the state. It can be calculated in several ways, but the most suitable at this stage is the regression approach, taking into account the rules of sampling to determine the long-term trend of economic growth. The paper aims to determine the interaction between the central bank's main interest rate and the size of the GDP gap in the country.
So, the research hypothesis is formulated: the states' central banks can effectively regulate the GDP gap by altering interest rates. The obtained results are aimed at boosting the discourse on the topic in the international community of financiers, both academics and practitioners. For further analysis, the VAR model tool is used, showing the relationship between the country's main interest rate level and the GDP gap level. This toolkit allows quantifying the impact of interest rates and exploring it in dynamics through the use of impulse functions. Building a vector autoregression model is one of the most effective methods of analyzing the impact of financial and monetary transmission channels on key macroeconomic parameters. The VAR model allows investigating the relationship of each model variable's current values with the current and past (lagged) values of all variables included in the model. In other words, the model enables us to simultaneously assess many macroeconomic dependencies, taking into account their dynamics and relationships. The general technique for constructing a vector autoregressive model involves selecting inputs based on a cause-and-effect relationship analysis, after which a stationarity analysis is performed. In the case of non-stationary time series, reduction to a stationary form is carried out by taking differences of the corresponding order. The series are also checked for cointegration to take into account long-term relationships between variables. In this case, the vector autoregressive model will include the so-called error correction mechanism. The last step is to estimate the unknown parameters of the model and analyze the results.
In general, the VAR model is a system of n equations, which in matrix form can be written as follows:

y_t = c + A_1 y_{t-1} + ... + A_p y_{t-p} + B x_t + ε_t,

where y_t is a k-dimensional vector of endogenous variables, i.e., those estimated using the model; x_t is an m-dimensional vector of exogenous variables that reflect external influences on the model; c is the vector of constants; A_1, ..., A_p and B are coefficient matrices of dimensions (k × k) and (k × m), respectively, to be estimated; ε_t is the error vector, ε_t ~ N(0, σ²). VAR is an economic model that reflects the evolution of and interdependence between the variables of multidimensional time series, generalizing one-dimensional autoregressive models. VARs were first proposed by Sims as an alternative to structural models, i.e., models built on the economic laws of the economic system (for example, the Phillips relationship for unemployment and inflation, or the Taylor rule for the refinancing rate, etc.). Instead, in the VAR model all variables are considered simultaneously by including, for each variable, an equation that explains its evolution (dynamics) based on the previous values of that variable and the lagged values of the other variables of the model. This, at first glance, simple tool allows you to systematically and internally consistently reflect the dynamics of multidimensional time series. So, a long-term trend for the entire observation period from the first quarter of 1999 to the fourth quarter of 2019 is built to determine the GDP gap.

RESULTS

According to these data, the long-term trend of the GDP index is estimated. For this purpose, an ordinary regression is evaluated (Table 1). All models are adequate with significant coefficients. Thus, it can be concluded that the American economy developed much faster than the European and Japanese ones. As one can see, Japan has not yet overcome the consequences of the "lost decade". The rate of economic growth in the United States is more than twice that in Japan.
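For intuition about the impulse-response analysis used later in the paper, the response of a simple VAR(1) to a one-off shock can be computed by propagating the shock through the coefficient matrix: IRF_h = A^h × shock. The sketch below uses a hypothetical, stable 2×2 coefficient matrix linking a GDP-gap variable and a policy-rate variable; it is illustrative only and not the paper's estimated model.

```go
package main

import "fmt"

// matVec multiplies a 2x2 matrix by a 2-vector.
func matVec(a [2][2]float64, v [2]float64) [2]float64 {
	return [2]float64{
		a[0][0]*v[0] + a[0][1]*v[1],
		a[1][0]*v[0] + a[1][1]*v[1],
	}
}

// irf returns the response of a bivariate VAR(1) y_t = A y_{t-1} + e_t
// to a one-off shock over the given horizon: IRF_h = A^h * shock.
func irf(a [2][2]float64, shock [2]float64, horizon int) [][2]float64 {
	out := make([][2]float64, horizon+1)
	out[0] = shock
	for h := 1; h <= horizon; h++ {
		out[h] = matVec(a, out[h-1])
	}
	return out
}

func main() {
	// Hypothetical coefficient matrix: first variable is the GDP gap,
	// second is the policy rate.
	a := [2][2]float64{{0.6, -0.1}, {0.3, 0.5}}
	// Unit shock to the GDP gap.
	for h, v := range irf(a, [2]float64{1, 0}, 8) {
		fmt.Printf("h=%d gap=%.3f rate=%.3f\n", h, v[0], v[1])
	}
}
```

With all eigenvalues of A inside the unit circle the responses decay towards zero, which is the qualitative pattern behind Figures 4-6, where shocks die out after several quarters.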
Using the obtained coefficients, the GDP gap values were calculated:

gap_t = GDP_t − long_trend_t.

The result of calculating the GDP gap for these countries can be seen in Figure 2. As can be seen from Figure 2, the US, the EU, and Japan have synchronized economic cycles, which led to similar economic problems and reactions to change. The only difference is in the EU's reaction: it has used more fiscal instruments since the 2008 global financial crisis, while the US has used monetary stimulus. However, as can be seen from Figure 2, the difference is not critical. In turn, Japan does not rely on monetary measures at all. The graphs of the dependences between the GDP gap and the key rate dynamics are constructed to illustrate it (Figure 3).

Figure 2. Dynamics of GDP gaps of the considered countries (Source: calculated by the authors)

Thus, the reaction to GDP gaps in these countries differs significantly. In particular, the ECB tries to respond to a positive gap by raising rates. The Bank of Japan makes insignificant and extremely rare changes in interest rates. The Fed's response to the GDP gap is fairly standard: with a positive gap, rates increase; with a negative one, rates approach 0. However, the correct identification of the VAR model requires that the time series be stationary. Stationarity is understood as the invariance in time of the mathematical expectation, variance, and covariance of the time series. The stationarity requirement is necessary to obtain unbiased estimates of the VAR-model coefficient matrices by the least-squares method. Therefore, the input data are checked for stationarity using the augmented Dickey-Fuller test. The main essence of this test is to calculate the ADF statistics for the series itself, then for the first, second, etc. differences. The stationarity condition is satisfied if the value of the ADF statistics does not exceed the corresponding critical value.
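The trend-and-gap computation described above (fit a long-term trend to the GDP index by ordinary regression, then subtract it from the actual values) can be sketched as follows. The GDP figures are made up for illustration and are not the paper's data.

```go
package main

import "fmt"

// olsTrend fits y = alpha + beta*t by ordinary least squares
// over t = 0..n-1 and returns the intercept and slope.
func olsTrend(y []float64) (alpha, beta float64) {
	n := float64(len(y))
	var sumT, sumY, sumTY, sumTT float64
	for i, v := range y {
		t := float64(i)
		sumT += t
		sumY += v
		sumTY += t * v
		sumTT += t * t
	}
	beta = (n*sumTY - sumT*sumY) / (n*sumTT - sumT*sumT)
	alpha = (sumY - beta*sumT) / n
	return
}

// gaps returns gap_t = GDP_t - trend_t for each observation,
// where trend_t is the fitted long-term OLS trend.
func gaps(y []float64) []float64 {
	alpha, beta := olsTrend(y)
	out := make([]float64, len(y))
	for i, v := range y {
		out[i] = v - (alpha + beta*float64(i))
	}
	return out
}

func main() {
	// Hypothetical quarterly GDP index values.
	gdp := []float64{100, 102, 103, 106, 107, 110}
	fmt.Printf("%+.2f\n", gaps(gdp))
}
```

A positive entry means output above its long-term trend (a positive gap); a negative entry means output below trend.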
In this case, a series whose k-th differences are stationary is called an integrated series of k-th order and is denoted by I(k). A stationary series is denoted by I(0). The augmented Dickey-Fuller test is performed, which shows that all variables are non-stationary in levels, but stationary in first differences (see Table A3 in Appendix). Thus, it is appropriate to build a model in the first differences of the variables. The next step is to select the optimal number of lags for the VAR model. The results of the analysis are shown in Table 2. Thus, two lags are chosen for the European model, and one lag for the Japanese and American models. The results of the model evaluation are given in Table A4 in Appendix. The impulse functions for each country are constructed based on the evaluated models. For the European Union (Figure 4), it can be seen that the shock of the GDP gap plays a significant role for at least 9 quarters, reaching a maximum impact in the 4th quarter. Therefore, avoiding such shocks is an important task of the ECB. At the same time, the impact of changes in interest rates reaches a maximum in a year, gradually decreasing over 2.5 years. For Japan, the situation is significantly different (Figure 5). The impact of the GDP gap shock is observed for only 1 period, and the impact of interest rate changes for 2 quarters. Thus, the intervention of the Central Bank of Japan has a very short-term effect. The USA occupies an intermediate position between the considered states (Figure 6). The shocks have an obvious effect for 2 quarters, and the Fed rate's impact lasts for at least 3 quarters. At the same time, in contrast to Japan, the new rate determines the change itself for a long time (up to 6 quarters); meanwhile, this effect ends after two quarters in Japan. The variance decomposition in these models shows that the EU_RATE variance due to EU_GAP rises from 40% in the first period to 80% after 9 periods.
In the US and Japan, this percentage is between 10 and 20%, indicating that the rate remains a fairly effective means of combating GDP gaps in the EU. DISCUSSION Of course, several important aspects are not fully considered in this paper. In particular, the effect of other monetary policy channels is not sufficiently explored, e.g., changes in the money supply and various operations to maintain liquidity in financial markets through issued government bonds. The impact of monetary channels on other aspects of the economy, particularly inflation and unemployment, is not considered. Another unresolved issue is the study of the asymmetric effects of positive and negative GDP gaps. Monetary policy may differ during periods of economic boom and bust (e.g., Kangur, Kirabaeva, Natal, & Voigts, 2019). However, additional analysis is needed to determine the extent of these differences. Another aspect worth discussing is the changing role of interest rates in many countries. So far, almost all countries have faced either falling interest rates (Eastern European countries) or rates at the lowest possible values (Switzerland, Denmark, etc.). The research shows that this situation leads to limited opportunities for the state to stabilize the economy and to increased fiscal influence, which only complicates the problems with debt payments. Also, since low rates have persisted for a long time, the structure of the economy and of investment changes. In particular, at low interest rates, the incentives to keep deposits in banks are lost, and, accordingly, money accumulates with households without working in the economy. However, after some time, in the event of a revival of economic activity, these funds will be directed to the real economy, which will result in a surge in inflation. Another aspect of low rates in developed countries is also worth noting.
Due to the policy of cheap money and the practical impossibility of investing within developed countries, there is a demand for risky transactions abroad. Thus, significant speculative capital is formed, which markedly increases the volatility of emerging markets and increases the losses from an upcoming financial crisis. The results are obtained for developed countries (USA, Japan, EU countries) and differ slightly from those obtained for developing countries. In particular, Brandao-Marques, Gelos, Harjes, Sahay, and Xue showed that there are significant changes in the transmission of monetary rates to output and prices. However, it should be mentioned that other studies claim that the stabilization of short-term interest rates is the main operational goal of central banks, i.e., that frequent changes in rates generally negatively affect a country's economic performance. Thus, there is currently no clear economic consensus on the feasibility of actively changing interest rates, and therefore banks are using traditional tools to combat GDP gaps. CONCLUSION This paper examines the impact of monetary instruments, particularly the key rate of central banks, on the size of the GDP gap. The GDP gap is considered the deviation from the level of potential GDP, which is determined by the long-term GDP trend. For a correct assessment of the long-term trend, GDP values are normalized to the corresponding deflator level and translated into constant US dollars for comparability, which allows comparable GDP indices to be calculated. The GDP gaps for the European Union, Japan, and the United States were calculated on this basis. This showed that the size and direction of GDP gaps are quite similar for the countries considered.
The only difference is the response to the global financial crisis of 2008-2009. This difference is due to the different sensitivity of the economies to the impact of monetary instruments. In particular, the Japanese economy has a relatively low sensitivity to changes in monetary instruments. The VAR models investigate how the interest rate channel is related to GDP gap shocks in the analyzed countries. It has been shown that this channel has the greatest and most significant influence in the European Union. Despite the current negative rates, the economy's response to changes in rates remains active, accounting for 80% of GDP gap changes. This explains the so-called quantitative easing in these countries, as the main channel is no longer operational. This study aims to be valuable for studying the reactions of central banks to the current economic crisis caused by COVID-19. Due to the global lockdown and the dramatic decline in economic activity, almost all countries face a negative GDP gap. The considered models suggest that a simple reduction in rates may not help all countries: while in the EU such a policy seems promising, in Japan its effect will be very low, and in the USA, short-lived. Thus, given the huge GDP gap, other channels can be expected to be used, which will create the preconditions for the strengthening of the euro. ACKNOWLEDGMENT The paper is done in the framework of scientific faculty research 16F040-04 "Steady-state security assessment: a new framework for analysis", Taras Shevchenko National University of Kyiv (Ukraine).
package audit
import (
"encoding/csv"
"encoding/json"
"fmt"
"os"
"text/tabwriter"
"github.com/kubesphere/kubeeye/pkg/kube"
"github.com/pkg/errors"
)
func defaultOutput(receiver <-chan kube.ValidateResults) {
w := tabwriter.NewWriter(os.Stdout, 10, 4, 3, ' ', 0)
fmt.Fprintln(w, "\nKIND\tNAMESPACE\tNAME\tMESSAGE")
for r := range receiver {
for _, result := range r.ValidateResults {
if len(result.Message) != 0 {
s := fmt.Sprintf("%s\t%s\t%s\t%-8v", result.Type, result.Namespace, result.Name, result.Message)
fmt.Fprintln(w, s)
}
}
}
w.Flush()
}
func JSONOutput(receiver <-chan kube.ValidateResults) {
var output []kube.ResultReceiver
for r := range receiver {
for _, result := range r.ValidateResults {
if len(result.Message) != 0 {
output = append(output, result)
}
}
}
// output json
jsonOutput, _ := json.MarshalIndent(output, "", " ")
fmt.Println(string(jsonOutput))
}
func CSVOutput(receiver <-chan kube.ValidateResults) {
var output []kube.ResultReceiver
for r := range receiver {
for _, result := range r.ValidateResults {
if len(result.Message) != 0 {
output = append(output, result)
}
}
}
filename := "kubeEyeAuditResult.csv"
// create csv file
newFile, err := os.Create(filename)
if err != nil {
createError := errors.Wrap(err, "create file kubeEyeAuditResult.csv failed.")
panic(createError)
}
defer newFile.Close()
// write UTF-8 BOM to prevent print gibberish.
newFile.WriteString("\xEF\xBB\xBF")
// NewWriter returns a new Writer that writes to w.
w := csv.NewWriter(newFile)
header := []string{"name", "namespace", "kind", "message", "reason"}
data := [][]string{
header,
}
for _, receiver := range output {
var resourcename string
for _, msg := range receiver.Message {
if resourcename == "" {
contexts := []string{
receiver.Name,
receiver.Namespace,
receiver.Type,
msg,
receiver.Reason,
}
data = append(data, contexts)
resourcename = receiver.Name
} else {
contexts := []string{
"",
"",
"",
msg,
receiver.Reason,
}
data = append(data, contexts)
}
}
}
	// WriteAll writes multiple CSV records to w using Write and then calls Flush.
	if err := w.WriteAll(data); err != nil {
		fmt.Println("write audit results to kubeEyeAuditResult.csv failed:", err)
		return
	}
	fmt.Println("The result is exported to kubeEyeAuditResult.csv, please check it for audit result.")
}
|
Impact of Integration of Inclusive Education and Information and Communication Technology on the Learning Process of a Child with Down Syndrome The lives of people with Down syndrome are being facilitated by Information and Communication Technology (ICT) & Assistive Technology (AT). Assistive technology allows people with Down syndrome to participate in daily activities and become more autonomous and social. India is making crucial contributions to improving the lives of people with Down syndrome and promoting their participation in everyday social activities. However, the use of assistive technology to support this population needs to be improved. The primary goal of the current study is to assess the use of auxiliary aids in the teaching and learning of students with Down syndrome in inclusive schools and rehabilitation facilities in India. Furthermore, the effects of ICT & AT on improving the independence, performance, and social interaction of students with Down syndrome were investigated. To accomplish these goals, two distinct surveys were given to a non-random sample of teachers, experts, and families of people with Down syndrome in India. Overall, the findings indicate that implementing ICT & AT in the instruction and learning of students with Down syndrome can help them become more autonomous and sociable. In turn, this can promote independence, social interaction, and performance in people with Down syndrome. To realize the greatest and most long-lasting benefits of adopting ICT & AT, the abilities and talents of teachers, professionals, and families still need to be improved so that they can embrace the technology and achieve the best results. The study's findings include various suggestions for improving India's educational system for students with Down syndrome and other disabilities.
The study also makes a significant contribution to the theoretical literature by developing a novel model for analyzing the effects of ICT & AT on Down syndrome, a model that has rarely been developed in earlier work. Additionally, it has created new metrics that can be used in similar future research.
A new paper setting out proposals for a future customs relationship with the EU has been unveiled today by the Government in the first of a series of papers on the UK’s future partnership with the EU.
The document highlights the UK’s strong starting position and how we can build on the strong foundation through two broad approaches:
A highly streamlined customs arrangement between the UK and the EU, with customs requirements that are as frictionless as possible. This would aim to continue some existing arrangements we have with the EU, reduce or remove barriers to trade through new arrangements, and adopt technology-based solutions to make it easier for businesses to comply with customs procedures.
A new customs partnership with the EU by aligning our approach to the customs border in a way that removes the need for a UK-EU customs border. One potential approach would involve the UK mirroring the EU’s requirements for imports from the rest of the world where the final destination is the EU.
The paper also sets out new details on an interim period with the EU. The proposed model, which would mean close association with the EU Customs Union for a time-limited period, would ensure that UK businesses only have to adjust once to a new customs relationship. This would minimise disruption and offer businesses a smooth and orderly transition.
Secretary of State for Exiting the EU David Davis said:
The approaches we are setting out today will benefit both the EU and UK and avoid a cliff-edge for businesses and individuals on both sides.
The way we approach the movement of goods across our border will be a critical building block for our independent trade policy. An interim period would mean businesses only need to adjust once to the new regime and would allow for a smooth and orderly transition.
The UK is the EU’s biggest trading partner so it is in the interest of both sides that we reach an agreement on our future relationship. The UK starts from a strong position and we are confident we can deliver a result that is good for business here in the UK and across the EU.
Chancellor of the Exchequer, Philip Hammond said:
Our proposals are ambitious, and rightly so. They set out arrangements that would allow UK businesses to continue to trade with their European partners in the future, while expanding their markets beyond the EU.
And in the near term they will reassure people and companies that, the day after we leave the EU, they will still be able to go about their business without disruption as we make a smooth transition to our bright future outside the EU and deliver a Brexit that works for Britain.
The leading document crucially sets out that the UK will be guided by what delivers the greatest economic advantage to the UK, and by three key objectives: to ensure trade with the EU is as frictionless as possible, to avoid any form of hard border between Ireland and Northern Ireland, and to establish an independent international trade policy.
International Trade Secretary, Dr Liam Fox said:
Leaving the Customs Union will allow us to operate a fully independent trade policy in Britain’s national interest which will benefit UK businesses and consumers.
We will seek a new customs arrangement that ensures that trade between the UK and the EU remains as frictionless as possible and allows us to forge new trade relationships with our partners in Europe and around the world.
As we leave the EU and establish an independent trade policy, the Government will prioritise ensuring that UK and EU businesses and consumers can continue to trade freely with one another as part of a new free trade agreement. In 2016, UK imports from and exports to the EU totalled £553 billion.
The paper can be found here. |
// ElementsEqual reports whether lhs and rhs contain identical string values.
// Best case: O(1) when the slice lengths differ.
// Worst case: O(len(lhs)) when all elements are equal.
func ElementsEqual(lhs, rhs []string) bool {
if len(lhs) != len(rhs) {
return false
}
for i, v := range lhs {
if v != rhs[i] {
return false
}
}
return true
} |
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>
char *program_name;
void exit_with_error(int error, char *error_str)
{
    fprintf(stderr, "error: %s: %s\n", program_name, error_str);
    exit(error); /* exit with the caller-supplied code, not errno */
}
int main(int argc, char **argv)
{
    int ch; /* must be int: fgetc() returns EOF as an int */
FILE *fp;
int section_counter = 0;
    char id_str[11] = ""; /* up to 10 digits plus a NUL terminator */
unsigned int id_int = 0;
int id_i = 0;
char directory_prefix[] = "/run/user/";
char directory_to_make[sizeof(directory_prefix) + sizeof(id_str)];
program_name = argv[0];
fp = fopen("/etc/passwd", "r");
if (fp == NULL)
{
        perror("Error while opening the file"); /* perror appends its own newline */
exit(EXIT_FAILURE);
}
while((ch = fgetc(fp)) != EOF)
{
switch (ch)
{
case ':':
/* get id stored in the section */
id_int = atoi(id_str);
if (id_int >= 1000)
{
if (section_counter == 2)
{
int ret;
strcpy(directory_to_make, directory_prefix);
strcat(directory_to_make, id_str);
ret = mkdir(directory_to_make, S_IRWXU);
if (ret == -1) {
switch (errno) {
case EEXIST:
break;
case EACCES:
exit_with_error(2, "the parent directory does not allow write");
case ENAMETOOLONG:
exit_with_error(2, "pathname is too long");
default:
exit_with_error(2, "mkdir");
}
}
ret = chmod(directory_to_make, 0700);
if (ret == -1) {
switch (errno) {
case EACCES:
exit_with_error(3, "the parent directory does not allow changing permissions");
case ENAMETOOLONG:
exit_with_error(3, "pathname is too long");
default:
exit_with_error(3, "chmod");
}
}
ret = chown(directory_to_make, id_int, -1);
if (ret == -1) {
switch (errno) {
case EACCES:
exit_with_error(4, "the parent directory does not allow changing permissions");
case ENAMETOOLONG:
exit_with_error(4, "pathname is too long");
default:
exit_with_error(4, "chown");
}
}
}
else if (section_counter == 3)
{
int ret = chown(directory_to_make, -1, id_int);
if (ret == -1) {
switch (errno) {
case EACCES:
exit_with_error(4, "the parent directory does not allow changing permissions");
case ENAMETOOLONG:
exit_with_error(4, "pathname is too long");
default:
exit_with_error(4, "chown");
}
}
}
}
/* move to next section */
section_counter++;
/* reset values */
                id_i = 0;
for (int i = 0; i < 10; i++)
id_str[i] = 0x0;
break;
case '\n':
section_counter = 0;
break;
default:
if (section_counter == 2 || section_counter == 3)
{
if (id_i > 9)
exit_with_error(1, "reading /etc/passwd: ID went over max size");
id_str[id_i] = ch;
id_i++;
}
}
}
fclose(fp);
return 0;
}
|
Audit of Microbiological Profile of 95 Serial Cases Presenting with Otorrhea to ENT OPD at Tertiary Care Hospital ABSTRACT: Otorrhea is a common ENT presentation affecting all age groups and requires accurate assessment. The etiology of ear discharge is complex: the majority of cases are caused by aerobic bacteria, and the rest by anaerobes, fungi, and mixed infections. Therefore, it is important to determine the cause of the discharge and to identify the nature of the microorganism associated with it. The current study was conducted to determine the microbial flora in patients presenting with otorrhea and to guide empirical treatment based on the susceptibility of the causative organism to the safest antimicrobial. The current study is a prospective observational (descriptive) study involving a preliminary microbiological analysis of the ear discharge in a sample of 95 patients presenting with otorrhea. The most commonly affected age group in the current study was 20-35 years, with males more commonly affected. The most common bacteria were Pseudomonas aeruginosa (18.94%) and coagulase-negative staphylococci (14.73%). The mycological profile revealed a predominance of Aspergillus species (9.47%). The most common diagnosis was CSOM mucosal (84.21%), followed by CSOM squamous (11.58%). Other less common diagnoses included otomycosis (3.16%) and otitis externa (1.05%). The results of the current study are in concurrence with the other literature available on the microbial flora of otorrhea. However, a detailed and in-depth prospective study would further enrich knowledge and enhance understanding of the disease processes that cause acute or chronic otorrhea, so that appropriate preventive and curative measures can be undertaken.
import React from 'react';
import { List } from '@material-ui/core';
import { EventListItem } from '../EventListItem';
import { EventsListViewComponent } from './EventsList.types';
export const EventsListView: EventsListViewComponent = ({
events,
onEventClick
}) => {
return (
<List>
{events.map((event, i) => (
<EventListItem
key={event.remoteId}
id={event.remoteId}
type={event.type}
location={event.location.name}
eventTime={event.name.split(',')[0]}
showDivider={i !== events.length - 1}
onClick={() => onEventClick(event.remoteId)}
/>
))}
</List>
);
};
|
#include "d3d11DeviceContext.h"
D3D11CustomContext::D3D11CustomContext(ID3D11DeviceContext* devCon, ID3D11DeviceContext*** ret)
{
m_devContext = devCon;
*ret = &m_devContext;
}
D3D11CustomContext::D3D11CustomContext(ID3D11DeviceContext* devCon)
{
m_devContext = devCon;
}
void D3D11CustomContext::VSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->VSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::PSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->PSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::PSSetShader(ID3D11PixelShader* pPixelShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->PSSetShader(pPixelShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::PSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->PSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::VSSetShader(ID3D11VertexShader* pVertexShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->VSSetShader(pVertexShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::DrawIndexed(UINT IndexCount, UINT StartIndexLocation, INT BaseVertexLocation)
{
m_devContext->DrawIndexed(IndexCount, StartIndexLocation, BaseVertexLocation);
}
void D3D11CustomContext::Draw(UINT VertexCount, UINT StartVertexLocation)
{
m_devContext->Draw(VertexCount, StartVertexLocation);
}
HRESULT D3D11CustomContext::Map(ID3D11Resource* pResource, UINT Subresource, D3D11_MAP MapType, UINT MapFlags, D3D11_MAPPED_SUBRESOURCE* pMappedResource)
{
return m_devContext->Map(pResource, Subresource, MapType, MapFlags, pMappedResource);
}
void D3D11CustomContext::Unmap(ID3D11Resource* pResource, UINT Subresource)
{
m_devContext->Unmap(pResource, Subresource);
}
void D3D11CustomContext::PSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->PSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::IASetInputLayout(ID3D11InputLayout* pInputLayout)
{
m_devContext->IASetInputLayout(pInputLayout);
}
void D3D11CustomContext::IASetVertexBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppVertexBuffers, const UINT* pStrides, const UINT* pOffsets)
{
m_devContext->IASetVertexBuffers(StartSlot, NumBuffers, ppVertexBuffers, pStrides, pOffsets);
}
void D3D11CustomContext::IASetIndexBuffer(ID3D11Buffer* pIndexBuffer, DXGI_FORMAT Format, UINT Offset)
{
m_devContext->IASetIndexBuffer(pIndexBuffer, Format, Offset);
}
void D3D11CustomContext::DrawIndexedInstanced(UINT IndexCountPerInstance, UINT InstanceCount, UINT StartIndexLocation, INT BaseVertexLocation, UINT StartInstanceLocation)
{
m_devContext->DrawIndexedInstanced(IndexCountPerInstance, InstanceCount, StartIndexLocation, BaseVertexLocation, StartInstanceLocation);
}
void D3D11CustomContext::DrawInstanced(UINT VertexCountPerInstance, UINT InstanceCount, UINT StartVertexLocation, UINT StartInstanceLocation)
{
m_devContext->DrawInstanced(VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation);
}
void D3D11CustomContext::GSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->GSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::GSSetShader(ID3D11GeometryShader* pShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->GSSetShader(pShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY Topology)
{
m_devContext->IASetPrimitiveTopology(Topology);
}
void D3D11CustomContext::VSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->VSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::VSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->VSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::Begin(ID3D11Asynchronous* pAsync)
{
m_devContext->Begin(pAsync);
}
void D3D11CustomContext::End(ID3D11Asynchronous* pAsync)
{
m_devContext->End(pAsync);
}
HRESULT D3D11CustomContext::GetData(ID3D11Asynchronous* pAsync, void* pData, UINT DataSize, UINT GetDataFlags)
{
return m_devContext->GetData(pAsync, pData, DataSize, GetDataFlags);
}
void D3D11CustomContext::SetPredication(ID3D11Predicate* pPredicate, BOOL PredicateValue)
{
m_devContext->SetPredication(pPredicate, PredicateValue);
}
void D3D11CustomContext::GSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->GSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::GSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->GSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::OMSetRenderTargets(UINT NumViews, ID3D11RenderTargetView* const* ppRenderTargetViews, ID3D11DepthStencilView* pDepthStencilView)
{
m_devContext->OMSetRenderTargets(NumViews, ppRenderTargetViews, pDepthStencilView);
}
void D3D11CustomContext::OMSetRenderTargetsAndUnorderedAccessViews(UINT NumRTVs, ID3D11RenderTargetView* const* ppRenderTargetViews, ID3D11DepthStencilView* pDepthStencilView, UINT UAVStartSlot, UINT NumUAVs, ID3D11UnorderedAccessView* const* ppUnorderedAccessViews, const UINT* pUAVInitialCounts)
{
m_devContext->OMSetRenderTargetsAndUnorderedAccessViews(NumRTVs, ppRenderTargetViews, pDepthStencilView, UAVStartSlot, NumUAVs, ppUnorderedAccessViews, pUAVInitialCounts);
}
void D3D11CustomContext::OMSetBlendState(ID3D11BlendState* pBlendState, const FLOAT BlendFactor[4], UINT SampleMask)
{
m_devContext->OMSetBlendState(pBlendState, BlendFactor, SampleMask);
}
void D3D11CustomContext::OMSetDepthStencilState(ID3D11DepthStencilState* pDepthStencilState, UINT StencilRef)
{
m_devContext->OMSetDepthStencilState(pDepthStencilState, StencilRef);
}
void D3D11CustomContext::SOSetTargets(UINT NumBuffers, ID3D11Buffer* const* ppSOTargets, const UINT* pOffsets)
{
m_devContext->SOSetTargets(NumBuffers, ppSOTargets, pOffsets);
}
void D3D11CustomContext::DrawAuto()
{
m_devContext->DrawAuto();
}
void D3D11CustomContext::DrawIndexedInstancedIndirect(ID3D11Buffer* pBufferForArgs, UINT AlignedByteOffsetForArgs)
{
m_devContext->DrawIndexedInstancedIndirect(pBufferForArgs, AlignedByteOffsetForArgs);
}
void D3D11CustomContext::DrawInstancedIndirect(ID3D11Buffer* pBufferForArgs, UINT AlignedByteOffsetForArgs)
{
m_devContext->DrawInstancedIndirect(pBufferForArgs, AlignedByteOffsetForArgs);
}
void D3D11CustomContext::Dispatch(UINT ThreadGroupCountX, UINT ThreadGroupCountY, UINT ThreadGroupCountZ)
{
m_devContext->Dispatch(ThreadGroupCountX, ThreadGroupCountY, ThreadGroupCountZ);
}
void D3D11CustomContext::DispatchIndirect(ID3D11Buffer* pBufferForArgs, UINT AlignedByteOffsetForArgs)
{
m_devContext->DispatchIndirect(pBufferForArgs, AlignedByteOffsetForArgs);
}
void D3D11CustomContext::RSSetState(ID3D11RasterizerState* pRasterizerState)
{
m_devContext->RSSetState(pRasterizerState);
}
void D3D11CustomContext::RSSetViewports(UINT NumViewports, const D3D11_VIEWPORT* pViewports)
{
m_devContext->RSSetViewports(NumViewports, pViewports);
}
void D3D11CustomContext::RSSetScissorRects(UINT NumRects, const D3D11_RECT* pRects)
{
m_devContext->RSSetScissorRects(NumRects, pRects);
}
void D3D11CustomContext::CopySubresourceRegion(ID3D11Resource* pDstResource, UINT DstSubresource, UINT DstX, UINT DstY, UINT DstZ, ID3D11Resource* pSrcResource, UINT SrcSubresource, const D3D11_BOX* pSrcBox)
{
m_devContext->CopySubresourceRegion(pDstResource, DstSubresource, DstX, DstY, DstZ, pSrcResource, SrcSubresource, pSrcBox);
}
void D3D11CustomContext::CopyResource(ID3D11Resource* pDstResource, ID3D11Resource* pSrcResource)
{
m_devContext->CopyResource(pDstResource, pSrcResource);
}
void D3D11CustomContext::UpdateSubresource(ID3D11Resource* pDstResource, UINT DstSubresource, const D3D11_BOX* pDstBox, const void* pSrcData, UINT SrcRowPitch, UINT SrcDepthPitch)
{
m_devContext->UpdateSubresource(pDstResource, DstSubresource, pDstBox, pSrcData, SrcRowPitch, SrcDepthPitch);
}
void D3D11CustomContext::CopyStructureCount(ID3D11Buffer* pDstBuffer, UINT DstAlignedByteOffset, ID3D11UnorderedAccessView* pSrcView)
{
m_devContext->CopyStructureCount(pDstBuffer, DstAlignedByteOffset, pSrcView);
}
void D3D11CustomContext::ClearRenderTargetView(ID3D11RenderTargetView* pRenderTargetView, const FLOAT ColorRGBA[4])
{
m_devContext->ClearRenderTargetView(pRenderTargetView, ColorRGBA);
}
void D3D11CustomContext::ClearUnorderedAccessViewUint(ID3D11UnorderedAccessView* pUnorderedAccessView, const UINT Values[4])
{
m_devContext->ClearUnorderedAccessViewUint(pUnorderedAccessView, Values);
}
void D3D11CustomContext::ClearUnorderedAccessViewFloat(ID3D11UnorderedAccessView* pUnorderedAccessView, const FLOAT Values[4])
{
m_devContext->ClearUnorderedAccessViewFloat(pUnorderedAccessView, Values);
}
void D3D11CustomContext::ClearDepthStencilView(ID3D11DepthStencilView* pDepthStencilView, UINT ClearFlags, FLOAT Depth, UINT8 Stencil)
{
m_devContext->ClearDepthStencilView(pDepthStencilView, ClearFlags, Depth, Stencil);
}
void D3D11CustomContext::GenerateMips(ID3D11ShaderResourceView* pShaderResourceView)
{
m_devContext->GenerateMips(pShaderResourceView);
}
void D3D11CustomContext::SetResourceMinLOD(ID3D11Resource* pResource, FLOAT MinLOD)
{
m_devContext->SetResourceMinLOD(pResource, MinLOD);
}
FLOAT D3D11CustomContext::GetResourceMinLOD(ID3D11Resource* pResource)
{
return m_devContext->GetResourceMinLOD(pResource);
}
void D3D11CustomContext::ResolveSubresource(ID3D11Resource* pDstResource, UINT DstSubresource, ID3D11Resource* pSrcResource, UINT SrcSubresource, DXGI_FORMAT Format)
{
m_devContext->ResolveSubresource(pDstResource, DstSubresource, pSrcResource, SrcSubresource, Format);
}
void D3D11CustomContext::ExecuteCommandList(ID3D11CommandList* pCommandList, BOOL RestoreContextState)
{
m_devContext->ExecuteCommandList(pCommandList, RestoreContextState);
}
void D3D11CustomContext::HSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->HSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::HSSetShader(ID3D11HullShader* pHullShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->HSSetShader(pHullShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::HSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->HSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::HSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->HSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::DSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->DSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::DSSetShader(ID3D11DomainShader* pDomainShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->DSSetShader(pDomainShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::DSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->DSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::DSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->DSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::CSSetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView* const* ppShaderResourceViews)
{
m_devContext->CSSetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::CSSetUnorderedAccessViews(UINT StartSlot, UINT NumUAVs, ID3D11UnorderedAccessView* const* ppUnorderedAccessViews, const UINT* pUAVInitialCounts)
{
m_devContext->CSSetUnorderedAccessViews(StartSlot, NumUAVs, ppUnorderedAccessViews, pUAVInitialCounts);
}
void D3D11CustomContext::CSSetShader(ID3D11ComputeShader* pComputeShader, ID3D11ClassInstance* const* ppClassInstances, UINT NumClassInstances)
{
m_devContext->CSSetShader(pComputeShader, ppClassInstances, NumClassInstances);
}
void D3D11CustomContext::CSSetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState* const* ppSamplers)
{
m_devContext->CSSetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::CSSetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer* const* ppConstantBuffers)
{
m_devContext->CSSetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::VSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->VSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::PSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->PSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::PSGetShader(ID3D11PixelShader** ppPixelShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->PSGetShader(ppPixelShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::PSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->PSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::VSGetShader(ID3D11VertexShader** ppVertexShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->VSGetShader(ppVertexShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::PSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->PSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::IAGetInputLayout(ID3D11InputLayout** ppInputLayout)
{
m_devContext->IAGetInputLayout(ppInputLayout);
}
void D3D11CustomContext::IAGetVertexBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppVertexBuffers, UINT* pStrides, UINT* pOffsets)
{
m_devContext->IAGetVertexBuffers(StartSlot, NumBuffers, ppVertexBuffers, pStrides, pOffsets);
}
void D3D11CustomContext::IAGetIndexBuffer(ID3D11Buffer** pIndexBuffer, DXGI_FORMAT* Format, UINT* Offset)
{
m_devContext->IAGetIndexBuffer(pIndexBuffer, Format, Offset);
}
void D3D11CustomContext::GSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->GSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::GSGetShader(ID3D11GeometryShader** ppGeometryShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->GSGetShader(ppGeometryShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::IAGetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY* pTopology)
{
m_devContext->IAGetPrimitiveTopology(pTopology);
}
void D3D11CustomContext::VSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->VSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::VSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->VSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::GetPredication(ID3D11Predicate** ppPredicate, BOOL* pPredicateValue)
{
m_devContext->GetPredication(ppPredicate, pPredicateValue);
}
void D3D11CustomContext::GSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->GSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::GSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->GSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::OMGetRenderTargets(UINT NumViews, ID3D11RenderTargetView** ppRenderTargetViews, ID3D11DepthStencilView** ppDepthStencilView)
{
m_devContext->OMGetRenderTargets(NumViews, ppRenderTargetViews, ppDepthStencilView);
}
void D3D11CustomContext::OMGetRenderTargetsAndUnorderedAccessViews(UINT NumRTVs, ID3D11RenderTargetView** ppRenderTargetViews, ID3D11DepthStencilView** ppDepthStencilView, UINT UAVStartSlot, UINT NumUAVs, ID3D11UnorderedAccessView** ppUnorderedAccessViews)
{
m_devContext->OMGetRenderTargetsAndUnorderedAccessViews(NumRTVs, ppRenderTargetViews, ppDepthStencilView, UAVStartSlot, NumUAVs, ppUnorderedAccessViews);
}
void D3D11CustomContext::OMGetBlendState(ID3D11BlendState** ppBlendState, FLOAT BlendFactor[4], UINT* pSampleMask)
{
m_devContext->OMGetBlendState(ppBlendState, BlendFactor, pSampleMask);
}
void D3D11CustomContext::OMGetDepthStencilState(ID3D11DepthStencilState** ppDepthStencilState, UINT* pStencilRef)
{
m_devContext->OMGetDepthStencilState(ppDepthStencilState, pStencilRef);
}
void D3D11CustomContext::SOGetTargets(UINT NumBuffers, ID3D11Buffer** ppSOTargets)
{
m_devContext->SOGetTargets(NumBuffers, ppSOTargets);
}
void D3D11CustomContext::RSGetState(ID3D11RasterizerState** ppRasterizerState)
{
m_devContext->RSGetState(ppRasterizerState);
}
void D3D11CustomContext::RSGetViewports(UINT* pNumViewports, D3D11_VIEWPORT* pViewports)
{
m_devContext->RSGetViewports(pNumViewports, pViewports);
}
void D3D11CustomContext::RSGetScissorRects(UINT* pNumRects, D3D11_RECT* pRects)
{
m_devContext->RSGetScissorRects(pNumRects, pRects);
}
void D3D11CustomContext::HSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->HSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::HSGetShader(ID3D11HullShader** ppHullShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->HSGetShader(ppHullShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::HSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->HSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::HSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->HSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::DSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->DSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::DSGetShader(ID3D11DomainShader** ppDomainShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->DSGetShader(ppDomainShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::DSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->DSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::DSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->DSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::CSGetShaderResources(UINT StartSlot, UINT NumViews, ID3D11ShaderResourceView** ppShaderResourceViews)
{
m_devContext->CSGetShaderResources(StartSlot, NumViews, ppShaderResourceViews);
}
void D3D11CustomContext::CSGetUnorderedAccessViews(UINT StartSlot, UINT NumUAVs, ID3D11UnorderedAccessView** ppUnorderedAccessViews)
{
m_devContext->CSGetUnorderedAccessViews(StartSlot, NumUAVs, ppUnorderedAccessViews);
}
void D3D11CustomContext::CSGetShader(ID3D11ComputeShader** ppComputeShader, ID3D11ClassInstance** ppClassInstances, UINT* pNumClassInstances)
{
m_devContext->CSGetShader(ppComputeShader, ppClassInstances, pNumClassInstances);
}
void D3D11CustomContext::CSGetSamplers(UINT StartSlot, UINT NumSamplers, ID3D11SamplerState** ppSamplers)
{
m_devContext->CSGetSamplers(StartSlot, NumSamplers, ppSamplers);
}
void D3D11CustomContext::CSGetConstantBuffers(UINT StartSlot, UINT NumBuffers, ID3D11Buffer** ppConstantBuffers)
{
m_devContext->CSGetConstantBuffers(StartSlot, NumBuffers, ppConstantBuffers);
}
void D3D11CustomContext::ClearState()
{
m_devContext->ClearState();
}
void D3D11CustomContext::Flush()
{
m_devContext->Flush();
}
D3D11_DEVICE_CONTEXT_TYPE D3D11CustomContext::GetType()
{
return m_devContext->GetType();
}
UINT D3D11CustomContext::GetContextFlags()
{
return m_devContext->GetContextFlags();
}
HRESULT D3D11CustomContext::FinishCommandList(BOOL RestoreDeferredContextState, ID3D11CommandList** ppCommandList)
{
return m_devContext->FinishCommandList(RestoreDeferredContextState, ppCommandList);
}
void D3D11CustomContext::GetDevice(ID3D11Device** ppDevice)
{
m_devContext->GetDevice(ppDevice);
}
HRESULT D3D11CustomContext::GetPrivateData(const GUID& guid, UINT* pDataSize, void* pData)
{
return m_devContext->GetPrivateData(guid, pDataSize, pData);
}
HRESULT D3D11CustomContext::SetPrivateData(const GUID& guid, UINT DataSize, const void* pData)
{
return m_devContext->SetPrivateData(guid, DataSize, pData);
}
HRESULT D3D11CustomContext::SetPrivateDataInterface(const GUID& guid, const IUnknown* pData)
{
return m_devContext->SetPrivateDataInterface(guid, pData);
}
HRESULT D3D11CustomContext::QueryInterface(const IID& riid, void** ppvObject)
{
return m_devContext->QueryInterface(riid, ppvObject);
}
ULONG D3D11CustomContext::AddRef()
{
return m_devContext->AddRef();
}
ULONG D3D11CustomContext::Release()
{
return m_devContext->Release();
}
#include "qVideoSwitcher.h"
#include "MeeGoVideoSwitch.h"
qVideoSwitcher::qVideoSwitcher(QObject *parent)
:QObject(parent), m_videoSwitch(new MeeGoVideoSwitch())
{}
qVideoSwitcher::~qVideoSwitcher() { delete m_videoSwitch; }
void qVideoSwitcher::toClone() { m_videoSwitch->toClone(); }
void qVideoSwitcher::toExtend() { m_videoSwitch->toExtend(); }
void qVideoSwitcher::toSingle() { m_videoSwitch->toSingle(); }
void qVideoSwitcher::toVideoExtend() { m_videoSwitch->toVideoExtend(); }
bool qVideoSwitcher::isHDMIconnected() const
{
return m_videoSwitch->isHDMIconnected();
}
The Christian and the State: Romans 13:1-7 Each new year brings to most people a sense of new beginnings, a fresh start, hopeful resolutions, and optimism. It takes only one trip to the post office, however, for most of those positive feelings to be tempered as the Federal Income Tax booklet arrives, usually thicker than the year before. It is a reminder to citizens that whether or not they like it, they will have to render unto Caesar by April 15th. Christians are reminded of Paul's words in Romans 13:1-7 to "be subject to the governing authorities." But are there ever circumstances in which a Christian may resist rendering unto Caesar? This passage has quite a history and the issue still surfaces today. What must Christians do when they feel their government is acting in ways that oppress or side with evil? How do we balance Paul's words, written in his context, with the context of our world today? Paul, writing in the fifties, enjoyed the relative Pax Romana that made it possible to move about freely and to enjoy the benefits of a stable government. In fact, there is little evidence that Paul was at odds with the government, though he was indeed cognizant of problems that could arise from governmental persecution. Alan Culpepper explains that Paul was aware that in 49 C.E., Claudius had issued an edict, which Suetonius (Claudius XXV.4) explained: "Since the Jews constantly made disturbances at the instigation of Chrestus, he (Claudius) expelled them from Rome." Since Priscilla and Aquila were probably among those Christians banished, it is not difficult to assume that the edict may have come in response to a Jewish Messianic uprising in Rome. When Claudius died in 54 C.E., some of the Jewish Christians probably returned to Rome.
Paul, therefore, was encouraging the Christian community not to provoke another edict. "Paul knew that the least show of resistance in a Christian would be very prejudicial to the whole society; and therefore they had more need than others to be exact in their subjection."
import Model from "./Model";
import View from "./View";
class Controller {
model: Model
view: View
constructor(model : Model, view: View) {
this.model = model
this.view = view
this.model.bindTodoListChanged(this.onTodoListChanged)
this.view.bindToggleTodo(this.handleToggleTodo)
this.view.bindDeleteTodo(this.handleDeleteTodo)
this.view.bindRestartTodo(this.handleRestartTodo)
this.view.bindSelectActiveTask(this.handleActiveTask)
this.view.bindSelectActiveTab(this.handleActiveTab)
this.view.bindExecuteTask(this.handleExecuteTask)
this.onTodoListChanged(this.model.activeTab, this.model.todos)
// this.onActiveTabChanged(this.model.activeTab,this.model.todos)
}
onTodoListChanged = (activeTab, todos) => {
this.view.displayTodos(activeTab, todos)
}
handleToggleTodo = id => {
this.model.toggleTodo(id)
}
handleDeleteTodo = id => {
this.model.deleteTodo(id)
}
handleRestartTodo = () => {
this.model.restartTodos()
}
handleActiveTask = id => {
this.model.selectActiveTask(id)
}
handleActiveTab = id => {
this.model.selectActiveTab(id)
}
handleExecuteTask = value => {
this.model.executeTask(value)
}
}
export default Controller
So here I am… 4 days after my first year of writing fanfics passed, here I am with another important day. Today marks a year since I began dating my girlfriend and I couldn't have gotten far without her supporting me… she is the reason why I started writing in the first place since she is a RWBY fanfic writer like me and her first fanfic was one of the first stories I read before making my own story (hint, hint). I believe she knows who she is and I hope you enjoy my gift for our one year anniversary… well also I can't forget about you guys, my readers so this is a gift to the both of you. My girlfriend will know how this story was made and it's similar to something we did, anyways, enough boring you all, here is my gift… happy anniversary…
I was initially going to make this one-shot (could be a story… I don't know) with me and her as the main characters but I decided to stay with my tradition and keep it RWBY related…
Also, I did not draw that art even though it looks super awesome, but I forgot who it belongs to, so I'll just say: do not compliment me for the art since it is not mine... I do not steal art or "roll that way". Anyways, here is the story...
A cool wind blew ever so gently at the outskirts of the city known as Vale, making the leaves of Fall scatter off their respective trees, landing on the ground softly. In the distance beyond the trees stood a lake, one that a young red-haired girl went to whenever her parents fought with each other, which seemed to be an occasional trip in her mind. She sat down in front of the lake and looked at the reflection the water mirrored in front of her. I wish they could stop fighting. The red-haired girl thought and sighed. She hummed a gentle lullaby that her mother had taught her before she had her hands full with her father. When will the good times come back? She asked herself and gently tapped the mirror image of her, the image rippling ever so slowly. The girl stood up and picked up a few rocks that were scattered next to her, and then she aimed towards the blue distance and threw one, making it skid along the rippling water.
Splish. Splash.
Little did she know that someone was watching her from afar. That someone was a white-haired girl with a side ponytail and sea blue eyes, and she wondered how this mysterious girl had found the lake, guessing she had discovered it on her own. Should I walk up to her? She thought but seemed hesitant to meet the strange red-haired girl. Before she could decide to leave and come back later, her feet moved by themselves, carrying her towards the girl close to the lake. The red-haired girl stopped what she was doing and looked to her right to see the white-haired girl walk towards her. "Um… hello." the white-haired girl said softly. She was wearing a white, buttoned dress with black edging and a skirt with black frills. She also wore thigh-high white high-heel boots with frilly black thigh-high stockings that come up slightly above the top of her boots.
The red-haired girl smiled and greeted her back. She wore a long-sleeve, light gray shirt with black belted cuffs near the ends of the sleeves. Over the top of the gray shirt is a black piece of clothing with two vertical lines of red stitching that resembles a cross between a corset and overalls. She also wore a red skirt with a large black-colored print of a rose emblem on the side of it, and the inside of her skirt is black, along with black stockings and her black-and-red boots. A few pieces of metal arranged in an abstract shape are attached to her sleeve on her left shoulder, and she wore a red cape that doubled as a scarf. "I guess I'm not the only one that knows about this place then." she replied.
"I thought I was the only one." The white-haired girl said.
"What's your name?" The red-haired girl asked and held out her hand. "The name's Ruby Rose."
The other girl took her hand and shook it gently. "It's Weiss… Weiss Schnee."
Ruby's eyes widened. "You mean the Weiss Schnee, the one that has a father that owns the biggest dust company in the world?"
The girl known as Weiss sighed and gave her a nod. "Yes… I am that Weiss Schnee."
Ruby realized what she was doing and calmed down. "Sorry about that, I just never thought I would meet someone like you."
"It's okay Ruby…" she looked down at the rock that was on the red-scarfed girl's hand. "Were you skipping rocks?" Weiss asked. Ruby nodded and Weiss apologized. "I shouldn't have bothered you."
"It doesn't matter… I was beginning to want some company so do you want to skip rocks with me?"
"Sure." Weiss reached down to get a rock from the dirt floor and aimed at the blue lake. "Let's see who can make their rock go farther." she proposed and threw her rock. It skipped four times and landed in the water while Ruby's went only three. "Yes!"
"Aww, no fair you cheated." Ruby replied and laughed. It was a moment later when Weiss joined in. After they composed themselves the red-haired girl asked why she had come down to the lake.
Weiss looked away towards the blue water and shrugged. "I guess it's because… I just want to not be forced into training. My father wants me to take over his company when he's gone." Ruby noticed her voice was in a more dejected tone.
"What's wrong about that?"
"I want to do the things that I want to do… I gave up so many things because of him and it only makes things worse."
Ruby recalled a time she and her parents had gone to a concert in the city of Vale that had her vocal performance as the main event. "I remember you singing to everyone in Vale." she replied and smiled.
Weiss chuckled. "Well… you don't see that anymore because of him." They both sat down in front of the lake and looked out into the distance. "Why are you here?"
Ruby did the same expression as Weiss but a single tear dropped from her cheek. "My mom and dad keep fighting and I just wish for them to stop… sometimes they take their anger out on me even though I didn't do anything so I spend most of my time here." The sun was beginning to set and they both sighed. "We should get going or we'll have to deal with the Grimm." Ruby said and stood up, lending Weiss a hand too.
"Good idea… it was nice meeting you Ruby." Weiss replied and began walking back the way she came from.
"Wait!" Ruby shouted which made her stop. "Will you come back here tomorrow?" she asked, hoping that she will be given a positive answer.
Weiss turned around and gave her a smile. "Of course… same time right?" Ruby nodded. "Then I'll see you tomorrow Ruby."
"Okay… tomorrow it is."
They both took off towards their homes as quick as they could since nighttime proved to be dangerous due to the creatures of darkness known as the Grimm. They would meet again the next day and talk about their daily lives so they could get to know each other better, and then they would agree to meet again the very next day and it became a routine for them soon after. After a week of talking to each other, they began making trips to the city and became great friends. Weiss and Ruby had given each other their numbers so they could call them at any time on their scroll. Back at home, Ruby and Weiss did not dare to tell their parents why they went out so much and were home late sometimes. Weiss's father one day called her to his office.
"Please sit down Weiss dear." her father said in which she complied. "You are missing your training days which is unacceptable and you've also been out for very long periods of time every day. Is there a reason for the sudden change of your... motives?"
Weiss huffed and crossed her arms. "Why would you care, father? The only thing you care about is dust and the family company."
He gritted his teeth. "I will not let our family legacy go down because of your incompetence!"
"It'll stay alive but I won't be the one continuing it!" Weiss exclaimed, getting up as she did and stormed out.
This filled her father with rage. "Where are you going?! I'm not finished with you yet!" He rose from his chair quickly and followed her. "You do not walk out on your own father!" He caught up with her in a corridor and grabbed her wrist.
He turned Weiss towards him and she shouted something that she would regret. "You are not my father, you don't even think of me as your own daughter!"
In an instant, he had slapped her, making her gasp and holding her right cheek. "How dare you say such horrifying words Weiss?!"
Tears flooded down her cheeks but she didn't let up. "You only took my happiness away!" Another slap and a whimper were heard. "Do you even want to know why I've been out?!"
"Of course I do!" He replied in angst.
"It's because I made a friend... Ruby Rose!"
His eyes widened. "Why would you spend time with a fiend like her!?"
Weiss snapped. "She is not a fiend father!" He raised his hand again to slap her and she cowered in fear.
"You will not spend any more time with that scoundrel you hear me?!" Her father shouted.
"She is not worthless! I love her!" Weiss was even surprised by what she had blurted out. Did she love her? She thought in that moment; Weiss did indeed spend a lot more time with Ruby than at home, and they had become friends in an instant. Is that the reason why I felt happy when I was close to Ruby? She asked herself mentally. The white-haired girl felt so happy and safe next to Ruby during those times... it was like she didn't need to care about the danger and stress she was put under; she could do anything she wanted. Weiss knew then that she had to run... before she lost any chance to see her again. She quickly got out of her father's grip and ran upstairs to her room at the very end of the right corridor.
"You will stay in your room until I say so and I will make sure you never leave this place again!" he exclaimed as his daughter ran.
Later that night, Weiss looked at the clock on her dresser next to her bed. Twelve o'clock in the morning. It's time. She thought and got up from her bed slowly. Weiss only thought of one thing during her time in her room. I'm going to run away and take Ruby with me… then live somewhere with her where we don't have to face the troubles that are given to us. She grabbed a bag full of clothing and walked out of her room. The best thing about living so far from her father's office was that his rooms lay in the opposite corridor, and the length of the hallway let her sneak through the halls without trouble. She began walking towards the stairs, her footsteps muffled due to the soft skin of her feet. I'll put my boots on when I'm away from this place.
Weiss had planned this in just a few hours… she was finally going to be free from the hell her father had made and she was going to be with the girl she loved. But… the white-haired girl stopped in the middle of the stairs, what if she doesn't like me that way? She shook the thought away and finally made it down to the first floor. The first thing I need is money and I know we have way more than just plenty to sustain our lives. She thought and traveled to the office she would enter for the last time tonight. Behind the mahogany desk her father sat in front of, there was a hidden safe that held their entire lien, and only the family had the clearance to open it. Quickly, she placed her hand on the handprint scanner and once it confirmed that it was her, Weiss didn't hesitate to open it and take as much lien as she could, enough to last her and Ruby for years.
That was when she heard footsteps that were getting louder as they headed for her. Her eyes widened and she cursed herself. The scanner… it sent the info to him right after it confirmed it was me. She thought and hurried herself in getting the lien she needed. Weiss finished a few moments later and ran towards the back door of her home. The sky was a midnight blue and the only lights that illuminated it were the moon and the millions of stars out there in the great abyss. Weiss didn't care that she was out late at night; she had her multi-action dust rapier, Myrtenaster, at the ready in case she ran into the Grimm, but she knew that would rarely happen. Behind her, she heard her father shouting in angst at her but she ignored it… not wanting to know what he was saying.
She took a shortcut to Ruby's home by going to the lake they first met at, but instead of stopping she went the way Ruby would take. The red-haired girl had told her one day the directions to her home from the lake in case she needed to talk to her in person, and now was a good time to talk to her. Weiss kept hoping that she would agree to her plan until she saw what could be her home. It was not as big as her mansion but it was a decent-sized two-story home with the colors red and black as the main paint scheme. On the second floor, she saw a light illuminate a window of what could be Ruby's room, and three silhouettes inside. She recognized Ruby immediately but the other two she had no clue about. Are they her parents? She thought and heard shouting.
"You do not get to control my love life! If I love her then that's final, I will not date anyone else but her!" Ruby exclaimed.
The person that replied first was an older female and she was not happy. "You are a Rose Ruby and we do not go out with people who despise the faunus!"
The next voice probably belonged to a man. "She is also a target of the White Fang if you didn't know that!"
"I don't care! She actually likes faunus, it's her father who doesn't, and I will protect her from the White Fang, even if it puts my life at risk!" Ruby shouted, which made Weiss crack a smile. It was true that she indeed had a target painted on her back by the faunus group known as the White Fang, but she never thought that Ruby would say something like that, let alone confess her love right there in front of them. They continued arguing until the other two silhouettes left the room with a frustrated sigh. Weiss managed to climb up to the second floor window and she knocked on it, scaring the red-haired girl. "Weiss?" she said in surprise as she opened the window.
Weiss climbed in and noticed tears on her cheeks which signaled her to embrace the younger girl. "It's me… Ruby, I want to tell you something."
"What is it?" Ruby asked, trying not to sob.
"Do you see the things I'm holding right now?" Ruby pulled away and examined her; she noticed a bag on her back and a lot of lien in a pouch.
"Weiss… what are you-"
She interrupted her before she could finish. "I came to tell you that I'm running away from my family."
"But why are you here?"
Weiss sighed. "That's what I was going to say… Ruby… I want you to come with me."
Silver eyes widened at hearing that. "I-I don't know if I can." she replied. "Before I do make a choice… I need to tell you something too."
"What is it?" Weiss asked.
Ruby hesitated at first but a moment later she managed to say it. "Weiss… I-I love you."
Weiss hugged her instantly and buried her face in her chest. "I love you too Ruby, I barely realized it hours ago."
"M-me too." she replied and hugged back. "I'm going with you…"
Weiss began sobbing quietly. "Thank you Ruby."
Ruby pulled away and smiled. "I better get packing." Weiss sat down on the bed as Ruby began packing clothes and other personal belongings. Ruby also holstered what could be her weapon, a red scythe with the barrel of a gun. The white-haired girl grew tired and almost dozed off before Ruby broke the silence. "I'm done."
Weiss brushed her sleepiness away, stood up and nodded. "We better get going." She replied and opened the window.
"Are you sure about this Weiss?" Ruby asked suddenly which made her stop.
Weiss thought for a moment. Was she sure about this? I don't even know. She thought and sighed. "We'll just have to find out... together." They both climbed out of the house and took off, not hesitating at all about their decision... they wanted to stay together and that's how it would be. Their hands were intertwined as they ran into the city of Vale. Surely they would face hardships later in their new life together, but they knew that they could live happily ever after.
If this were a multi-chapter story, then I would have done different perspectives but it isn't at the moment and I guess it's kind of better off like this. Anyways, there is my gift to you my fantastic girlfriend and a late gift to the readers for giving me such a great first year of writing. Now I wonder if some of y'all know who my girlfriend is since I keep leaving obvious hints but leave a review on this story and if you guys actually want this to be more than a one-shot... then by all means I will make it happen.
If this does become a multi-chapter story, then I will call this AU, The Runaway Roses AU... anyways stay classy everyone.
package com.baidu.mapapi.http;
import android.content.Context;
import android.os.Build;
public class AsyncHttpClient
{
Context a;
private int b = 4000;
private int c = 4000;
static
{
if (Build.VERSION.SDK_INT <= 8) {
System.setProperty("http.keepAlive", "false");
}
}
public AsyncHttpClient(Context paramContext)
{
if (paramContext == null) {
throw new IllegalArgumentException("Context cannot be null");
}
this.a = paramContext;
}
public void get(String paramString, HttpClient.ProtoResultCallback paramProtoResultCallback)
{
if (paramString == null) {
throw new IllegalArgumentException("URI cannot be null");
}
new Thread(new a(this, paramProtoResultCallback, paramString)).start();
}
}
/* Location: /Users/gaoht/Downloads/zirom/classes-dex2jar.jar!/com/baidu/mapapi/http/AsyncHttpClient.class
* Java compiler version: 6 (50.0)
* JD-Core Version: 0.7.1
 */
package com.braintreepayments.api;
import android.content.pm.PackageManager;
import com.braintreepayments.api.interfaces.HttpResponseCallback;
import com.braintreepayments.api.interfaces.PreferredPaymentMethodsListener;
import com.braintreepayments.api.internal.BraintreeGraphQLHttpClient;
import com.braintreepayments.api.models.PreferredPaymentMethodsResult;
/**
* Fetches information about which payment methods are preferred on the device.
* Used to determine which payment methods are given preference in your UI,
* not whether they are presented entirely.
* This class is currently in beta and may change in future releases.
*/
public class PreferredPaymentMethods {
private static final String PAYPAL_APP_PACKAGE = "com.paypal.android.p2pmobile";
private static final int NO_FLAGS = 0;
/**
* Fetches information about which payment methods should be given preference in your UI.
* @param fragment The BraintreeFragment
* @param listener A listener that is invoked when preferred payment methods have been fetched.
*/
public static void fetchPreferredPaymentMethods(final BraintreeFragment fragment, final PreferredPaymentMethodsListener listener) {
boolean isPayPalAppInstalled = false;
try {
PackageManager packageManager = fragment.getApplicationContext().getPackageManager();
isPayPalAppInstalled = packageManager.getApplicationInfo(PAYPAL_APP_PACKAGE, NO_FLAGS) != null;
} catch (PackageManager.NameNotFoundException ignored) {
// do nothing
}
if (isPayPalAppInstalled) {
fragment.sendAnalyticsEvent("preferred-payment-methods.paypal.app-installed.true");
listener.onPreferredPaymentMethodsFetched(new PreferredPaymentMethodsResult().isPayPalPreferred(true));
return;
}
BraintreeGraphQLHttpClient graphQLClient = fragment.getGraphQLHttpClient();
if (graphQLClient == null) {
fragment.sendAnalyticsEvent("preferred-payment-methods.api-disabled");
listener.onPreferredPaymentMethodsFetched(new PreferredPaymentMethodsResult().isPayPalPreferred(false));
return;
}
String query = "{ \"query\": \"query ClientConfiguration { clientConfiguration { paypal { preferredPaymentMethod } } }\" }";
graphQLClient.post(query, new HttpResponseCallback() {
@Override
public void success(String responseBody) {
PreferredPaymentMethodsResult preferredPaymentMethodsResult = PreferredPaymentMethodsResult.fromJSON(responseBody);
fragment.sendAnalyticsEvent(String.format("preferred-payment-methods.paypal.api-detected.%b", preferredPaymentMethodsResult.isPayPalPreferred()));
listener.onPreferredPaymentMethodsFetched(preferredPaymentMethodsResult);
}
@Override
public void failure(Exception exception) {
fragment.sendAnalyticsEvent("preferred-payment-methods.api-error");
listener.onPreferredPaymentMethodsFetched(new PreferredPaymentMethodsResult().isPayPalPreferred(false));
}
});
}
}
|
New Jersey Gov. Chris Christie (R) took a shot Saturday at his potential 2016 presidential rival, Sen. Rand Paul (R-Ky.), for blocking renewal of the Patriot Act.
In a statement from his political action committee, Christie slammed “misguided ideologues” with “no real world experience in fighting terrorism” for “putting their uninformed beliefs above the safety and security of our citizens.”
Christie’s statement is an attempt to burnish his reputation as a national security hawk as he prepares a run for president in 2016. He is drawing a sharp contrast with Paul, who used the Patriot Act debate to cement opposition to government surveillance as a cornerstone of his libertarian presidential campaign.

Paul blocked repeated attempts by his Kentucky counterpart, Senate Majority Leader Mitch McConnell (R), early Saturday to enact a short-term extension of the Patriot Act with the National Security Agency’s controversial bulk data collection program intact.

On Wednesday, Paul delivered a lengthy speech he billed as a filibuster of a Patriot Act extension. During the speech, Paul’s campaign blasted out an email to supporters to build support for his effort and collect names and email addresses of potential political backers.

Christie, who is seeking to run as a Washington outsider, cast the Patriot Act vote as another example of congressional dysfunction.

"The Senate's failure to extend the Patriot Act is a failure of the U.S. government to perform its most important function — protecting its citizens from harm," he said. "This dysfunction is what we have come to expect from Washington, D.C., but usually it does not have such dangerous and severe consequences."
#include <iostream>
using namespace std;
int main() {
int x, y, z, w;
cin >> x >> y >> z >> w;
    // Brute force: for every pair (i, j), count it when the remaining
    // amount w - x*i - y*j is non-negative and divisible by z.
    int cnt = 0;
for(int i=0;i<=1000;i++)
{
for(int j=0;j<=1000;j++)
{
if(x*i+y*j<=w && (w-x*i-y*j)%z==0)
{
cnt++;
}
}
}
cout << cnt;
return 0;
} |
As part of the development of a quality assurance program (QAP), a high performance thin layer chromatography (HPTLC) analysis unit was installed in the pharmacy department at Gustave-Roussy. The HPTLC-CAMAG consists of: 1) an HPTLC-Vario development chamber for optimization of the mobile phases; 2) TLC Sampler III automated sample applicators; 3) solid teflon migration chambers, i.e., horizontal tanks that enable separation to be carried out either in sandwich or in saturation mode; 4) a TLC Scanner 3 densitometer controlled by CATS 4 software; and 5) a Pentium MMX 233 MHz personal computer with an external backup unit. HPTLC quantitative and qualitative analysis has now reached a remarkably high level of development and performance. The samples (aqueous or non-aqueous solutions) to be processed are automatically applied by spraying (50-300 nl) in calibrated bands of a few mm (with up to 64 3-mm bands per 10 x 20 cm plate) on high-performance stationary phases of wide technological diversity. The chromatogram is obtained in 10 min over a migration pathway of 5-6 cm. The plates are read by absorption-reflection or fluorescence-reflection at an ad hoc wavelength (190-800 nm), then the scanned peak areas are calculated by the trapezoid method. The calibration curves are generated by Michaelis-Menten non-linear regression and validated by internal quality control. The analytical yield is high, i.e., up to 50 assays and 250 determinations per day.
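The quantitation pipeline described above (peak areas computed by the trapezoid rule, calibration curves fitted by Michaelis-Menten non-linear regression) can be sketched in a few lines. The function names, the sample values, and the grid-search fit below are illustrative assumptions; in particular, the grid search is only a stand-in for the unit's true non-linear regression.

```python
def trapezoid_area(xs, ys):
    # Integrate a scanned peak signal over the positions xs by the trapezoid rule.
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

def michaelis_menten(conc, vmax, km):
    # Saturating calibration model: signal = vmax * c / (km + c).
    return vmax * conc / (km + conc)

def fit_michaelis_menten(concs, areas, vmax_grid, km_grid):
    # Coarse grid search minimizing squared error (illustrative stand-in
    # for a proper non-linear least-squares fit).
    best = None
    for vmax in vmax_grid:
        for km in km_grid:
            sse = sum((michaelis_menten(c, vmax, km) - a) ** 2
                      for c, a in zip(concs, areas))
            if best is None or sse < best[0]:
                best = (sse, vmax, km)
    return best[1], best[2]
```

A fitted curve of this shape maps a measured peak area back to a concentration for each routine assay.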
HPTLC analysis covers a wide functional range, and can be used in the following ways: 1) as a teaching tool for separative analysis and GLP; 2) it is an invaluable method for the optimization of mobile phases and for the determination of absorption spectra and absorption maxima, with a view to developing HPLC methods in complex matrices; 3) it provides major support for post-production quality control of prescribed hospital preparations of all types, e.g., those connected with parenteral nutrition, chemotherapy, synthetic narcotic analgesia; and it can also be used for dry dosage analysis; 4) it is useful in pharmaceutical assessment, e.g., in studies on the physico-chemical characteristics of various substances, such as their identity, purity, concentration, stability and compatibility, particularly with regard to generic products; 5) it can contribute to monitoring the safety of medical apparatus and equipment via the analysis of container-content interactions; 6) it provides a qualification system for personnel and procedures for within- and between-center validation of GMP. Setting up such an HPTLC quality control unit requires a basic investment of about 0.9 MF or 70,000 US dollars for a cost of no more than 10 F or 1.5 US dollars (including tax) per routine assay. After 18 months in operation and 16,500 assays, the HPTLC analysis unit has become one of the mainstays of the Gustave-Roussy QAP. |
The Denver Broncos began the season without strongside linebacker Von Miller, and they’ll end it without him, too.
Miller is done for the year after tests Monday revealed a torn anterior cruciate ligament in his right knee, which he injured in the first quarter of Denver’s 37-13 win at Houston over the weekend.
The Broncos (12-3) have hit so many potholes this season it’s a wonder they’re not broken down on the side of the Super Bowl Expressway.
Instead, they can wrap up the AFC’s top seed with a win at Oakland (4-11) on Sunday.
Miller’s injury ended a rough third season for the Broncos star, which began with a six-game drug suspension and included just five sacks and 33 tackles in 10 games.
With 30 sacks in his first two NFL seasons, Miller won the NFL’s Defensive Rookie of the Year award in 2011 and was runner-up for the league’s Defensive Player of the Year honor last season. He set a team record in 2012 with 18 1/2 sacks to go with 28 tackles for loss and six forced fumbles.
In the offseason, Elway called Miller the best football player on the planet. But Miller ran afoul of the NFL’s drug program, was suspended for the first month and a half of the season and never really returned to form in 2013.
Miller worked out at the team’s Dove Valley complex during his banishment, and he bulked up to 270 pounds – 24 pounds more than when he was selected second overall in the 2011 draft out of Texas A&M.
Miller was rusty upon his return. Although his bulkier body enhanced his bull rush, it seemed to sap some of his athleticism that made him such a special pass rusher. He had moments where he’d flash his old form, and last week he spoke about peaking for the playoffs.
Although the Broncos went 6-0 during his suspension, that was before they also lost safety Rahim Moore (leg) and linemen Kevin Vickerson (hip) and Derek Wolfe (illness).
The Broncos find themselves once again scrambling to make up for the loss of a dynamic playmaker teams had to account for even though he wasn’t having his typically disruptive season.
The Broncos will look to replace Miller on the roster with another pass-rusher.
Miller was rushing Matt Schaub when he was blocked cleanly by tight end Ryan Griffin. His right knee buckled and he crumpled to the ground. Miller walked off the field and into the locker room with a team doctor and trainer.
Because he didn’t need to be carted off, there was hope his injury wasn’t season-ending and that he might return in the playoffs.
The best case is that he’s back for training camp after surgery sometime next month. ACL recoveries generally take six to nine months.
The Broncos will once again rely on Nate Irving at strongside linebacker in the base defense and on Shaun Phillips, Robert Ayers and newcomer Jeremy Mincey on passing downs – along with dialing up creative blitzes.
"He's a special player, no question, one of the best guys I've seen on the edge in my career," Broncos tight end Julius Thomas said. "What he's able to do on the field, what he can bring, it will be missed, but most importantly we wish him the best, we wish him a fast recovery."
Hierarchical control flow matching for source-level simulation of embedded software

Source-level simulation (SLS) of embedded software annotates the source code based on matching the control flow graphs (CFGs) of the source code and the cross-compiled binary code. However, existing SLS approaches still cannot guarantee a matching for a CFG that has been optimized by the compiler. Further, they rely on debug information, which may be unreliable. In this paper, the authors propose a hierarchical CFG matching approach to reduce the influence of compiler optimization and ambiguous debug information. This approach divides the CFGs of the source and binary code into nested regions. The two CFGs are then matched region by region in a top-down manner. In this way, heavy optimization or debug misinformation in certain basic blocks does not have a global impact on the matching of other basic blocks. Moreover, optimized loops and branches are matched with respect to the optimization techniques used by the compiler.
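The top-down region matching idea can be illustrated with a small sketch. The region representation (a dict with a `kind` and nested `children`) and the pairing rule below are hypothetical simplifications for illustration, not the paper's actual data structures or algorithm:

```python
def match_regions(src, bin_):
    # Pair nested CFG regions top-down; a mismatch stays local instead of
    # derailing the matching of sibling or parent regions.
    pairs, unmatched = [], []

    def walk(s, b):
        if s["kind"] != b["kind"]:
            unmatched.append((s, b))  # local failure only
            return
        pairs.append((s, b))
        for sc, bc in zip(s["children"], b["children"]):
            walk(sc, bc)
        # Children present on only one side (e.g. blocks removed or
        # introduced by compiler optimization) stay unmatched.
        n = min(len(s["children"]), len(b["children"]))
        for extra in s["children"][n:] + b["children"][n:]:
            unmatched.append((extra, None))

    walk(src, bin_)
    return pairs, unmatched
```

Because recursion stops only within the mismatching region, an optimized-away loop body leaves its siblings matched, which is the property the hierarchical approach relies on.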
Manifold Based Dynamic Texture Synthesis from Extremely Few Samples

In this paper, we present a novel method to synthesize dynamic texture sequences from extremely few samples, e.g., merely two possibly disparate frames, leveraging both Markov Random Fields (MRFs) and manifold learning. Decomposing a textural image into a set of patches, we achieve dynamic texture synthesis by estimating sequences of temporal patches. We select candidates for each temporal patch from spatial patches based on MRFs and regard them as samples from a low-dimensional manifold. After mapping candidates to a low-dimensional latent space, we estimate the sequence of temporal patches by finding an optimal trajectory in the latent space. Guided by some key properties of trajectories of realistic temporal patches, we derive a curvature-based trajectory selection algorithm. In contrast to methods based on MRFs or dynamic systems that rely on a large number of samples, our method is able to deal with the case of extremely few samples and requires no training phase. We compare our method with the state of the art and show that it not only exhibits superior performance in synthesizing textures but also produces results with pleasing visual effects.
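The curvature-based trajectory selection can be sketched in a 2-D latent space as follows. The greedy per-step choice is only a stand-in for the paper's optimal trajectory search, and all function names and sample points are illustrative assumptions:

```python
import math

def turn_angle(p0, p1, p2):
    # Discrete curvature proxy: the turning angle at p1 (0 = straight line).
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0.0 or n2 == 0.0:
        return 0.0
    cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.acos(cos_a)

def select_trajectory(start, candidates_per_step):
    # Greedily pick the lowest-curvature candidate at each time step.
    traj, prev = [start], None
    for cands in candidates_per_step:
        if prev is None:
            # No curvature defined yet: take the nearest candidate.
            nxt = min(cands, key=lambda c: math.hypot(c[0] - start[0],
                                                      c[1] - start[1]))
        else:
            nxt = min(cands, key=lambda c: turn_angle(prev, traj[-1], c))
        prev = traj[-1]
        traj.append(nxt)
    return traj
```

Preferring low-turning-angle continuations encodes the observation that latent trajectories of realistic temporal patches are smooth.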
<filename>model_partial_ner/ner.py
""" input example
<s> O None S
Effects I None S
of I None S
uninephrectomy I None S
and I None S
high I None S
protein I None S
feeding I None S
on I None S
lithium I Chemical S
-induced I None S
chronic I Disease S
renal O Disease S
failure O Disease S
in I None S
rats I None S
. I None S
<eof> I None S
<s> O None S
Fusidic O None D
acid O None D
was O None D
administered I None S
orally I None S
in I None S
a I None S
dose I None S
of I None S
500 I None S
mg O None D
t.d.s O None D
. I None S
<eof> I None S
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import model_partial_ner.utils as utils
from model_partial_ner.highway import Highway
from utilities.common_utils import get_logger
import logging
logger = get_logger(name=__name__, log_file=None, log_level=logging.DEBUG, log_level_name='')
class NER(nn.Module):
"""
    Sequence labeling model augmented with a language model.
Parameters
----------
    rnn : ``torch.nn.Module``, required.
        The RNN unit.
w_num : ``int`` , required.
The number of words.
w_dim : ``int`` , required.
The dimension of word embedding.
c_num : ``int`` , required.
The number of characters.
c_dim : ``int`` , required.
The dimension of character embedding.
y_dim : ``int`` , required.
The dimension of tags types.
y_num : ``int`` , required.
The number of tags types.
droprate : ``float`` , required
The dropout ratio.
"""
def __init__(self, rnn,
w_num: int,
w_dim: int,
c_num: int,
c_dim: int,
y_dim: int,
y_num: int,
droprate: float):
super(NER, self).__init__()
self.rnn = rnn
self.rnn_outdim = self.rnn.output_dim
self.one_direction_dim = self.rnn_outdim // 2
self.word_embed = nn.Embedding(w_num, w_dim)
self.char_embed = nn.Embedding(c_num, c_dim)
self.drop = nn.Dropout(p=droprate)
self.add_proj = y_dim > 0
self.to_chunk = Highway(self.rnn_outdim)
self.to_type = Highway(self.rnn_outdim)
if self.add_proj:
self.to_chunk_proj = nn.Linear(self.rnn_outdim, y_dim)
self.to_type_proj = nn.Linear(self.rnn_outdim, y_dim)
self.chunk_weight = nn.Linear(y_dim, 1)
self.type_weight = nn.Linear(y_dim, y_num)
self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.to_chunk_proj, self.drop, self.chunk_weight)
self.type_layer = nn.Sequential(self.to_type, self.drop, self.to_type_proj, self.drop, self.type_weight)
else:
self.chunk_weight = nn.Linear(self.rnn_outdim, 1)
self.type_weight = nn.Linear(self.rnn_outdim, y_num)
self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.chunk_weight)
self.type_layer = nn.Sequential(self.to_type, self.drop, self.type_weight)
def to_params(self):
"""
To parameters.
"""
return {
"model_type": "char-lstm-two-level",
# "rnn_params": self.rnn.to_params(),
"word_embed_num": self.word_embed.num_embeddings,
"word_embed_dim": self.word_embed.embedding_dim,
"char_embed_num": self.char_embed.num_embeddings,
"char_embed_dim": self.char_embed.embedding_dim,
"type_dim": self.type_weight.in_features if self.add_proj else -1,
"type_num": self.type_weight.out_features,
"droprate": self.drop.p,
"label_schema": "tie-or-break"
}
def load_pretrained_word_embedding(self, pre_word_embeddings):
"""
Load pre-trained word embedding.
Parameters
----------
pre_word_embeddings : ``torch.FloatTensor``, required.
pre-trained word embedding
"""
self.word_embed.weight = nn.Parameter(pre_word_embeddings)
def rand_ini(self):
"""
Random initialization.
"""
# self.rnn.rand_ini() # simplify the rnn code
self.to_chunk.rand_ini()
self.to_type.rand_ini()
utils.init_embedding(self.char_embed.weight)
utils.init_linear(self.chunk_weight)
utils.init_linear(self.type_weight)
if self.add_proj:
utils.init_linear(self.to_chunk_proj)
utils.init_linear(self.to_type_proj)
def forward(self, w_in, c_in, char_mask):
"""
Sequence labeling model.
Parameters
----------
        w_in : ``torch.LongTensor``, required.
            The word-level input indices.
        c_in : ``torch.LongTensor``, required.
            The character-level input indices.
        char_mask : ``torch.ByteTensor``, required.
            The mask for the character-level input.
"""
w_emb = self.word_embed(w_in)
c_emb = self.char_embed(c_in)
emb = self.drop(torch.cat([w_emb, c_emb], 2))
# batch size auto changes, the seq length is char length!
# out torch.Size([115, 27, 300]), mask torch.Size([115, 27])
# out torch.Size([88, 35, 300]), mask torch.Size([88, 35])
out = self.rnn(emb)
# out shape is 2d, the first size is always changed because the mask is always changed.
# out first size: batch_size *seq minus "0 mask char token" number
# out.shape torch.Size([756, 300])
# out.shape torch.Size([419, 300])
char_mask = char_mask.unsqueeze(2).expand_as(out)
out = out.masked_select(char_mask).view(-1, self.rnn_outdim)
return out
def chunking(self, z_in):
"""
no mask, return 1d tensor of chunk break/tie label ids
Parameters
----------
z_in : ``torch.LongTensor``, required.
The output of the character-level lstms.
"""
z_in = self.drop(z_in)
out = self.chunk_layer(z_in).squeeze(1)
return out
def typing(self, z_in, word_mask):
"""
Typing
        TODO: using the mask to filter positions ignores useful information from
        TODO: the following words; modify to also incorporate following-word info.
Parameters
----------
z_in : ``torch.LongTensor``, required.
The output of the character-level lstms.
word_mask : ``torch.bool`` , required.
The mask for word-level input.
"""
word_mask = word_mask.unsqueeze(1).expand_as(z_in)
z_in = z_in.masked_select(word_mask).view(-1, 2, self.one_direction_dim)
# the seq length becomes len-1, the seq length is the same as the word number
z_in = torch.cat([z_in[:-1, 1, :].squeeze(1), z_in[1:, 0, :].squeeze(1)], dim = 1)
z_in = self.drop(z_in)
out = self.type_layer(z_in)
return out
def to_span(self, chunk_label, type_ids, none_idx):
"""
Convert word-level labels to entity spans.
Parameters
----------
chunk_label : ``torch.LongTensor``, required.
The chunk label for one sequence.
type_ids : ``torch.LongTensor`` , required.
The type label for one sequence.
none_idx: ``int``, required.
            Label index for the not-target-type entity.
"""
span_list = list()
pre_idx = -1
cur_idx = 0
type_idx = 0
while cur_idx < len(chunk_label):
            if chunk_label[cur_idx].item() == 1:
if pre_idx >= 0:
                    cur_type = type_ids[type_idx].item()
if cur_type != none_idx:
span_list.append('('+str(pre_idx)+','+str(cur_idx)+')')
type_idx += 1
pre_idx = cur_idx
cur_idx += 1
assert type_idx == len(type_ids)
return set(span_list)
def to_typed_span(self, chunk_label, type_ids, none_idx, id2label):
""" not batch level, but sentence level
TODO the author actually put word_mask as chunk_label, and it is right here!
TODO merge chunk_label and word_mask
Convert word-level labels to typed entity spans.
Parameters
----------
        chunk_label : ``torch.LongTensor``, required.
            The chunk (tie/break) labels for one sequence.
        type_ids : ``torch.LongTensor``, required.
            The type label ids for one sequence.
        none_idx: ``int``, required.
            Label index for the not-target-type entity.
        id2label: ``dict``, required.
            Mapping from type label ids to label names.
"""
span_list = list()
pre_idx = -1
cur_idx = 0
type_idx = 0
while cur_idx < len(chunk_label):
if chunk_label[cur_idx].item() == 1:
if pre_idx >=0:
cur_type_idx = type_ids[type_idx].item()
if cur_type_idx != none_idx:
span_list.append(id2label[cur_type_idx]+'@('+str(pre_idx)+','+str(cur_idx)+')')
type_idx += 1
pre_idx = cur_idx
cur_idx += 1
assert type_idx == len(type_ids)
return set(span_list) |
#include<iostream>
#include<cstring>
#define INF 1000000007
using namespace std;
int e[1005][1005],dis[1005],n;
bool book[1005];
void Dijkstra(int st) // plain O(n^2) Dijkstra
{
int minn,u;
memset(book,0,sizeof(book));
for(int i = 1; i <= n; i++)
dis[i] = INF;
dis[st] = 0;
for(int i = 1; i <= n; i++)
{
minn = INF;
for(int j = 1; j <= n; j++)
{
if(!book[j] && dis[j] < minn)
{
minn = dis[j];
u = j;
}
}
book[u] = 1;
        // relax: update the tentative distances reachable through u
for(int v = 1; v <= n; v++)
{
if(e[u][v] < INF)
{
if(dis[v] > dis[u]+e[u][v])
dis[v] = dis[u]+e[u][v];
}
}
}
}
int main()
{
int m,s,t,a,b,diss[1005],dist[1005],ans = 0;
cin>>n>>m>>s>>t;
for(int i = 1; i <= n; i++)
        for(int j = 1; j <= n; j++) // initialize the adjacency matrix
{
if(i == j) e[i][j] = 0;
else e[i][j] = INF;
}
for(int i = 0; i < m; i++)
{
cin>>a>>b;
e[a][b] = 1;
e[b][a] = 1;
}
Dijkstra(s);
for(int i = 1; i <= n; i++)
diss[i] = dis[i];
Dijkstra(t);
for(int i = 1; i <= n; i++)
dist[i] = dis[i];
for(int i = 1; i <= n; i++)
for(int j = i+1; j <= n; j++)
{
if(e[i][j] == INF)
{
if(diss[i]+dist[j]+1 >= diss[t] && diss[j]+dist[i]+1 >= dist[s])
ans++;
}
}
cout<<ans<<endl;
return 0;
}
|
import os
import shutil
from contextlib import contextmanager

@contextmanager
def backup(filename):
    shutil.copy(filename, filename + '.backup')
    try:
        yield
    except Exception:
        shutil.copy(filename + '.backup', filename)  # restore, then re-raise
        raise
    else:
        os.remove(filename + '.backup')
<reponame>tsbertalan/CarND-Kidnapped-Vehicle-Project
/*
* particle_filter.cpp
*
* Created on: Dec 12, 2016
* Author: <NAME>
*/
#include <random>
#include <algorithm>
#include <iostream>
#include <numeric>
#include <math.h>
#include <iostream>
#include <sstream>
#include <string>
#include <iterator>
#include <chrono>
#include "particle_filter.h"
using namespace std;
// Print out lots more information?
//#define VERBOSE
// Record all particles positions to a file?
//#define RECORD_PARTICLES
void ParticleFilter::init(double x, double y, double theta, double std[]) {
// Set the number of particles. Initialize all particles to first position (based on estimates of
// x, y, theta and their uncertainties from GPS) and all weights to 1.
// Add random Gaussian noise to each particle.
// NOTE: Consult particle_filter.h for more information about this method (and others in this file).
num_particles = 100;
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
default_random_engine gen(seed);
#ifdef DEBUG_PREDICTION
normal_distribution<double> dist_x(x, 0);
normal_distribution<double> dist_y(y, 0);
normal_distribution<double> dist_t(theta, 0);
#else
normal_distribution<double> dist_x(x, std[0]);
normal_distribution<double> dist_y(y, std[1]);
normal_distribution<double> dist_t(theta, std[2]);
#endif
for(int i=0; i<num_particles; i++) {
Particle p;
p.id = 0;
p.x = dist_x(gen);
p.y = dist_y(gen);
p.theta = dist_t(gen);
p.weight = 1;
#ifdef DEBUG_PREDICTION
if(i > 0) p.weight = 0;
#endif
particles.push_back(p);
}
is_initialized = true;
}
void ParticleFilter::prediction(double delta_t, double std_pos[], double velocity, double yaw_rate) {
// Add measurements to each particle and add random Gaussian noise.
// NOTE: When adding noise you may find std::normal_distribution and std::default_random_engine useful.
// http://en.cppreference.com/w/cpp/numeric/random/normal_distribution
// http://www.cplusplus.com/reference/random/default_random_engine/
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
default_random_engine gen(seed);
normal_distribution<double> dist_x(0, std_pos[0]);
normal_distribution<double> dist_y(0, std_pos[1]);
normal_distribution<double> dist_t(0, std_pos[2]);
#ifdef RECORD_PARTICLES
// Record particle locations to a file for external visualization.
std::ofstream f;
f.open("particle_histories.out", std::ios_base::app);
#endif
for(auto& p : particles) {
double xf, yf, tf;
if(fabs(yaw_rate) < .001) {
xf = p.x + velocity * delta_t * cos(p.theta);
yf = p.y + velocity * delta_t * sin(p.theta);
tf = p.theta + yaw_rate * delta_t;
} else {
xf = p.x + velocity / yaw_rate * (sin(p.theta + yaw_rate * delta_t) - sin(p.theta));
yf = p.y + velocity / yaw_rate * (cos(p.theta) - cos(p.theta + yaw_rate * delta_t));
tf = p.theta + yaw_rate * delta_t;
}
p.x = xf;
p.y = yf;
p.theta = tf;
// Add noise.
#ifndef DEBUG_PREDICTION
double nx, ny, nt;
nx = dist_x(gen);
ny = dist_y(gen);
nt = dist_t(gen);
p.x += nx;
p.y += ny;
p.theta += nt;
#endif
#ifdef RECORD_PARTICLES
// Record particle locations.
f << p.describe();
f.flush();
#endif
}
#ifdef RECORD_PARTICLES
f << endl;
f.close();
#endif
}
vector<Deviation> ParticleFilter::dataAssociation(std::vector<LandmarkObs> predicted, std::vector<LandmarkObs>& observations) {
// Find the predicted measurement that is closest to each observed measurement and assign the
// observed measurement to this particular landmark.
// NOTE: this method will NOT be called by the grading code. But you will probably find it useful to
// implement this method and use it as a helper during the updateWeights phase.
// Brute force method.
// It would be better to keep the landmarks in a quadtree,
// and restrict our search only to a small neighborhood of the observation.
vector<Deviation> deviations;
// Loop over all the landmark observations.
for(auto& observation : observations) {
// Find the nearest map landmark.
Deviation deviation, smallest_deviation;
smallest_deviation.dx = INFINITY;
smallest_deviation.dy = INFINITY;
// Loop over all the map landmarks, transformed into car coordinates.
for(auto& prediction : predicted) {
// Find the distance to this map landmark.
deviation.dx = prediction.x - observation.x;
deviation.dy = prediction.y - observation.y;
// Record the closest map landmark for this observed landmark.
double r = deviation.r();
if( r < smallest_deviation.r() ) {
smallest_deviation.dx = deviation.dx;
smallest_deviation.dy = deviation.dy;
observation.id = prediction.id;
}
}
// Save the deviation vector for computing likelihood later.
deviations.push_back(smallest_deviation);
}
return deviations;
}
void homogeneousTransform(double *pair, double x, double y, double xp, double yp, double tp) {
pair[0] = xp + x * cos(tp) - y * sin(tp);
pair[1] = yp + x * sin(tp) + y * cos(tp);
}
Map::single_landmark_s car2map(Particle &car, LandmarkObs &obs) {
Map::single_landmark_s map_obs;
double pair[2];
homogeneousTransform(pair, obs.x, obs.y, car.x, car.y, car.theta);
map_obs.x_f = pair[0];
map_obs.y_f = pair[1];
// map_obs.x_f = car.x + cos(car.theta) * obs.x - sin(car.theta) * obs.y;
// map_obs.y_f = car.y + sin(car.theta) * obs.x + cos(car.theta) * obs.y;
map_obs.id_i = obs.id;
return map_obs;
}
LandmarkObs map2car(Particle& car, Map::single_landmark_s map_obs) {
#ifdef VERBOSE
cout.precision(17);
msg("dict(");
cout << "car_particle = dict(x=" << car.x <<", y=" << car.y << ", t=" <<car.theta <<")," << endl;
cout << "map_landmark = dict(" << "id=" << map_obs.id_i << ", x=" << map_obs.x_f << ", y=" << map_obs.y_f << ")," << endl;
#endif
LandmarkObs car_obs;
double pair[2];
homogeneousTransform(pair, map_obs.x_f, map_obs.y_f, -car.x, -car.y, -car.theta);
car_obs.x = pair[0];
car_obs.y = pair[1];
// From asking Mathematica to invert the given transformation:
// car_obs.x = (map_obs.x_f - car.x) * cos(car.theta) + (map_obs.x_f - car.y) * sin(car.theta);
// car_obs.y = (map_obs.y_f - car.y) * cos(car.theta) - (map_obs.x_f - car.x) * sin(car.theta);
// From plugging in -car.x, -car.y, -car.theta into the transformation manually.
// car_obs.x = map_obs.x_f * cos(car.theta) + map_obs.y_f * sin(car.theta) - car.x;
// car_obs.y = -map_obs.x_f * sin(car.theta) + map_obs.y_f * cos(car.theta) - car.y;
car_obs.id = map_obs.id_i;
#ifdef VERBOSE
cout << "car_obs = dict(" << "id=" << car_obs.id << ", x=" << car_obs.x << ", y=" << car_obs.y << ")," << endl;
Map::single_landmark_s reconstruction = car2map(car, car_obs);
cout << "reconstruction = dict(" << "id=" << reconstruction.id_i << ", x=" << reconstruction.x_f << ", y=" << reconstruction.y_f << ")," << endl;
msg("),");
#endif
return car_obs;
}
void ParticleFilter::updateWeights(double sensor_range, double std_landmark[],
const std::vector<LandmarkObs> &observations, const Map &map_landmarks) {
	// Update the weights of each particle using a multivariate Gaussian distribution. You can read
// more about this distribution here: https://en.wikipedia.org/wiki/Multivariate_normal_distribution
// NOTE: The observations are given in the VEHICLE'S coordinate system. Your particles are located
// according to the MAP'S coordinate system. You will need to transform between the two systems.
// Keep in mind that this transformation requires both rotation AND translation (but no scaling).
// The following is a good resource for the theory:
// https://www.willamette.edu/~gorr/classes/GeneralGraphics/Transforms/transforms2d.htm
// and the following is a good resource for the actual equation to implement (look at equation
// 3.33
// http://planning.cs.uiuc.edu/node99.html
#ifdef DEBUG_PREDICTION
return;
#endif
// For each particle ...
weights.clear();
for(Particle &particle : particles) {
// Do the data association in map space.
// Copy landmarks into a vector of observations.
vector<LandmarkObs> landmarks;
for(auto& map_landmark : map_landmarks.landmark_list) {
landmarks.push_back(LandmarkObs{map_landmark.id_i, map_landmark.x_f, map_landmark.y_f});
}
// Transform observations to map space.
// Others suggested using sensor_range here to restrict our search later,
// but you still need to iterate over all (particle x observation) pairs to do that exclusion,
		// so it's not a savings. As mentioned above in dataAssociation, the real way to use
// sensor_range for a speedup would be something like a quadtree.
vector<LandmarkObs> map_observations;
for(auto car_observation : observations) {
Map::single_landmark_s map_observation = car2map(particle, car_observation);
map_observations.push_back(LandmarkObs{map_observation.id_i, map_observation.x_f, map_observation.y_f});
}
// Associate each observation with one landmark.
vector<Deviation> deviations = dataAssociation(landmarks, map_observations);
// Since we did the association now, set it in the particle.
vector<int> associations;
vector<double> sense_x;
vector<double> sense_y;
for(auto& obs : map_observations) {
associations.push_back(obs.id);
sense_x.push_back(obs.x);
sense_y.push_back(obs.y);
}
SetAssociations(particle, associations, sense_x, sense_y);
// With the distances between observations and predicted landmark locations; calculate a likelihood for each.
// Compute the product likelihood for the particle.
double likelihood = 1.0;
double exponent;
#ifdef VERBOSE
cout << "Weight for particle " << &particle << " is... " << endl;
#endif
for(auto& d : deviations) {
double dx, dy;
// What am I supposed to be doing with sensor_range?
// if(d.r() < sensor_range) {
dx = d.dx;
dy = d.dy;
// } else {
// dx = sqrt(pow(sensor_range, 2)/2.0);
// dy = dx;
// }
#ifdef VERBOSE
cout << " (" << dx << "," << dy << ")->";
#endif
exponent =
pow(dx, 2) / 2 / std_landmark[0] / std_landmark[0]
+ pow(dy, 2) / 2 / std_landmark[1] / std_landmark[1];
exponent *= -1;
likelihood *= exp(exponent) / 2 / M_PI / std_landmark[0] / std_landmark[1];
#ifdef VERBOSE
cout << likelihood << endl;
#endif
}
#ifdef VERBOSE
cout << endl;
#endif
// Let the weight for the particle be just the product likelihood.
particle.weight = likelihood;
weights.push_back(likelihood);
}
}
void ParticleFilter::resample() {
// Resample particles with replacement with probability proportional to their weight.
// NOTE: You may find std::discrete_distribution helpful here.
// http://en.cppreference.com/w/cpp/numeric/random/discrete_distribution
#ifdef DEBUG_PREDICTION
return;
#endif
vector<double> weights;
for(auto &particle : particles) {
weights.push_back(particle.weight);
}
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
default_random_engine gen(seed);
discrete_distribution<> dist(weights.begin(), weights.end());
vector<Particle> resampled;
for(int _iparticle=0; _iparticle<num_particles; _iparticle++) {
int isample = dist(gen);
resampled.push_back(particles[isample]);
}
particles = resampled;
}
void ParticleFilter::SetAssociations(Particle &particle, const vector<int> associations, const vector<double> sense_x,
const vector<double> sense_y
)
{
	// particle: the particle to assign the listed associations and their (x,y) world coordinates to
// associations: The landmark id that goes along with each listed association
// sense_x: the associations x mapping already converted to world coordinates
// sense_y: the associations y mapping already converted to world coordinates
particle.associations.clear();
particle.sense_x.clear();
particle.sense_y.clear();
particle.associations = associations;
particle.sense_x = sense_x;
particle.sense_y = sense_y;
}
string ParticleFilter::getAssociations(Particle &best)
{
vector<int> v = best.associations;
stringstream ss;
copy( v.begin(), v.end(), ostream_iterator<int>(ss, " "));
string s = ss.str();
s = s.substr(0, s.length()-1); // get rid of the trailing space
return s;
}
string ParticleFilter::getSenseX(Particle &best)
{
vector<double> v = best.sense_x;
stringstream ss;
copy( v.begin(), v.end(), ostream_iterator<float>(ss, " "));
string s = ss.str();
s = s.substr(0, s.length()-1); // get rid of the trailing space
return s;
}
string ParticleFilter::getSenseY(Particle &best)
{
vector<double> v = best.sense_y;
stringstream ss;
copy( v.begin(), v.end(), ostream_iterator<float>(ss, " "));
string s = ss.str();
s = s.substr(0, s.length()-1); // get rid of the trailing space
return s;
}
double Deviation::r() {
return sqrt(pow(dx, 2) + pow(dy, 2));
}
void msg(std::string m) {
std::cout << m << std::endl;
std::cout.flush();
}
std::string Particle::describe() {
std::ostringstream ss;
ss << "(" << x << "," << y << "," << theta << ")";
return ss.str();
}
|
package com.android.skripsi.carikuliner.rest;
import com.android.skripsi.carikuliner.model.GetKategori;
import com.android.skripsi.carikuliner.model.GetDetail;
import com.android.skripsi.carikuliner.model.GetRekomendasi;
import retrofit2.Call;
import retrofit2.http.GET;
import retrofit2.http.Query;
//interface used to retrieve data from the server via a RESTful web service
public interface ApiInterface {
//get category data from server
@GET("get_category")
Call<GetKategori> getKategori();
//get recommendation data from server
@GET("recommendations")
Call<GetRekomendasi> getRekomendasi(@Query("lat") double lat,
@Query("long") double _long,
@Query("cat") String category,
@Query("weight") String weight);
//get details of the recommended place chosen by the user
@GET("get_datarecommend")
Call<GetDetail> getDetail(@Query("id") String id);
}
|
Fabrication of Flexible Piezoelectric PZT/Fabric Composite

Flexible piezoelectric PZT/fabric composite material offers the pliability and toughness that traditional PZT patches lack. It has great application prospects for improving the sensitivity of sensors and actuators made from piezoelectric materials, especially when they are used on curved surfaces or under complicated conditions. In this paper, glass fiber cloth was adopted as the carrier on which PZT piezoelectric crystal particles were grown by the hydrothermal method, and the optimum conditions were studied. The results showed that soft glass fiber cloth is an ideal carrier. A large number of cubic-shaped PZT nanocrystallites grew firmly on the carrier with a dense and uniform distribution. The best hydrothermal conditions were found to be pH 13, a reaction time of 24 h, and a reaction temperature of 200°C.

Introduction

Piezoelectric materials have the ability to convert mechanical energy into electrical energy and have potential applications as smart materials for sensor and actuator devices that require high directional sensitivity in structural health monitoring. Lead zirconate titanate (PZT) is a common piezoelectric material in wide commercial use for piezoelectric actuators and sensors. Monolithic PZT piezoceramics, however, including PZT patches and wafers, are very brittle and have poor fatigue resistance, which makes them vulnerable to accidental breakage during handling and bonding and severely limits their ability to conform to curved surfaces. This, in turn, seriously affects the sensitivity of the resulting sensor and actuator devices. To resolve these inadequacies of monolithic piezoceramic materials, much progress has been made in developing composite piezoelectric materials that combine high elasticity with good piezoelectric properties.
Recently, Chen and coworkers fabricated a flexible 1-3 piezocomposite made up of PZT microfibers, with the micro-PZT fibers arranged along one direction within an epoxy resin matrix. Qiu et al. introduced Pb(Nb,Ni)O3-Pb(Zr,Ti)O3 (PNN-PZT) piezoelectric ceramic fibers with a metal core to strengthen the toughness of the ceramic fibers. Piezoelectric fiber composites with an elastic coating have been employed to protect the piezoelectric materials from mechanical failure. However, it remains difficult to improve the low fracture toughness and electric fatigue of this delicate class of material. A major current challenge is to obtain a PZT composite material that is both highly soft and tough. In this study, the fabrication of a highly pliable piezoelectric PZT/fabric composite material by the hydrothermal method is reported. The growth of the PZT piezoelectric crystallites on the fabric carrier and the optimum hydrothermal conditions were also studied.

Experimental Process

Following the stoichiometry of the molecular formula Pb(Zr0.52Ti0.48)O3, this experiment used tetrabutyl titanate (Ti(OC4H9)4), lead nitrate (Pb(NO3)2), and zirconium oxychloride (ZrOCl2·8H2O) as raw materials. Tetrabutyl titanate was poured slowly into the mixed solution of lead nitrate and zirconium oxychloride. After magnetic stirring for 10 min, sodium hydroxide (NaOH) solution was dripped into the mixture until a given pH value was reached; the mixture was then transferred into the hydrothermal synthesis reactor, with a filling factor of 60%. Three to five pieces of glass fiber cloth carrier of suitable size were placed in the hydrothermal synthesis reactor, and the reactor was then put in an oven at a temperature of 200°C for a given reaction time.
After natural cooling to room temperature, the carriers were washed carefully with deionized water, and 0.1 mol·L−1 silver nitrate (AgNO3) solution was used to test the washing liquid until no chloride-ion content was detected. The carriers were then dried at 80°C for 2 h to obtain the PZT/fabric piezoelectric composite material. The specific reaction conditions are shown in Table 1. Microstructures of the composites and of the nano-PZT crystal particles were characterized with Scanning Electron Microscopy (SEM, JSM-7001F) and Energy Dispersive X-ray Detector (EDAX) techniques.

The Morphology Characteristics of the PZT/Fabric Piezoelectric Composite

A variety of fabrics were tested as carriers, including cotton cloth, filter cloth, ceramic fiber cloth, and nonwoven fabrics. The results showed that most of the fabric materials melted completely in the hydrothermal synthesis reactor under the high-temperature, high-pressure, and strongly alkaline corrosion conditions. Ceramic fiber paper, being bonded with glue, could not survive either. Although cotton cloth, nonwoven fabrics, and glass fiber cloth remained present in the environment of the reaction kettle, the cotton cloth was badly damaged and became ragged, and the nonwoven fabric contracted with a great change in dimension. Only the glass fiber cloth kept its original fiber structure; the integrity of the fibers was maintained without any breakage, and its elasticity remained good, as shown in Figure 1(a). From Figure 1(a), we can observe that many PZT nanocrystallites grew densely on the surface of the soft glass fiber cloth. The grains are cubic-shaped, as shown in the zoomed image in the upper right corner of Figure 1(a). According to the spectrum analysis in Figure 1(b), the chemical composition of the grains contains Pb, Zr, Ti, and O elements; the composition and its proportions are consistent with PZT material.
Other elements such as Ca and C, which can be seen in the spectrum, come from the composition of the glass fiber cloth carrier.

The Influence of pH Value on the Growth of the Nano-PZT Crystal Particles

At different pH values, the morphology of the PZT nanocrystallites on the glass fiber cloth carrier differs. Figures 2(a) and 2(b) show the micromorphology of samples A1 and A2, at pH values of 8 and 13, respectively. In Figure 2(a) the PZT nanocrystallites appear as a hexagonal lamellar structure, while some fibrous and cubic-shaped PZT nanocrystallites can be seen in Figure 2(b).

It is well known that PZT crystallizes in the perovskite structure, whose basic unit is the oxygen octahedron. The oxygen octahedra can join into PZT crystals by corner sharing. If the main growth direction is one-dimensional, fibrous PZT crystals form, while sheet-like or layered PZT crystals form if the major growth is two-dimensional. Angular cubic-shaped PZT grains form if the main growth proceeds in three dimensions. Alkaline conditions are conducive to PZT grain crystallization and growth, and the pH value strongly affects how the oxygen octahedra of the PZT crystal combine. Pb2+ and OH− ions combine during the growth process, and when the pH value is low the oxygen octahedra grow mainly in two dimensions while growth in the third dimension is exceptionally slow; thus the PZT crystallizes as sheet-like or layered grains, as shown in Figure 2(a). When the pH value is higher, because the Na+ ion radius is small, a large number of Na+ ions enter the precursor bodies of the hydrated Zr and Ti ions, and the PZT crystal precursors are affected by the Na+ ions. This fractures the PZT crystal precursors and makes the oxygen octahedra grow to varying degrees in three-dimensional space.
The growth direction of the crystals is then three-dimensional, and the crystals appear fibrous or cubic-shaped, as shown in Figure 2(b).

The Influence of Reaction Time on the Growth of the Nano-PZT Crystal Particles

Pb(OH)2, Zr(OH)4, and Ti(OH)4 are the hydrates of the lead, zirconium, and titanium elements of the raw materials; because of their different solubilities they remain suspended in the mixed solution. Under alkaline conditions, when these suspended reactant particles partly dissolve in the solution, they form a supersaturated solution phase. PZT crystal nuclei are generated through the interaction of supersaturated ions or ionic groups; the nuclei collide with each other in the suspension, small nuclei gradually dissolve, and large nuclei grow slowly and, beyond a certain degree, precipitate out of the solution. In the hydrothermal system, adsorption and sedimentation of the growth units on the crystal surface strongly affect the crystal growth rate and direction. Because dissolution and precipitation are slow, the reaction time greatly affects the resulting PZT crystal morphology. The longer the reaction time, the further the precipitation proceeds and the more pronounced the PZT crystal morphology becomes. Sheet-shaped PZT nanocrystallites were generated on the glass fiber carrier after the mixture had reacted for 10 h, as shown in Figure 3(a) (sample A3). After reaction for 18 h, the quantity of sheet-shaped nanocrystallites on the carrier began to decrease, and a large number of cubic-shaped PZT nanocrystallites began to grow, as shown in Figure 3(b) (sample A4). It can be concluded that a longer hydrothermal reaction time makes it easier to obtain cubic-shaped PZT nanoparticles, as the precursors dissolve and the reactive ions are adsorbed and recombined.
Conclusion

Soft glass fiber cloth is an ideal carrier: it keeps its original fiber structure and good elasticity, and the integrity of the fibers is maintained without any breakage in the environment of the reaction kettle. It is well suited to preparing the piezoelectric composite material, and many PZT nanocrystallites can be grown firmly and densely on the carrier by the hydrothermal method. During this process, the pH value affects the morphology of the PZT crystal grains: when the pH value is less than 10, the PZT crystals have a lamellar structure, while cubic-shaped PZT crystals are generated at pH 13. Good PZT crystal morphology can therefore be obtained at pH = 13. The reaction time also strongly affects the morphology of the PZT crystal grains; as the reaction time increases, cubic-shaped PZT nanocrystallites become easier to obtain. The reaction time influences both the integrity and the size of the grains, and the best reaction time is 24 hours.
Only 39 percent of American voters think black-white relations are improving. And that's the optimists: Thirteen percent of black voters think race relations are getting better. What's the deal, America? Haven't you noticed our black president?
(Or, maybe, the problem is that everyone did notice our black president, but not in a nice way? And then a small but loud group of white people started insinuating that he is actually some kind of foreigner, and emailing each other explicitly racist jokes? And that small but loud group is actually one of our two major political parties? Just speculating, though! It could just be that black folks are intractably pessimistic.)
Oh, America, we had so much hope after the beer summit! Sixty-two percent of American voters thought race relations were improving last July, during the fantastically gripping and important news story of "The Time Barack Obama Had Beer With a Cop and a Harvard Professor." How naive we all were, to have believed that a short meeting over Red Stripe and Blue Moon would set this country back on the path of tolerance and openness from which it has never wavered!
Meanwhile, even fewer voters—21 percent—believe relations between white folks and Hispanic folks are improving, while just 16 percent think relations between blacks and Hispanics are getting better. But a good new immigration law will solve that, right?
[Rasmussen; image via Shutterstock] |
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package armhealthcareapis
import (
"context"
"errors"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
armruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
"net/http"
"net/url"
"strings"
)
// IotConnectorsClient contains the methods for the IotConnectors group.
// Don't use this type directly, use NewIotConnectorsClient() instead.
type IotConnectorsClient struct {
host string
subscriptionID string
pl runtime.Pipeline
}
// NewIotConnectorsClient creates a new instance of IotConnectorsClient with the specified values.
// subscriptionID - The subscription identifier.
// credential - used to authorize requests. Usually a credential from azidentity.
// options - pass nil to accept the default values.
func NewIotConnectorsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*IotConnectorsClient, error) {
if options == nil {
options = &arm.ClientOptions{}
}
ep := cloud.AzurePublic.Services[cloud.ResourceManager].Endpoint
if c, ok := options.Cloud.Services[cloud.ResourceManager]; ok {
ep = c.Endpoint
}
pl, err := armruntime.NewPipeline(moduleName, moduleVersion, credential, runtime.PipelineOptions{}, options)
if err != nil {
return nil, err
}
client := &IotConnectorsClient{
subscriptionID: subscriptionID,
host: ep,
pl: pl,
}
return client, nil
}
// BeginCreateOrUpdate - Creates or updates an IoT Connector resource with the specified parameters.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
// resourceGroupName - The name of the resource group that contains the service instance.
// workspaceName - The name of workspace resource.
// iotConnectorName - The name of IoT Connector resource.
// iotConnector - The parameters for creating or updating an IoT Connectors resource.
// options - IotConnectorsClientBeginCreateOrUpdateOptions contains the optional parameters for the IotConnectorsClient.BeginCreateOrUpdate
// method.
func (client *IotConnectorsClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, workspaceName string, iotConnectorName string, iotConnector IotConnector, options *IotConnectorsClientBeginCreateOrUpdateOptions) (*runtime.Poller[IotConnectorsClientCreateOrUpdateResponse], error) {
if options == nil || options.ResumeToken == "" {
resp, err := client.createOrUpdate(ctx, resourceGroupName, workspaceName, iotConnectorName, iotConnector, options)
if err != nil {
return nil, err
}
return runtime.NewPoller[IotConnectorsClientCreateOrUpdateResponse](resp, client.pl, nil)
} else {
return runtime.NewPollerFromResumeToken[IotConnectorsClientCreateOrUpdateResponse](options.ResumeToken, client.pl, nil)
}
}
// CreateOrUpdate - Creates or updates an IoT Connector resource with the specified parameters.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
func (client *IotConnectorsClient) createOrUpdate(ctx context.Context, resourceGroupName string, workspaceName string, iotConnectorName string, iotConnector IotConnector, options *IotConnectorsClientBeginCreateOrUpdateOptions) (*http.Response, error) {
req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, workspaceName, iotConnectorName, iotConnector, options)
if err != nil {
return nil, err
}
resp, err := client.pl.Do(req)
if err != nil {
return nil, err
}
if !runtime.HasStatusCode(resp, http.StatusOK, http.StatusCreated, http.StatusAccepted) {
return nil, runtime.NewResponseError(resp)
}
return resp, nil
}
// createOrUpdateCreateRequest creates the CreateOrUpdate request.
func (client *IotConnectorsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, workspaceName string, iotConnectorName string, iotConnector IotConnector, options *IotConnectorsClientBeginCreateOrUpdateOptions) (*policy.Request, error) {
urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HealthcareApis/workspaces/{workspaceName}/iotconnectors/{iotConnectorName}"
if resourceGroupName == "" {
return nil, errors.New("parameter resourceGroupName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName))
if client.subscriptionID == "" {
return nil, errors.New("parameter client.subscriptionID cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID))
if workspaceName == "" {
return nil, errors.New("parameter workspaceName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{workspaceName}", url.PathEscape(workspaceName))
if iotConnectorName == "" {
return nil, errors.New("parameter iotConnectorName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{iotConnectorName}", url.PathEscape(iotConnectorName))
req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.host, urlPath))
if err != nil {
return nil, err
}
reqQP := req.Raw().URL.Query()
reqQP.Set("api-version", "2022-01-31-preview")
req.Raw().URL.RawQuery = reqQP.Encode()
req.Raw().Header["Accept"] = []string{"application/json"}
return req, runtime.MarshalAsJSON(req, iotConnector)
}
// BeginDelete - Deletes an IoT Connector.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
// resourceGroupName - The name of the resource group that contains the service instance.
// iotConnectorName - The name of IoT Connector resource.
// workspaceName - The name of workspace resource.
// options - IotConnectorsClientBeginDeleteOptions contains the optional parameters for the IotConnectorsClient.BeginDelete
// method.
func (client *IotConnectorsClient) BeginDelete(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, options *IotConnectorsClientBeginDeleteOptions) (*runtime.Poller[IotConnectorsClientDeleteResponse], error) {
if options == nil || options.ResumeToken == "" {
resp, err := client.deleteOperation(ctx, resourceGroupName, iotConnectorName, workspaceName, options)
if err != nil {
return nil, err
}
return runtime.NewPoller[IotConnectorsClientDeleteResponse](resp, client.pl, nil)
} else {
return runtime.NewPollerFromResumeToken[IotConnectorsClientDeleteResponse](options.ResumeToken, client.pl, nil)
}
}
// Delete - Deletes an IoT Connector.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
func (client *IotConnectorsClient) deleteOperation(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, options *IotConnectorsClientBeginDeleteOptions) (*http.Response, error) {
req, err := client.deleteCreateRequest(ctx, resourceGroupName, iotConnectorName, workspaceName, options)
if err != nil {
return nil, err
}
resp, err := client.pl.Do(req)
if err != nil {
return nil, err
}
if !runtime.HasStatusCode(resp, http.StatusOK, http.StatusAccepted, http.StatusNoContent) {
return nil, runtime.NewResponseError(resp)
}
return resp, nil
}
// deleteCreateRequest creates the Delete request.
func (client *IotConnectorsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, options *IotConnectorsClientBeginDeleteOptions) (*policy.Request, error) {
urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HealthcareApis/workspaces/{workspaceName}/iotconnectors/{iotConnectorName}"
if client.subscriptionID == "" {
return nil, errors.New("parameter client.subscriptionID cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID))
if resourceGroupName == "" {
return nil, errors.New("parameter resourceGroupName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName))
if iotConnectorName == "" {
return nil, errors.New("parameter iotConnectorName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{iotConnectorName}", url.PathEscape(iotConnectorName))
if workspaceName == "" {
return nil, errors.New("parameter workspaceName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{workspaceName}", url.PathEscape(workspaceName))
req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.host, urlPath))
if err != nil {
return nil, err
}
reqQP := req.Raw().URL.Query()
reqQP.Set("api-version", "2022-01-31-preview")
req.Raw().URL.RawQuery = reqQP.Encode()
req.Raw().Header["Accept"] = []string{"application/json"}
return req, nil
}
// Get - Gets the properties of the specified IoT Connector.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
// resourceGroupName - The name of the resource group that contains the service instance.
// workspaceName - The name of workspace resource.
// iotConnectorName - The name of IoT Connector resource.
// options - IotConnectorsClientGetOptions contains the optional parameters for the IotConnectorsClient.Get method.
func (client *IotConnectorsClient) Get(ctx context.Context, resourceGroupName string, workspaceName string, iotConnectorName string, options *IotConnectorsClientGetOptions) (IotConnectorsClientGetResponse, error) {
req, err := client.getCreateRequest(ctx, resourceGroupName, workspaceName, iotConnectorName, options)
if err != nil {
return IotConnectorsClientGetResponse{}, err
}
resp, err := client.pl.Do(req)
if err != nil {
return IotConnectorsClientGetResponse{}, err
}
if !runtime.HasStatusCode(resp, http.StatusOK) {
return IotConnectorsClientGetResponse{}, runtime.NewResponseError(resp)
}
return client.getHandleResponse(resp)
}
// getCreateRequest creates the Get request.
func (client *IotConnectorsClient) getCreateRequest(ctx context.Context, resourceGroupName string, workspaceName string, iotConnectorName string, options *IotConnectorsClientGetOptions) (*policy.Request, error) {
urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HealthcareApis/workspaces/{workspaceName}/iotconnectors/{iotConnectorName}"
if resourceGroupName == "" {
return nil, errors.New("parameter resourceGroupName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName))
if client.subscriptionID == "" {
return nil, errors.New("parameter client.subscriptionID cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID))
if workspaceName == "" {
return nil, errors.New("parameter workspaceName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{workspaceName}", url.PathEscape(workspaceName))
if iotConnectorName == "" {
return nil, errors.New("parameter iotConnectorName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{iotConnectorName}", url.PathEscape(iotConnectorName))
req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.host, urlPath))
if err != nil {
return nil, err
}
reqQP := req.Raw().URL.Query()
reqQP.Set("api-version", "2022-01-31-preview")
req.Raw().URL.RawQuery = reqQP.Encode()
req.Raw().Header["Accept"] = []string{"application/json"}
return req, nil
}
// getHandleResponse handles the Get response.
func (client *IotConnectorsClient) getHandleResponse(resp *http.Response) (IotConnectorsClientGetResponse, error) {
result := IotConnectorsClientGetResponse{}
if err := runtime.UnmarshalAsJSON(resp, &result.IotConnector); err != nil {
return IotConnectorsClientGetResponse{}, err
}
return result, nil
}
// NewListByWorkspacePager - Lists all IoT Connectors for the given workspace
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
// resourceGroupName - The name of the resource group that contains the service instance.
// workspaceName - The name of workspace resource.
// options - IotConnectorsClientListByWorkspaceOptions contains the optional parameters for the IotConnectorsClient.ListByWorkspace
// method.
func (client *IotConnectorsClient) NewListByWorkspacePager(resourceGroupName string, workspaceName string, options *IotConnectorsClientListByWorkspaceOptions) *runtime.Pager[IotConnectorsClientListByWorkspaceResponse] {
return runtime.NewPager(runtime.PagingHandler[IotConnectorsClientListByWorkspaceResponse]{
More: func(page IotConnectorsClientListByWorkspaceResponse) bool {
return page.NextLink != nil && len(*page.NextLink) > 0
},
Fetcher: func(ctx context.Context, page *IotConnectorsClientListByWorkspaceResponse) (IotConnectorsClientListByWorkspaceResponse, error) {
var req *policy.Request
var err error
if page == nil {
req, err = client.listByWorkspaceCreateRequest(ctx, resourceGroupName, workspaceName, options)
} else {
req, err = runtime.NewRequest(ctx, http.MethodGet, *page.NextLink)
}
if err != nil {
return IotConnectorsClientListByWorkspaceResponse{}, err
}
resp, err := client.pl.Do(req)
if err != nil {
return IotConnectorsClientListByWorkspaceResponse{}, err
}
if !runtime.HasStatusCode(resp, http.StatusOK) {
return IotConnectorsClientListByWorkspaceResponse{}, runtime.NewResponseError(resp)
}
return client.listByWorkspaceHandleResponse(resp)
},
})
}
// listByWorkspaceCreateRequest creates the ListByWorkspace request.
func (client *IotConnectorsClient) listByWorkspaceCreateRequest(ctx context.Context, resourceGroupName string, workspaceName string, options *IotConnectorsClientListByWorkspaceOptions) (*policy.Request, error) {
urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HealthcareApis/workspaces/{workspaceName}/iotconnectors"
if resourceGroupName == "" {
return nil, errors.New("parameter resourceGroupName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName))
if client.subscriptionID == "" {
return nil, errors.New("parameter client.subscriptionID cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID))
if workspaceName == "" {
return nil, errors.New("parameter workspaceName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{workspaceName}", url.PathEscape(workspaceName))
req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.host, urlPath))
if err != nil {
return nil, err
}
reqQP := req.Raw().URL.Query()
reqQP.Set("api-version", "2022-01-31-preview")
req.Raw().URL.RawQuery = reqQP.Encode()
req.Raw().Header["Accept"] = []string{"application/json"}
return req, nil
}
// listByWorkspaceHandleResponse handles the ListByWorkspace response.
func (client *IotConnectorsClient) listByWorkspaceHandleResponse(resp *http.Response) (IotConnectorsClientListByWorkspaceResponse, error) {
result := IotConnectorsClientListByWorkspaceResponse{}
if err := runtime.UnmarshalAsJSON(resp, &result.IotConnectorCollection); err != nil {
return IotConnectorsClientListByWorkspaceResponse{}, err
}
return result, nil
}
// BeginUpdate - Patch an IoT Connector.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
// resourceGroupName - The name of the resource group that contains the service instance.
// iotConnectorName - The name of IoT Connector resource.
// workspaceName - The name of workspace resource.
// iotConnectorPatchResource - The parameters for updating an IoT Connector.
// options - IotConnectorsClientBeginUpdateOptions contains the optional parameters for the IotConnectorsClient.BeginUpdate
// method.
func (client *IotConnectorsClient) BeginUpdate(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, iotConnectorPatchResource IotConnectorPatchResource, options *IotConnectorsClientBeginUpdateOptions) (*runtime.Poller[IotConnectorsClientUpdateResponse], error) {
if options == nil || options.ResumeToken == "" {
resp, err := client.update(ctx, resourceGroupName, iotConnectorName, workspaceName, iotConnectorPatchResource, options)
if err != nil {
return nil, err
}
return runtime.NewPoller[IotConnectorsClientUpdateResponse](resp, client.pl, nil)
} else {
return runtime.NewPollerFromResumeToken[IotConnectorsClientUpdateResponse](options.ResumeToken, client.pl, nil)
}
}
// Update - Patch an IoT Connector.
// If the operation fails it returns an *azcore.ResponseError type.
// Generated from API version 2022-01-31-preview
func (client *IotConnectorsClient) update(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, iotConnectorPatchResource IotConnectorPatchResource, options *IotConnectorsClientBeginUpdateOptions) (*http.Response, error) {
req, err := client.updateCreateRequest(ctx, resourceGroupName, iotConnectorName, workspaceName, iotConnectorPatchResource, options)
if err != nil {
return nil, err
}
resp, err := client.pl.Do(req)
if err != nil {
return nil, err
}
if !runtime.HasStatusCode(resp, http.StatusOK, http.StatusAccepted) {
return nil, runtime.NewResponseError(resp)
}
return resp, nil
}
// updateCreateRequest creates the Update request.
func (client *IotConnectorsClient) updateCreateRequest(ctx context.Context, resourceGroupName string, iotConnectorName string, workspaceName string, iotConnectorPatchResource IotConnectorPatchResource, options *IotConnectorsClientBeginUpdateOptions) (*policy.Request, error) {
urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HealthcareApis/workspaces/{workspaceName}/iotconnectors/{iotConnectorName}"
if resourceGroupName == "" {
return nil, errors.New("parameter resourceGroupName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName))
if client.subscriptionID == "" {
return nil, errors.New("parameter client.subscriptionID cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID))
if iotConnectorName == "" {
return nil, errors.New("parameter iotConnectorName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{iotConnectorName}", url.PathEscape(iotConnectorName))
if workspaceName == "" {
return nil, errors.New("parameter workspaceName cannot be empty")
}
urlPath = strings.ReplaceAll(urlPath, "{workspaceName}", url.PathEscape(workspaceName))
req, err := runtime.NewRequest(ctx, http.MethodPatch, runtime.JoinPaths(client.host, urlPath))
if err != nil {
return nil, err
}
reqQP := req.Raw().URL.Query()
reqQP.Set("api-version", "2022-01-31-preview")
req.Raw().URL.RawQuery = reqQP.Encode()
req.Raw().Header["Accept"] = []string{"application/json"}
return req, runtime.MarshalAsJSON(req, iotConnectorPatchResource)
}
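`updateCreateRequest` follows a common REST pattern: validate each path parameter, URL-escape it, and substitute it into the `{placeholder}` template before attaching query parameters. A minimal Python sketch of that substitution step (a hypothetical helper for illustration, not part of this SDK):

```python
from urllib.parse import quote

def fill_path(template, **params):
    """Substitute {name} placeholders after validating and escaping each value."""
    for name, value in params.items():
        if not value:
            raise ValueError("parameter %s cannot be empty" % name)
        template = template.replace("{%s}" % name, quote(value, safe=""))
    return template

path = fill_path(
    "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}",
    subscriptionId="sub-1",
    resourceGroupName="rg one",
)
print(path)  # /subscriptions/sub-1/resourceGroups/rg%20one
```

Like the Go version, an empty value fails fast rather than producing a malformed URL.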
Volunteers help make the summer meal programs possible. This area is fortunate to have people willing to get involved.
People who want to help Harvesters can go to harvesters.org.
Robert Heindel
Early life and career
Heindel was born in Toledo, Ohio, the oldest of three adopted sons, to working-class parents Charlotte and Robert Heindel. His mother worked at the local Willys Jeep factory assembling carburetors. His father was an engineer and piano player. Heindel began painting at an early age, encouraged by his parents, though especially his mother who believed him to be gifted. Heindel enrolled at the Famous Artists School at 16, eventually becoming one of its most celebrated graduates.
Heindel married Rosalie (Rose) Petres in 1959. They had three sons: Toby (born February 28, 1960, died September 20, 1990), Troy (born December 1, 1961) and Todd (born September 7, 1966). Heindel often signed his paintings with the symbol of his wife Rose.
During his 25-year illustration career Heindel was friends with and competed against the best illustrators of the late twentieth century, including Bob Peak, Bernie Fuchs, Mark English, Fred Otnes, and Alan E. Cober. Together, they created an annual educational event called the Illustrator's Workshop to teach young illustrators about the illustration business.
Heindel reached the top of the illustration business in the early 1980s, his work having appeared in nearly every major print magazine such as ‘Sports Illustrated’, ‘TV Guide’, ‘Ladies Home Journal’, ‘Redbook’, the ‘Saturday Evening Post’, and ‘Time’ magazine. Heindel's ‘Time’ magazine cover of Daniel Ellsberg resides in the permanent collection of The Smithsonian National Portrait Gallery. In 2011, he was posthumously inducted into the Illustrators Hall of Fame. While Heindel benefited from advances in photography and publishing, he foresaw the impact technology would ultimately have on the commercial illustration business. Heindel was known to comment to students that they needed to be prepared for a change in the illustration business, comparing 20th century illustrators to West Virginia coal miners. At the height of his illustration career in the 1980s, Heindel pivoted into fine art; instead of painting American football players for ‘Sports Illustrated,’ he began to paint dancers, something he had wanted to do since first seeing the Royal Ballet's Rudolph Nureyev and Margot Fonteyn dance ‘Paradise Lost’ in 1962.
Fine Art, Dance and Set Design
At 44, Heindel was considered too old to be entering the world of fine art, access that was often controlled by established New York galleries. Heindel adopted a strategy of collaboration, study, and contemporaneous exhibition that would gain him access to the best dancers and performing arts companies around the world. Heindel would spend one to two years preparing for a major exhibition. Months before entering the studio to paint, he would partner with a ballet company, working closely with the artistic director, choreographer and principal dancers, observing and photographing rehearsals. Heindel rarely created images from final performances, preferring instead to paint dancers at rehearsal.
After gathering material, Heindel would retreat to his studio in Easton, CT for six to nine months, creating studies and paintings, capturing the major themes of the performance. Exhibition of the paintings, 30 to 100 paintings and drawings, would take place at or near the performance venue at about the same time as the first performance.
Anthony Dowell, artistic director of The Royal Ballet, said of Heindel's method:
Sometimes, during the weary hours of rehearsal, the last thing a dancer needs is an intruder with a sketch book or camera recording all the trials and secret worries that we all have whilst trying to accomplish the impossible. Robert Heindel, apart from being one of the most courteous and charming of men, manages to camouflage himself into the studio setting, somehow hiding his ‘spying’ eyes. Many silent photos later, and after the magic process has taken place in his studio, one is presented with not only a true image of oneself but with a beautiful study and record of the private moments that one thought had been hidden.
Heindel was commissioned in 1987 by Andrew Lloyd Webber to paint impressions from the musicals Cats and The Phantom of the Opera.
In 1996, Heindel had the opportunity to produce paintings from a Kabuki production. The occasion was a performance of the Kabuki dance Yasuna, performed by Onoe Kikugoro VII. The story of Yasuna, who loses his fiancée just before their marriage and then, insane with grief, wanders the countryside in spring wearing her silk kimono, touched the memory of the grief Heindel felt when he lost his son Toby six years before.
"I’m always very interested," Heindel said, "in cultural differences. My involvement with dance and the theater usually means I am driven into some creative activity. In Japanese culture, I am also most interested in dance. However, once I pursue any of the themes that grip us, like love or defeat, life or death, I feel almost no difference between the cultures." Heindel tried to cast the world of Kabuki within the context of the universal themes that humankind shares and that go beyond East or West within the world of his own paintings.
Death
Robert Heindel died at his home in Guilford, Connecticut on July 3, 2005, after a 10-year struggle with emphysema. There were three unfinished dance paintings in his studio. Prior to his death, Heindel had commented that once he could no longer climb the stairs to his studio to paint, it would be time for him to leave.
// Copyright 2020 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROMEOS_COMPONENTS_ECHE_APP_UI_FEATURE_STATUS_H_
#define CHROMEOS_COMPONENTS_ECHE_APP_UI_FEATURE_STATUS_H_
#include <ostream>
namespace chromeos {
namespace eche_app {
// Enum representing potential status values for the eche feature.
enum class FeatureStatus {
// The user's devices are not eligible for the feature. This means that either
// the Chrome OS device or the user's phone (or both) have not enrolled with
// the requisite feature enum values, OR parent features are not enabled.
kIneligible = 0,
// The feature is disabled, but the user could enable it via settings.
kDisabled = 1,
// The feature is enabled, but currently there is no active connection to
// the phone.
kDisconnected = 2,
// The feature is enabled, and there is an active attempt to connect to the
// phone.
kConnecting = 3,
// The feature is enabled, and there is an active connection with the phone.
kConnected = 4,
};
std::ostream& operator<<(std::ostream& stream, FeatureStatus status);
} // namespace eche_app
} // namespace chromeos
#endif // CHROMEOS_COMPONENTS_ECHE_APP_UI_FEATURE_STATUS_H_

import java.util.ArrayList;
import java.util.List;
|
/**
* A class used to provide generally-useful Perforce filespec-related
* static methods.
*
*
*/
public class FileSpecBuilder {
/**
* Given a list of file paths (which might include revision or label specs, etc.),
* return a corresponding list of file specs. Returns null if pathList is null; skips
* any null element of the list.
*
* @param pathList list of path strings
* @return possibly-null (or empty) list of filespecs
*/
public static List<IFileSpec> makeFileSpecList(List<String> pathList) {
List<IFileSpec> specList = null;
if (pathList != null) {
specList = new ArrayList<IFileSpec>();
for (String path : pathList) {
if (path != null) {
specList.add(new FileSpec(path));
}
}
}
return specList;
}
/**
* Given an array of file paths (which might include revision or label specs, etc.),
* return a corresponding list of file specs. Returns null if pathArray is null; skips
* any null element of the array.<p>
*
* NOTE: use the 'FileSpecBuilder.makeFileSpecList(List<String> pathList)' method if
* you have a very large amount of file paths. The method with the 'List' parameter
* is more memory efficient, since an array keeps data in a contiguous chunk of memory.
*
* @param pathArray array of path strings
* @return possibly-null (or empty) list of filespecs
*/
public static List<IFileSpec> makeFileSpecList(String[] pathArray) {
List<IFileSpec> specList = null;
if (pathArray != null) {
specList = new ArrayList<IFileSpec>();
for (String path : pathArray) {
if (path != null) {
specList.add(new FileSpec(path));
}
}
}
return specList;
}
/**
* Create a list containing a single file spec created from the specified
* path.
*
 * @param path path string
* @return non-null but possibly empty list of filespecs
*/
public static List<IFileSpec> makeFileSpecList(String path) {
return makeFileSpecList(new String[] { path });
}
/**
* Given a list of file specs, return a list of the valid file specs in that list.
* "Valid" here means a) non-null, and b) getOpStatus() returns VALID.
*
* @param fileSpecs candidate file specs
* @return non-null but possibly-empty list of valid file specs
*/
public static List<IFileSpec> getValidFileSpecs(List<IFileSpec> fileSpecs) {
List <IFileSpec> validList = new ArrayList<IFileSpec>();
if (fileSpecs != null) {
for (IFileSpec fSpec : fileSpecs) {
if ((fSpec != null) && (fSpec.getOpStatus() == FileSpecOpStatus.VALID)) {
validList.add(fSpec);
}
}
}
return validList;
}
/**
* Given a list of file specs, return a list of the invalid file specs in that list.
* "Invalid" here means a) non-null, and b) getOpStatus() returns anything but VALID.
*
* @param fileSpecs candidate file specs
* @return non-null but possibly-empty list of invalid file specs
*/
public static List<IFileSpec> getInvalidFileSpecs(List<IFileSpec> fileSpecs) {
List <IFileSpec> invalidList = new ArrayList<IFileSpec>();
if (fileSpecs != null) {
for (IFileSpec fSpec : fileSpecs) {
if ((fSpec != null) && (fSpec.getOpStatus() != FileSpecOpStatus.VALID)) {
invalidList.add(fSpec);
}
}
}
return invalidList;
}
}
def _fast_memory_load_pointer(self, addr):
    # Use integer division: arch.bits / 8 yields a float on Python 3,
    # which breaks the struct format selection below.
    pointer_size = self.project.arch.bits // 8
    buf = self._fast_memory_load(addr)
    if self.project.arch.memory_endness == 'Iend_LE':
        fmt = "<"
    else:
        fmt = ">"
    if pointer_size == 8:
        fmt += "Q"
    elif pointer_size == 4:
        fmt += "I"
    else:
        raise AngrCFGError("Pointer size of %d is not supported" % pointer_size)
    ptr_str = self._ffi.unpack(self._ffi.cast('char*', buf), pointer_size)
    ptr = struct.unpack(fmt, ptr_str)[0]
    return ptr
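The format-string selection above is plain `struct` usage; the behavior can be illustrated standalone with a simplified helper (hypothetical, not the angr API) that mirrors the same endianness and width logic:

```python
import struct

def unpack_pointer(buf, pointer_size, little_endian=True):
    """Decode a raw pointer from buf, mirroring the format selection above."""
    fmt = "<" if little_endian else ">"
    if pointer_size == 8:
        fmt += "Q"          # 64-bit unsigned
    elif pointer_size == 4:
        fmt += "I"          # 32-bit unsigned
    else:
        raise ValueError("Pointer size of %d is not supported" % pointer_size)
    return struct.unpack(fmt, buf[:pointer_size])[0]

raw = bytes([0x00, 0x90, 0x04, 0x08])   # little-endian encoding of 0x08049000
print(hex(unpack_pointer(raw, 4)))      # 0x8049000
```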
The traffic on Dallas’ busy Mockingbird Lane is both on the ground and in the air.
Jets taking off a few blocks away at Love Field and cars lined up on the busy street are a testament to the surge in activity in the district.
“This whole area is undergoing a renaissance,” said developer Jorge Ramirez, who is turning two blocks of old industrial and commercial properties into a new mixed-use development.
His 35-acre, $200 million West Love development is the biggest new project in the Love Field area.
But it’s by far not the only thing happening around Dallas’ close-in airport.
With passenger traffic up more than 50 percent at Love Field and expansions at the nearby medical centers, the area south and west of the airport is attracting new development.
Apartment builder Fairfield Properties just cleared six acres south of Love Field at Denton Drive and Inwood Road where it is building a 347-unit complex.
The Inwood Station apartment project will open next summer.
Fairfield’s project is the latest in a series of apartment developments between Love Field and Parkland Hospital that have added thousands of rental units to the area.
More apartments are coming at the West Love project.
The apartments and townhouses will join two new hotels, an office building and shopping center at the site at Mockingbird and Maple Avenue.
“The retail will be low profile, up against the street — very pedestrian-oriented,” Ramirez said. “There have been a ton of new apartments going up in this area but no new gathering places, no restaurants.”
Dallas developer KDC has picked a site in West Love where it plans to build an office project. It will be the first large office building in the Love Field area in about 30 years.
KDC is already talking to potential tenants for the building, which will be adjacent to the new apartments and hotels.
“We are going to be in the heart of the site,” said Bill Guthrey, KDC senior vice president.
Guthrey said KDC is talking to businesses interested in moving to the area.
“This is exactly what the neighborhood has been looking for,” he said. “We think this is a great alternative to new office space in Uptown or Preston Center,” where buildings are much more expensive.
KDC’s office building will be at least 150,000 square feet.
“We can do up to 200,000 if someone comes along and needs more square footage,” Guthrey said.
The two hotels under construction on Mockingbird Lane will open late this year, said Perry Molubhoy, CEO of builder Atlantic Hotel Group.
The $49 million, eight-story combination Aloft and Element by Westin hotels will have a combined 244 rooms.
Molubhoy said his firm began looking at the market two years ago when flight restrictions at Love Field went away.
Atlantic Hotels Group, which is partnering with Civitas Capital Group on the project, also has new projects in East and West Dallas.
Closer to the entrance to Love Field, developer and investor Viceroy Investments is about to begin a major overhaul of its 244-room Doubletree Hotel on Mockingbird.
“We are spending $2.5 million on the lobby and public spaces,” said Viceroy CEO Steve Rogers.
Viceroy recently rebranded the hotel and has made other upgrades.
Viceroy has gained control of a nearby restaurant building and old motel property at 3130 and 3140 Mockingbird and will redevelop that property.
His firm also owns the adjacent 6500 Cedar Springs building, which houses a large event venue.
And a space formerly occupied by retailer Wisteria is being remodeled into a facility for automaker Tesla, Dallas building permits show.
He’s also just purchased a 15,000-square-foot commercial building at 3448 Mockingbird that will be converted into office space.
An investment group led by developer and investor Scott Rohrman’s 42 Real Estate LLC is still evaluating plans for a three-story midcentury modern office building it purchased last year at 2626 W. Mockingbird Lane.
The yellow metal-paneled building has been vacant for some time.
“We’ve got two or three people that have talked to us about it,” Rohrman said. “We’ve done a space plan for an office user and we have people talking to us about our parking garage.”
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Collection of first-party plugins.
This module exists to isolate tensorboard.program from the potentially
heavyweight build dependencies for first-party plugins. This way people
doing custom builds of TensorBoard have the option to only pay for the
dependencies they want.
This module also grants the flexibility to those doing custom builds, to
automatically inherit the centrally-maintained list of standard plugins,
for less repetition.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import logging
import os
import pkg_resources
from tensorboard.backend import experimental_plugin
from tensorboard.compat import tf
from tensorboard.plugins import base_plugin
from tensorboard.plugins.audio import audio_plugin
from tensorboard.plugins.beholder import beholder_plugin_loader
from tensorboard.plugins.core import core_plugin
from tensorboard.plugins.custom_scalar import custom_scalars_plugin
from tensorboard.plugins.debugger import debugger_plugin_loader
from tensorboard.plugins.debugger_v2 import debugger_v2_plugin
from tensorboard.plugins.distribution import distributions_plugin
from tensorboard.plugins.graph import graphs_plugin
from tensorboard.plugins.histogram import histograms_plugin
from tensorboard.plugins.hparams import hparams_plugin
from tensorboard.plugins.image import images_plugin
from tensorboard.plugins.pr_curve import pr_curves_plugin
from tensorboard.plugins.profile_redirect import profile_redirect_plugin
from tensorboard.plugins.scalar import scalars_plugin
from tensorboard.plugins.text import text_plugin
from tensorboard.plugins.text_v2 import text_v2_plugin
from tensorboard.plugins.mesh import mesh_plugin
from tensorboard.plugins.npmi import npmi_plugin
logger = logging.getLogger(__name__)
class ExperimentalTextV2Plugin(
    text_v2_plugin.TextV2Plugin, experimental_plugin.ExperimentalPlugin
):
    """Angular Text Plugin marked as experimental."""

    pass


class ExperimentalNpmiPlugin(
    npmi_plugin.NpmiPlugin, experimental_plugin.ExperimentalPlugin
):
    """Angular nPMI Plugin marked as experimental."""

    pass
# Ordering matters. The order in which these lines appear determines the
# ordering of tabs in TensorBoard's GUI.
_PLUGINS = [
    core_plugin.CorePluginLoader,
    scalars_plugin.ScalarsPlugin,
    custom_scalars_plugin.CustomScalarsPlugin,
    images_plugin.ImagesPlugin,
    audio_plugin.AudioPlugin,
    debugger_plugin_loader.DebuggerPluginLoader,
    debugger_v2_plugin.DebuggerV2Plugin,
    graphs_plugin.GraphsPlugin,
    distributions_plugin.DistributionsPlugin,
    histograms_plugin.HistogramsPlugin,
    text_plugin.TextPlugin,
    pr_curves_plugin.PrCurvesPlugin,
    profile_redirect_plugin.ProfileRedirectPluginLoader,
    beholder_plugin_loader.BeholderPluginLoader,
    hparams_plugin.HParamsPlugin,
    mesh_plugin.MeshPlugin,
    ExperimentalTextV2Plugin,
    ExperimentalNpmiPlugin,
]
def get_plugins():
    """Returns a list specifying all known TensorBoard plugins.

    This includes both first-party, statically bundled plugins and
    dynamic plugins.

    This list can be passed to the `tensorboard.program.TensorBoard` API.

    Returns:
      The list of default first-party plugins.
    """
    return get_static_plugins() + get_dynamic_plugins()
def get_static_plugins():
    """Returns a list specifying TensorBoard's default first-party plugins.

    Plugins are specified in this list either via a TBLoader instance to load
    the plugin, or the TBPlugin class itself which will be loaded using a
    BasicLoader.

    This list can be passed to the `tensorboard.program.TensorBoard` API.

    Returns:
      The list of default first-party plugins.
      :rtype: list[Type[base_plugin.TBLoader] | Type[base_plugin.TBPlugin]]
    """
    return _PLUGINS[:]
def get_dynamic_plugins():
    """Returns a list specifying TensorBoard's dynamically loaded plugins.

    A dynamic TensorBoard plugin is specified using entry_points [1] and it is
    the robust way to integrate plugins into TensorBoard.

    This list can be passed to the `tensorboard.program.TensorBoard` API.

    Returns:
      The list of dynamic plugins.
      :rtype: list[Type[base_plugin.TBLoader] | Type[base_plugin.TBPlugin]]

    [1]: https://packaging.python.org/specifications/entry-points/
    """
    return [
        entry_point.load()
        for entry_point in pkg_resources.iter_entry_points(
            "tensorboard_plugins"
        )
    ]
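For comparison, the same entry-point discovery that `get_dynamic_plugins` performs with `pkg_resources` can be written against the standard-library `importlib.metadata` (Python 3.8+). The `tensorboard_plugins` group name is the real one used above; the helper itself is an illustrative sketch, not part of this module:

```python
from importlib.metadata import entry_points

def discover_plugins(group="tensorboard_plugins"):
    """Load every object registered under the given entry-point group."""
    try:
        eps = entry_points(group=group)      # Python 3.10+ keyword form
    except TypeError:
        eps = entry_points().get(group, [])  # 3.8/3.9 dict-style API
    return [ep.load() for ep in eps]

# With no plugin packages registered under a group this is simply empty.
print(discover_plugins("some_unregistered_group"))  # []
```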
|
//
// KORIMX.H : CKorIMX class declaration
//
// History:
// 15-NOV-1999 CSLim Created
#if !defined (__KORIMX_H__INCLUDED_)
#define __KORIMX_H__INCLUDED_
#include "globals.h"
#include "proputil.h"
#include "computil.h"
#include "dap.h"
#include "tes.h"
#include "kes.h"
#include "hauto.h"
#include "candlstx.h"
#include "mscandui.h"
#include "toolbar.h"
#include "editssn.h"
#include "immxutil.h"
#include "softkbd.h"
#include "skbdkor.h"
#include "pad.h"
#include "resource.h"
///////////////////////////////////////////////////////////////////////////////
// Class forward declarations
class CEditSession;
class CICPriv;
class CThreadMgrEventSink;
class CFunctionProvider;
class CHanja;
class CCompositionInsertHelper;
///////////////////////////////////////////////////////////////////////////////
// Edit callback state values
#define ESCB_FINALIZECONVERSION 1
#define ESCB_COMPLETE 2
#define ESCB_INSERT_PAD_STRING 3
#define ESCB_KEYSTROKE 4
#define ESCB_TEXTEVENT 5
//#define ESCB_RANGEBROKEN 6
#define ESCB_CANDUI_CLOSECANDUI 6
#define ESCB_HANJA_CONV 7
#define ESCB_FINALIZERECONVERSION 8
#define ESCB_ONSELECTRECONVERSION 9
#define ESCB_ONCANCELRECONVERSION 10
#define ESCB_RECONV_QUERYRECONV 11
#define ESCB_RECONV_GETRECONV 12
#define ESCB_RECONV_SHOWCAND 13
#define ESCB_INIT_MODEBIAS 14
// Conversion modes(bit field for bit op)
#define TIP_ALPHANUMERIC_MODE 0
#define TIP_HANGUL_MODE 1
#define TIP_JUNJA_MODE 2
#define TIP_NULL_CONV_MODE 0x8000
//
// Text Direction
//
typedef enum
{
TEXTDIRECTION_TOPTOBOTTOM = 1,
TEXTDIRECTION_RIGHTTOLEFT,
TEXTDIRECTION_BOTTOMTOTOP,
TEXTDIRECTION_LEFTTORIGHT,
} TEXTDIRECTION;
///////////////////////////////////////////////////////////////////////////////
// CKorIMX class
class CKorIMX : public ITfTextInputProcessor,
public ITfFnConfigure,
public ITfThreadFocusSink,
public ITfCompositionSink,
public ITfCleanupContextSink,
public ITfCleanupContextDurationSink,
public ITfActiveLanguageProfileNotifySink,
public ITfTextEditSink,
public ITfEditTransactionSink,
public CDisplayAttributeProvider
{
public:
CKorIMX();
~CKorIMX();
static HRESULT CreateInstance(IUnknown *pUnkOuter, REFIID riid, void **ppvObj);
//
// IUnknown methods
//
virtual STDMETHODIMP QueryInterface(REFIID riid, void **ppvObj);
virtual STDMETHODIMP_(ULONG) AddRef(void);
virtual STDMETHODIMP_(ULONG) Release(void);
private:
long m_cRef;
public:
//
// ITfX methods
//
STDMETHODIMP Activate(ITfThreadMgr *ptim, TfClientId tid);
STDMETHODIMP Deactivate();
// ITfThreadFocusSink
STDMETHODIMP OnSetThreadFocus();
STDMETHODIMP OnKillThreadFocus();
// ITfCompositionSink
STDMETHODIMP OnCompositionTerminated(TfEditCookie ecWrite, ITfComposition *pComposition);
// ITfCleanupContextDurationSink
STDMETHODIMP OnStartCleanupContext();
STDMETHODIMP OnEndCleanupContext();
// ITfCleanupContextSink
STDMETHODIMP OnCleanupContext(TfEditCookie ecWrite, ITfContext *pic);
// ITfActiveLanguageProfileNotifySink
STDMETHODIMP OnActivated(REFCLSID clsid, REFGUID guidProfile, BOOL fActivated);
// ITFFnConfigure
STDMETHODIMP GetDisplayName(BSTR *pbstrCand);
STDMETHODIMP Show(HWND hwnd, LANGID langid, REFGUID rguidProfile);
// ITfTextEditSink
STDMETHODIMP OnEndEdit(ITfContext *pic, TfEditCookie ecReadOnly, ITfEditRecord *pEditRecord);
// ITfEditTransactionSink
STDMETHODIMP OnStartEditTransaction(ITfContext *pic);
STDMETHODIMP OnEndEditTransaction(ITfContext *pic);
// Operations
public:
// Get/Set On off status
BOOL IsOn(ITfContext *pic);
void SetOnOff(ITfContext *pic, BOOL fOn);
static HRESULT _EditSessionCallback2(TfEditCookie ec, CEditSession2 *pes);
HRESULT MakeResultString(TfEditCookie ec, ITfContext *pic, ITfRange *pRange);
// REVIEW: IC related functions
ITfContext* GetRootIC(ITfDocumentMgr* pDim = NULL);
static BOOL IsDisabledIC(ITfContext *pic);
static BOOL IsEmptyIC(ITfContext *pic);
static BOOL IsCandidateIC(ITfContext *pic);
static HWND GetAppWnd(ITfContext *pic);
BOOL IsPendingCleanup();
// Get AIMM or not?
static BOOL GetAIMM(ITfContext *pic);
// Get/Set conversion mode
DWORD GetConvMode(ITfContext *pic);
DWORD SetConvMode(ITfContext *pic, DWORD dwConvMode);
// Return current Automata object
CHangulAutomata* GetAutomata(ITfContext *pic);
// Cand UI functions
void OpenCandidateUI(TfEditCookie ec, ITfContext *pic, ITfRange *pRange, CCandidateListEx *pCandList);
void CloseCandidateUI(ITfContext *pic);
void CancelCandidate(TfEditCookie ec, ITfContext *pic);
// Soft Kbd functions
HRESULT InitializeSoftKbd();
BOOL IsSoftKbdEnabled() { return m_fSoftKbdEnabled; }
void TerminateSoftKbd();
BOOL GetSoftKBDOnOff();
void SetSoftKBDOnOff(BOOL fOn);
DWORD GetSoftKBDLayout();
void SetSoftKBDLayout(DWORD dwKbdLayout);
HRESULT GetSoftKBDPosition(int *xWnd, int *yWnd);
void SetSoftKBDPosition(int xWnd, int yWnd);
SOFTLAYOUT* GetHangulSKbd() { return &m_KbdHangul; }
// Data access functions
ITfDocumentMgr* GetDIM() { return m_pCurrentDim; }
HRESULT GetFocusDIM(ITfDocumentMgr **ppdim);
ITfThreadMgr* GetTIM() { return m_ptim; }
TfClientId GetTID() { return m_tid; }
ITfContext* GetIC();
CThreadMgrEventSink* GetTIMEventSink() { return m_ptimEventSink; }
static CICPriv* GetInputContextPriv(ITfContext *pic);
void OnFocusChange(ITfContext *pic, BOOL fActivate);
// Window object member access functions
HWND GetOwnerWnd() { return m_hOwnerWnd; }
// Get IImeIPoint
IImeIPoint1* GetIPoint (ITfContext *pic);
// KES_CODE_FOCUS set fForeground?
BOOL IsKeyFocus() { return m_fKeyFocus; }
// Get Pad Core
CPadCore* GetPadCore() { return m_pPadCore; }
// Update Toolbar button
void UpdateToolbar(DWORD dwUpdate) { m_pToolBar->Update(dwUpdate); }
static LRESULT CALLBACK _OwnerWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam);
CFunctionProvider* GetFunctionProvider() { return m_pFuncPrv; }
// Cand UI helper
BOOL IsCandUIOpen() { return m_fCandUIOpen; }
// Get TLS
LIBTHREAD *_GetLibTLS() { return &m_libTLS; }
// Implementation
protected:
// Helper functions
HRESULT SetInputString(TfEditCookie ec, ITfContext *pic, ITfRange *pRange, WCHAR *psz, LANGID langid);
static LANGID GetLangID();
static WCHAR Banja2Junja(WCHAR bChar);
// Cand UI function
void GetCandidateFontInternal(TfEditCookie ec, ITfContext *pic, ITfRange *pRange, LOGFONTW *plf, LONG lFontPoint, BOOL fCandList);
//////////////////////////////////////////////////////////////////////////////
// Internal functions
private:
// Callbacks
static HRESULT _KeyEventCallback(UINT uCode, ITfContext *pic, WPARAM wParam, LPARAM lParam, BOOL *pfEaten, void *pv);
static HRESULT _PreKeyCallback(ITfContext *pic, REFGUID rguid, BOOL *pfEaten, void *pv);
static HRESULT _DIMCallback(UINT uCode, ITfDocumentMgr *pdimNew, ITfDocumentMgr *pdimPrev, void *pv);
static HRESULT _ICCallback(UINT uCode, ITfContext *pic, void *pv);
static HRESULT _CompEventSinkCallback(void *pv, REFGUID rguid);
static void _CleanupCompositionsCallback(TfEditCookie ecWrite, ITfRange *rangeComposition, void *pvPrivate);
void HAutoMata(TfEditCookie ec, ITfContext *pIIC, ITfRange *pIRange, LPBYTE lpbKeyState, WORD wVKey);
BOOL _IsKeyEaten(ITfContext *pic, CKorIMX *pimx, WPARAM wParam, LPARAM lParam, const BYTE abKeyState[256]);
HRESULT _Keystroke(TfEditCookie ec, ITfContext *pic, WPARAM wParam, LPARAM lParam, const BYTE abKeyState[256]);
// IC helpers
HRESULT _InitICPriv(ITfContext *pic);
HRESULT _DeleteICPriv(ITfContext *pic);
// Hanja conversion
HRESULT DoHanjaConversion(TfEditCookie ec, ITfContext *pic, ITfRange *pRange);
HRESULT Reconvert(ITfRange *pSelection);
// Composition
ITfComposition *GetIPComposition(ITfContext *pic);
ITfComposition *CreateIPComposition(TfEditCookie ec, ITfContext *pic, ITfRange* pRangeComp);
void SetIPComposition(ITfContext *pic, ITfComposition *pComposition);
BOOL EndIPComposition(TfEditCookie ec, ITfContext *pic);
// Candidate list
CCandidateListEx *CreateCandidateList(ITfContext *pic, ITfRange *pRange, LPWSTR wchHangul);
TEXTDIRECTION GetTextDirection(TfEditCookie ec, ITfContext *pic, ITfRange *pRange);
CANDUIUIDIRECTION GetCandUIDirection(TfEditCookie ec, ITfContext *pic, ITfRange *pRange);
void CloseCandidateUIProc();
void SelectCandidate(TfEditCookie ec, ITfContext *pic, INT idxCand, BOOL fFinalize);
HMENU CreateCandidateMenu(ITfContext *pic);
static HRESULT CandidateUICallBack(ITfContext *pic, ITfRange *pRange, CCandidateListEx *pCandList, CCandidateStringEx *pCand, TfCandidateResult imcr);
// Cand key
void SetCandidateKeyTable(ITfContext *pic, CANDUIUIDIRECTION dir);
static BOOL IsCandKey(WPARAM wParam, const BYTE abKeyState[256]);
#if 0
// Range functions
void BackupRange(TfEditCookie ec, ITfContext *pic, ITfRange* pRange );
void RestoreRange(TfEditCookie ec, ITfContext *pic );
ITfRange* CreateIPRange(TfEditCookie ec, ITfContext *pic, ITfRange* pRangeOrg);
void SetIPRange(TfEditCookie ec, ITfContext *pic, ITfRange* pRange);
ITfRange* GetIPRange(TfEditCookie ec, ITfContext *pic);
BOOL FlushIPRange(TfEditCookie ec, ITfContext *pic);
#endif
// Modebias (ImmSetConversionStatus() API AIMM compatibility)
BOOL InitializeModeBias(TfEditCookie ec, ITfContext *pic);
void CheckModeBias(ITfContext* pic);
BOOL CheckModeBias(TfEditCookie ec, ITfContext *pic, ITfRange *pSelection);
// SoftKeyboard
//void OnActivatedSoftKbd(BOOl bActivated);
HRESULT ShowSoftKBDWindow(BOOL fShow);
void SoftKbdOnThreadFocusChange(BOOL fSet);
// Helpers
BOOL MySetText(TfEditCookie ec, ITfContext *pic, ITfRange *pRange, const WCHAR *psz, LONG cchText, LANGID langid, GUID *pattr);
BOOL IsKorIMX_GUID_ATOM(TfGuidAtom attr);
///////////////////////////////////////////////////////////////////////////////
// Internal data
private:
ITfDocumentMgr *m_pCurrentDim;
ITfThreadMgr *m_ptim;
TfClientId m_tid;
CThreadMgrEventSink *m_ptimEventSink;
CKeyEventSink *m_pkes;
HWND m_hOwnerWnd;
BOOL m_fKeyFocus;
CPadCore *m_pPadCore;
CToolBar *m_pToolBar;
DWORD m_dwThreadFocusCookie;
DWORD m_dwProfileNotifyCookie;
BOOL m_fPendingCleanup;
CFunctionProvider* m_pFuncPrv;
// For overtyping
CCompositionInsertHelper *m_pInsertHelper;
// Candidate UI
ITfCandidateUI* m_pCandUI;
BOOL m_fCandUIOpen;
// SoftKbd
BOOL m_fSoftKbdEnabled;
ISoftKbd *m_pSoftKbd;
SOFTLAYOUT m_KbdStandard;
SOFTLAYOUT m_KbdHangul;
CSoftKbdWindowEventSink *m_psftkbdwndes;
DWORD m_dwSftKbdwndesCookie;
BOOL m_fSoftKbdOnOffSave;
// Tls for our helper library, we're apt threaded so tls can live in this object
LIBTHREAD m_libTLS;
BOOL m_fNoKorKbd;
};
/*---------------------------------------------------------------------------
CKorIMX::IsPendingCleanup
---------------------------------------------------------------------------*/
inline
BOOL CKorIMX::IsPendingCleanup()
{
return m_fPendingCleanup;
}
/*---------------------------------------------------------------------------
CKorIMX::GetFocusDIM
---------------------------------------------------------------------------*/
inline
HRESULT CKorIMX::GetFocusDIM(ITfDocumentMgr **ppdim)
{
Assert(ppdim);
*ppdim = NULL;
if (m_ptim != NULL)
m_ptim->GetFocus(ppdim);
return *ppdim ? S_OK : E_FAIL;
}
#include "icpriv.h"
/*---------------------------------------------------------------------------
CKorIMX::GetAutomata
---------------------------------------------------------------------------*/
inline
CHangulAutomata* CKorIMX::GetAutomata(ITfContext *pic)
{
CICPriv* picp = GetInputContextPriv(pic);
return (picp) ? picp->GetAutomata() : NULL;
}
/*---------------------------------------------------------------------------
CKorIMX::IsOn
---------------------------------------------------------------------------*/
inline
BOOL CKorIMX::IsOn(ITfContext *pic)
{
DWORD dw = 0;
if (pic == NULL)
return fFalse;
GetCompartmentDWORD(GetTIM(), GUID_COMPARTMENT_KEYBOARD_OPENCLOSE, &dw, fFalse);
return dw ? fTrue : fFalse;
}
/*---------------------------------------------------------------------------
CKorIMX::GetConvMode
---------------------------------------------------------------------------*/
inline
DWORD CKorIMX::GetConvMode(ITfContext *pic)
{
DWORD dw = 0;
if (pic == NULL)
return TIP_ALPHANUMERIC_MODE;
GetCompartmentDWORD(GetTIM(), GUID_COMPARTMENT_KORIMX_CONVMODE, &dw, fFalse);
return dw;
}
/*---------------------------------------------------------------------------
CKorIMX::SetOnOff
---------------------------------------------------------------------------*/
inline
void CKorIMX::SetOnOff(ITfContext *pic, BOOL fOn)
{
if (pic)
SetCompartmentDWORD(m_tid, GetTIM(), GUID_COMPARTMENT_KEYBOARD_OPENCLOSE, fOn ? 0x01 : 0x00, fFalse);
}
/*---------------------------------------------------------------------------
CKorIMX::GetLangID
---------------------------------------------------------------------------*/
inline
LANGID CKorIMX::GetLangID()
{
return MAKELANGID(LANG_KOREAN, SUBLANG_DEFAULT);
}
/*---------------------------------------------------------------------------
CKorIMX::IsKorIMX_GUID_ATOM
---------------------------------------------------------------------------*/
inline
BOOL CKorIMX::IsKorIMX_GUID_ATOM(TfGuidAtom attr)
{
if (IsEqualTFGUIDATOM(&m_libTLS, attr, GUID_ATTR_KORIMX_INPUT))
return fTrue;
return fFalse;
}
/////////////////////////////////////////////////////////////////////////////
// S O F T K E Y B O A R D F U N C T I O N S
/////////////////////////////////////////////////////////////////////////////
/*---------------------------------------------------------------------------
CKorIMX::GetSoftKBDOnOff
---------------------------------------------------------------------------*/
inline
BOOL CKorIMX::GetSoftKBDOnOff()
{
    DWORD dw = 0;
if (GetTIM() == NULL)
return fFalse;
GetCompartmentDWORD(GetTIM(), GUID_COMPARTMENT_KOR_SOFTKBD_OPENCLOSE , &dw, fFalse);
    return dw ? fTrue : fFalse;
}
/*---------------------------------------------------------------------------
CKorIMX::SetSoftKBDOnOff
---------------------------------------------------------------------------*/
inline
void CKorIMX::SetSoftKBDOnOff(BOOL fOn)
{
// check to see if the m_pSoftKbd and soft keyboard related members are initialized.
if (m_fSoftKbdEnabled == fFalse)
InitializeSoftKbd();
if (m_pSoftKbd == NULL || GetTIM() == NULL)
return;
if (fOn == GetSoftKBDOnOff())
return;
SetCompartmentDWORD(GetTID(), GetTIM(), GUID_COMPARTMENT_KOR_SOFTKBD_OPENCLOSE,
fOn ? 0x01 : 0x00 , fFalse);
}
/*---------------------------------------------------------------------------
CKorIMX::GetSoftKBDLayout
---------------------------------------------------------------------------*/
inline
DWORD CKorIMX::GetSoftKBDLayout( )
{
    DWORD dw = 0;
if (m_pSoftKbd == NULL || GetTIM() == NULL)
return NON_LAYOUT;
GetCompartmentDWORD(GetTIM(), GUID_COMPARTMENT_SOFTKBD_KBDLAYOUT, &dw, fFalse);
return dw;
}
/*---------------------------------------------------------------------------
CKorIMX::SetSoftKBDLayout
---------------------------------------------------------------------------*/
inline
void CKorIMX::SetSoftKBDLayout(DWORD dwKbdLayout)
{
    // check to see if the m_pSoftKbd and soft keyboard related members are initialized.
if (m_fSoftKbdEnabled == fFalse )
InitializeSoftKbd();
if ((m_pSoftKbd == NULL) || (GetTIM() == NULL))
return;
SetCompartmentDWORD(GetTID(), GetTIM(), GUID_COMPARTMENT_SOFTKBD_KBDLAYOUT,
dwKbdLayout , fFalse);
}
/*---------------------------------------------------------------------------
CKorIMX::GetSoftKBDPosition
---------------------------------------------------------------------------*/
inline
HRESULT CKorIMX::GetSoftKBDPosition(int *xWnd, int *yWnd)
{
DWORD dwPos;
HRESULT hr = S_OK;
if ((m_pSoftKbd == NULL) || (GetTIM() == NULL))
return E_FAIL;
if (!xWnd || !yWnd)
return E_FAIL;
hr = GetCompartmentDWORD(GetTIM(), GUID_COMPARTMENT_SOFTKBD_WNDPOSITION, &dwPos, TRUE);
if (hr == S_OK)
{
*xWnd = dwPos & 0x0000ffff;
*yWnd = (dwPos >> 16) & 0x0000ffff;
hr = S_OK;
}
else
{
*xWnd = 0;
*yWnd = 0;
hr = E_FAIL;
}
return hr;
}
/*---------------------------------------------------------------------------
CKorIMX::SetSoftKBDPosition
---------------------------------------------------------------------------*/
inline
void CKorIMX::SetSoftKBDPosition(int xWnd, int yWnd )
{
DWORD dwPos;
DWORD left, top;
if ((m_pSoftKbd == NULL) || (GetTIM() == NULL))
return;
if (xWnd < 0)
left = 0;
else
left = (WORD)xWnd;
if (yWnd < 0)
top = 0;
else
top = (WORD)yWnd;
dwPos = ((DWORD)top << 16) + left;
SetCompartmentDWORD(GetTID(), GetTIM(), GUID_COMPARTMENT_SOFTKBD_WNDPOSITION,
dwPos, TRUE);
}
/////////////////////////////////////////////////////////////////////////////
// H E L P E R F U N C T I O N S
/////////////////////////////////////////////////////////////////////////////
/*---------------------------------------------------------------------------
SetSelectionBlock
Wrapper for SetSelection that takes only a single range and sets default style values.
---------------------------------------------------------------------------*/
inline
HRESULT SetSelectionBlock(TfEditCookie ec, ITfContext *pic, ITfRange *range)
{
TF_SELECTION sel;
sel.range = range;
sel.style.ase = TF_AE_NONE;
sel.style.fInterimChar = fTrue;
return pic->SetSelection(ec, 1, &sel);
}
/*---------------------------------------------------------------------------
SetThis
---------------------------------------------------------------------------*/
inline
void SetThis(HWND hWnd, LPARAM lParam)
{
SetWindowLongPtr(hWnd, GWLP_USERDATA,
(LONG_PTR)((CREATESTRUCT *)lParam)->lpCreateParams);
}
/*---------------------------------------------------------------------------
GetThis
---------------------------------------------------------------------------*/
inline
CKorIMX *GetThis(HWND hWnd)
{
CKorIMX *p = (CKorIMX *)GetWindowLongPtr(hWnd, GWLP_USERDATA);
Assert(p != NULL);
return p;
}
#endif // __KORIMX_H__INCLUDED_
MSNBC’s “Morning Joe” co-host Mika Brzezinski on Wednesday criticized Democratic presidential candidate former Secretary of State Hillary Clinton for her “inconsistent and non-existent” campaign message and for using her gender, saying in effect, “Look at the bird, I’m a woman,” rather than confronting the issues about her emails and Wall Street speeches directly.
Brzezinski said, after the topic of dissatisfied Hillary voters crossing over and voting for GOP presidential candidate Ohio Governor John Kasich came up, “And the difference between them? Kasich has a message, a clear message, not a long list, that you keep talking and talking and talking and talking to try and avoid the next question, just a clear message, a confident one.”
Mike Barnicle then added, “I think lot of the Democrats are probably looking at this morning, and they’re looking at the internals and what happened last, at this exit polls…and they’re thinking one thing, and the forensics that are being done within the Clinton campaign right now, it’s not the messaging, it’s not the targeting, it’s the candidate. And they have a real problem with the candidate. She is off message. Can she get on message? And one of the jarring things that occurred here, and it’s sort of been underreported. It’s gone under the radar, and it occurred within the last week, is the ad — the quote from Madeleine Albright and Gloria Steinem, supposedly to get women to support Hillary Clinton only because she was a woman. That does not work.”
Brzezinski further argued, “I think she has used the concept of being a woman, the groundbreaking concept of being a woman candidate, at two times so far when she should have actually attacked the issue. First of all, she’s got this FBI investigation hanging over her. She’s got the email thing hanging over her. And you can’t say ‘Look at the bird, I’m a woman.’ And Donald Trump points that out. And then she’s got these Wall Street speeches hanging over her. I hear they may come out. If they do come out, what’s in them? You can’t say ‘I’m a woman, look at the bird.’ You have to talk about what the issues are. You have to talk about what your message is. And I think the bottom line is, New Hampshire proved that her message is inconsistent and non-existent.”
(h/t Washington Free Beacon)
Follow Ian Hanchett on Twitter @IanHanchett
The ping-pong ball microphone: facilitating speech for a patient with hand burns and a tracheostomy. Patients who have sustained thermal injuries may require tracheostomies as a result of facial burns; these operations may also be required after prolonged intubation for smoke inhalation injury or respiratory failure. For a patient with a temporary tracheostomy, speech may be achieved by occluding the opening of the tracheostomy cannula with the tip of a finger, thereby directing airflow through the vocal cords and allowing phonation to be produced. However, some patients who also have hand burns may not be able to cover the opening of the tube because of the injuries to their fingers and the bulky dressings covering them. A simple tracheal occluder can be made out of a ping-pong ball and a syringe casing. The device presented in this article allows for the restoration of speech in the types of patients described above, and it promotes purposeful movement of their upper extremities.
r, c = map(int, input().split())
sheet = []
for i in range(r):
    row = list(map(int, input().split()))
    row.append(sum(row))  # append the row total as a new last column
    sheet.append(row)
# append a final row of column totals; its last cell is the grand total
sheet.append([sum(col) for col in zip(*sheet)])
for row in sheet:
    print(" ".join(map(str, row)))
// WaitForNext is a blocking function that waits for the next available time to
// arrive before returning the names to the caller.
func (s *Scheduler) WaitForNext() []string {
next := s.Next()
if next == nil {
return []string{}
}
	if time.Since(*next) > 30*time.Second {
_ = level.Warn(s.logger).Log("msg", "sending past event",
"next", next,
"since", time.Since(*next),
)
return s.NamesForTime(*next)
}
_ = level.Info(s.logger).Log("msg", "scheduler waiting",
"next", time.Until(*next),
"names", strings.Join(s.NamesForTime(*next), ","),
)
ti := time.NewTimer(time.Until(*next))
<-ti.C
return s.NamesForTime(*next)
} |
The ubiquity of computers in business, government, and private establishments has resulted in the availability of massive amounts of information from network-connected sources, such as data stores accessible through the World Wide Web, also called the Internet. The availability of and dependency on such massive amounts of information necessitate effective search techniques in order to accurately find documents containing desired information. In recent years, computer search methods and tools have become widely available. Most computer search tools depend on search engines. Search engines are software components that take as input a query from a user, conduct a search based on the query, and return search results to the user. Internet search engines may be implemented as special sites on the World Wide Web that help users find information stored on other Web sites.
Search engines index information, such as keywords, attributes, text, etc., that the search engines conclude describe the content of documents, including any locally stored documents, files, etc., and network-stored documents, i.e., Web pages. Subsequently, search queries supplied by the user to the search engine are compared against the index to direct the user to documents that likely contain information of interest to the user. As the number of search engine queries and the amount of content indexed by a search engine increase, it becomes more difficult to efficiently and accurately return the results of a search. The acceptability of the results returned by a search engine are highly dependent on the amount of information included in the returned results and how the returned results are presented to a user. Limiting search results can be as important as not missing any relevant results. For example, limiting search results to include only focused information of interest to the user or presenting the results in a way that helps the user more quickly evaluate the results can increase the quality of the results.
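The indexing scheme described above can be sketched as a minimal inverted index: each term maps to the set of documents containing it, and a conjunctive query is answered by intersecting those sets. The documents, ids, and tokenization here are illustrative assumptions, not a description of any particular engine:

```python
# Minimal inverted-index sketch: term -> set of document ids,
# conjunctive query answered by intersecting posting sets.
from collections import defaultdict


def build_index(docs):
    """docs: mapping of doc_id -> text. Returns term -> set(doc_id)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index


def match_all(index, terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    if not sets:
        return set()
    return set.intersection(*sets)


docs = {
    1: "Bitox is a new prescription medication",
    2: "Bitox sponsors a cycling team",
}
index = build_index(docs)
print(match_all(index, ["bitox", "medication"]))  # {1}
```

Real engines add stemming, stop-word handling, and ranked scoring on top of this basic lookup, but the core index-then-intersect step is the same.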
In addition to general-purpose search engines, special-purpose search engines and/or indexed information exist to serve special search needs. One example is trademark document clearance searches. Trademark document clearance searches are conducted to determine if potential trademarks (or service marks) have been used in a common law and/or a descriptive manner in documents. Trademark document clearance searches differ from general purpose searches in several respects. Trademark document clearance searches are generally conducted by searching documents to determine if they include any one of a list of potential trademarks or service marks in combination with one or more of a list of common industry terms that describe goods or services with which the potential trademark or service mark is to be used. In addition to the marks, the trademark/service mark list may include visual and/or phonetic equivalents. Various types of queries can be formed from the lists. For example, FIG. 2A illustrates a composite query 200 comprising a sequence of separate queries 202, each including a word chosen from each of the two lists—one from the list containing proposed trademarks and the other from the list containing industry terms. In this example, a user seeking to research a trademark for a new drug called “Bitox”, is searching variations on the Bitox name, namely, Pitox, Bitos, Bittox, etc., all included in the first list, in combination with applicable industry terms, namely, medication, prescription, treatment, etc., all included in the second list. A single query 210 equivalent to the trademark sequence query of FIG. 2A is depicted in FIG. 2B. FIG. 2B illustrates a single query comprising a Boolean logical combination of the two lists. The FIG. 2B query results in the same number of independent queries as the FIG. 2A query.
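The combinatorial expansion that FIGS. 2A and 2B illustrate can be sketched as follows. The word lists are illustrative, and the `AND` syntax is an assumption about the engine's query language; the point is only that the query count grows as the product of the two list sizes:

```python
# Sketch of the combinatorial expansion: every pairing of a mark variant
# with an industry term becomes one query, so the query count is
# len(marks) * len(terms).
from itertools import product


def expand_queries(marks, terms):
    """Build one conjunctive query per (mark, term) pair."""
    return [f"{m} AND {t}" for m, t in product(marks, terms)]


marks = ["Bitox", "Pitox", "Bitos", "Bittox"]
terms = ["medication", "prescription", "treatment"]
queries = expand_queries(marks, terms)
print(len(queries))  # 12 separate queries from 4 x 3 input words
print(queries[0])    # Bitox AND medication
```

With the hundreds of variants and terms described below, the same product yields tens of thousands of separate queries, which is exactly the explosion that makes these searches slow when each query is submitted independently.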
Trademark document clearance searches include hundreds of both trademark/service mark variations and industry terms that, when combined, may result in tens of thousands, if not millions, of queries. This combinatorial explosion often makes trademark document clearance searches slow and inefficient because each combination of terms is submitted to the search engine as a separate query. Another potential problem in this kind of search is that redundant search results are often returned by search engines because more than one query matched the same documents. Another potential problem is that the search results may not identify which specific query terms caused the match. Another potential shortcoming is that the distance between the two terms in each query is not provided directly to the user. Generally, the closer the two terms are together, in word distance, the more related the terms are. For example, if the term “Bitox” is one or two words away from the term “medication,” then it can be concluded with a high degree of certainty that Bitox is likely associated with a medication in the related document. Whereas, if the terms “Bitox” and “medication” are separated by several hundred words, the use of Bitox in the document is more likely unrelated to medication.
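The word-distance idea above can be sketched with a helper that reports the minimum number of word positions separating two terms in a document. Tokenizing by whitespace and stripping basic punctuation is a simplifying assumption; a production system would use a proper tokenizer:

```python
# Hedged sketch of term proximity: smallest word distance between two
# terms in a document, or None if either term is absent.
def min_word_distance(text, term_a, term_b):
    words = [w.strip(".,").lower() for w in text.split()]
    pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    if not pos_a or not pos_b:
        return None
    return min(abs(a - b) for a in pos_a for b in pos_b)


doc = "Bitox is a prescription medication for treating headaches"
print(min_word_distance(doc, "Bitox", "medication"))  # 4
```

A small distance (one or two words) suggests the mark and the industry term are used together, while a distance of several hundred words suggests an unrelated co-occurrence, matching the Bitox/medication example in the text.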
One way to increase the quality of search results and improve search efficiency is to improve the query process. |
We make that one y’know every 3.5 seconds. Impressive stuff.
Footballers often make for notoriously dull interviewees because of their tendency to repeat the same phrases over again; you can nearly guarantee, for example, that phrases like ‘obviously’ ‘the gaffer’ and ‘the most important thing was to get the three points’ will pop up nearly every time a footballer speaks in front of a camera.
Jermain Defoe reached new levels of ridiculousness in a recent interview in Canada where he was commenting on the process behind his move from Spurs to MLS outfit Toronto FC.
Practically every word, never mind the end of every sentence, was followed by the words ‘y’know’ and although Defoe does admittedly speak pretty fast, that amount of y’knows in such a short space of time would raise suspicions that he was almost doing it on purpose… y’know.
Cardiovascular effects of alpha-adrenergic agents in the dorsal raphe nucleus of the rat. Local injection of norepinephrine (NE) or phenylephrine into the lateral aspect of the dorsal raphe nucleus results in an increase in blood pressure and heart rate in the urethane-anesthetized rat. The increases in blood pressure and heart rate are blocked by prior injection of phentolamine into the dorsal raphe nucleus and are significantly reduced by i.v. mecamylamine. Selective lesion of serotonergic neurons in the dorsal raphe nucleus by 5,7-dihydroxytryptamine significantly reduces the pressor response to phenylephrine, but does not affect the increase in heart rate in response to phenylephrine. These data are consistent with a central alpha-adrenergic mechanism in the dorsal raphe that elevates blood pressure at least partly by an action on serotonergic neurons and elevates heart rate by a mechanism involving non-serotonergic neurons in the dorsal raphe area.
Comfort is king when it comes to being a big man, and I’ve recently found that comfort behind the wheel of a 2015 Chevrolet Malibu.
Driving this vehicle has been mostly a pleasure. What the Malibu lacks in off-the-line and highway lane-change power it makes up for with a quiet, smooth and comfortable ride. Yet while it’s comfortable for tall drivers, this car effectively seats three with me behind the wheel.
How a big guy fits in the Malibu
The 2015 Malibu is one of the best fits I’ve experienced lately. Its seats give support past mid-thigh. And there’s enough seat range to allow for full leg extension. A high center console provides a well-placed armrest. There’s enough lateral room for the slight bend in your legs without having to rest your shin on the console.
This combination gives me the feeling of being cocooned in the driver’s seat. Since picking it up, I’ve made several four or five hour-long trips in the Malibu. If I didn’t have the bladder of a five year-old, I don’t think I’d even want to stop and stretch my legs.
If I have any complaint, it’s that I wish the car offered me just one more inch of head room. If I were much taller than 6’7″, it might be an issue. But seat adjustments could allow you to find the right fit.
How the people in the back fit
While the ride up front is wonderful for a tall driver, it can be a bit tight behind that driver. And it’s the passenger’s feet that are a problem before their knees are. There’s no clearance under the seat and that’s a problem for adults sitting in the rear, particularly if they’re behind the driver.
My likes and gripes
Exterior
With both the Malibu and Impala, I really enjoy the way that Chevrolet has created edges in their design. There’s still an element of swoop that the Malibu shares with other sedans, but the hard lines on the hood and in the rear are a subtle nod to older designs. I also enjoy the tail lights as they have a bit of a retro look. At the end of the day, the Malibu isn’t the flashiest sedan out there. But it’s easy on the eyes.
Ride quality
What has impressed me the most about the Malibu is the ride quality. You don’t exactly expect a Malibu-level vehicle to give you the kind of ride that you’d expect from something much more costly. But the ride in the Malibu is smooth, even on the Michigan roads that nobody will pay to fix. Along with being smooth, the ride is quiet. Engine noise is minimal and only the big bumps create much noise in the cabin.
Interior
Call me juvenile, but there’s something I love about the cool blue “Tron lights,” as I call them, that line the interior when it’s dark in the cabin. The wood accents are tasteful without calling too much attention to themselves. Everything in the console is situated in a way that makes it easy to get to.
Yet not everything is so nice. The plastic interior found on the doors is unattractive, unpleasant to the touch and it seems difficult to get out smudges and other grime. Unless you upgrade to the leather steering wheel wrap, that same material is on the steering wheel as well.
Performance
My biggest gripe about the Malibu is the power. When I’m pulling out into traffic or trying to pass on the highway, I want that responsive oomph when I put the pedal down, and the 2.5L inline 4 engine doesn’t give it to me. However, there is a 2.0L turbocharged Malibu available.
The 2015 Malibu is good for a tall driver
Surprisingly, when I went solely by the numbers, the Malibu didn’t lead the way in front leg room. But the more I drive these vehicles, the more I see that it isn’t always about the measurements, but about how the cockpit itself is arranged.