package cvManager;

import static org.junit.Assert.*;

import org.junit.Test;

public class CommonAttributesTest {

    @Test
    public void testAddSection() {
        CommonAttributes commonAttributes = new CommonAttributes("");
        commonAttributes.addSection("title");
        assertEquals("Section was not added correctly", 1, commonAttributes.getSections().size());
    }

    @Test
    public void testGetSection() {
        CommonAttributes commonAttributes = new CommonAttributes("");
        commonAttributes.addSection("title");
        assertNotEquals("Section was not retrieved correctly", null, commonAttributes.getSection("title"));
        assertNotEquals("Section was not retrieved correctly", null, commonAttributes.getSection("section"));
    }

    @Test
    public void testAddParagraph() {
        CommonAttributes commonAttributes = new CommonAttributes("");
        commonAttributes.addParagraph("paragraph", "title");
        Section section = commonAttributes.getSection("title");
        assertEquals("Paragraph was not added correctly", "paragraph", section.getParagraph().getContents());
    }

    @Test
    public void testAddBulletListItem() {
        CommonAttributes commonAttributes = new CommonAttributes("");
        commonAttributes.addBulletListItem("list item", "2015-01-01", "title");
        Section section = commonAttributes.getSection("title");
        assertEquals("Bullet list item was not added correctly", 1, section.getBulletList().getBulletListItem().size());
        assertEquals("Bullet list item was not added correctly", "list item", section.getBulletList().getBulletListItem().get(0).getContents());
    }

    @Test
    public void testGetParagraph() {
        CommonAttributes commonAttributes = new CommonAttributes("");
        commonAttributes.addParagraph("paragraph", "title");
        assertEquals("Paragraph was not retrieved correctly", "paragraph", commonAttributes.getParagraph("title"));
    }
}
Hardcore military shooters aren’t really my thing. I enjoy a challenge in games, but I don’t get the love for spending an hour crawling through the hills looking for an enemy sniper, only to get shot and end up doing it all over again. That isn’t something I find enjoyable, and it’s the main reason I was skeptical when I heard all the hype surrounding the Arma II mod, DayZ. Despite my disdain for the style of play, I found myself engrossed in the nothingness surrounding me. Just hearing about DayZ isn’t going to get you excited for it. In fact, it’ll probably turn you off. It doesn’t sound fun, and there’s really nothing exciting you can gather from the description. It just sounds like another hardcore PC mod that only hardcore PC players can enjoy. As it turns out, that couldn’t be further from the truth. After spawning on a beach, I tried to get my footing. The controls are weird. Well, they’re different from most first-person shooters, but that has more to do with Arma II than it does with DayZ. Once I was able to figure out the basics of movement, I looked around. There wasn’t much there besides a hill, the ocean, and a lighthouse. I started to make my way toward the lighthouse, but I caught the attention of a passing zombie and had to start running away. It was at this point that I noticed an eye and an ear icon in the upper right corner of the screen, both resting at full bars. I could only assume that I was completely visible and could be heard from quite a distance. I turned around to see if the zombie had stopped following me, but as soon as I stopped, I was struck by three different zombies. Blood squirted from my body in comic fashion. There was nothing I could do to fight back. I didn’t have any weapons; I had only spawned with a flashlight, some bandages, and a small backpack to hold it all in. I was hopeless. My screen faded and a green hourglass appeared on screen, indicating that I had passed out.
While I lay on the ground, passed out from blood loss, another player came along and shot all the zombies. It surprised me, as I was told that other players don’t really help you out that often. It’s a kill-or-be-killed situation, and most players will just shoot on sight (though that isn’t always the case, which makes for some really cool videos, like one where players commit a highway robbery). Unfortunately, he ran away toward the nearest town while I was still out. I’m not sure why he bothered helping me at all, but then again, I’m not sure about most things in this game. When I finally came to, I noticed that the blood number in the upper right was dropping significantly and that I needed to do something about it quickly before I died. I opened up my backpack and grabbed the bandages, hoping they would stop the blood loss. They did, just shy of 6,000 blood (whatever that means as a metric). I thought that maybe I had gotten lucky and could actually work on finding a weapon of some kind. However, when I tried to move, my character would only move in the prone position. I stood him back up and tried again, but he would always revert to prone before beginning to move. I couldn’t figure out why, until I saw the snapped-bone icon on the right side of the screen. It wasn’t good. One of the zombies had broken my leg, and I couldn’t walk while it was broken. I tried more bandages, but nothing helped. I pulled up Steam chat and messaged a friend, hoping he could give me some clue about how to get walking again. His response gave me a deeper understanding of just how hardcore DayZ is. “You’re going to need some morphine,” he said. “If you don’t have that, you might as well just kill yourself.” The logistics of that were mind-boggling. I couldn’t believe that something like that could happen and that the only solution would be to crawl around, hoping to avoid zombies and find some morphine.
I couldn’t rely on a fellow player to give me any; why would they waste it on me when they could just kill me and take everything anyway? It was brutal and devastating that there was nothing for me to do but restart. Restarting would put me in a completely new area of the map, an area I would still have no knowledge of in relation to anything else in the world. Where would I find weapons? Would I run into other players on my path? Could I manage to sneak away from zombies until I was able to defend myself? As these questions ran through my mind, I realized why DayZ has been able to reach the audience it has. It’s thought-provoking. It’s real. It isn’t a shooter at all; it’s a survival game. And not in the “it’s a survival game, but here’s a gun five minutes into the game” sense. It truly is about survival on a deeper level. There’s nothing easy about it. When you die, you lose everything: things you’ve spent dozens of hours scouring for, ripped from you in a split second. DayZ is unlike anything I’ve ever played before, and it isn’t even a real game. It’s a mod for an expansion pack that has managed to bring a two-year-old shooter into the top ten games on Steam. DayZ is the reason we play games: we want meaningful experiences, and that’s exactly what this is.
Leveraging attention-based visual clue extraction for image classification Deep learning-based approaches have made considerable progress in image classification tasks, but most of the approaches lack interpretability, especially in revealing the decisive information causing the categorization of images. This paper seeks to answer the question of what clues encode the discriminative visual information between image categories and can help improve the classification performance. To this end, an attention-based clue extraction network (ACENet) is introduced to mine the decisive local visual information for image classification. ACENet constructs a clue-attention mechanism, that is, global-local attention, between the image and the visual clue proposals extracted from it, and then introduces a contrastive loss defined over the achieved discrete attention distribution to increase the discriminability of clue proposals. The loss encourages considerable attention to be devoted to discriminative clue proposals, that is, those similar within the same category and dissimilar across categories. The experimental results for the Negative Web Image (NWI) dataset and the public ImageNet2012 dataset demonstrate that ACENet can extract true clues to improve the image classification performance and outperforms the baselines and the state-of-the-art methods. INTRODUCTION Image classification aims to categorize a set of unlabelled images into several predefined classes based on their visual content. Image classification has become a critical task in multiple related areas, such as object detection/recognition, visual concept learning and visual knowledge learning. Image classification has made considerable progress in achieving high performance in recent years. However, some important issues remain for the task, such as the interpretability of deep neural network-based approaches.
The progress in image classification can be partitioned into two stages: human-designed feature-based methods and deep learning-based methods. Regarding the former, people design the features in advance in terms of colour, texture and gradient and apply classifiers to these extracted features. Some representative human-designed features, such as the histogram of oriented gradients (HOG), local binary pattern (LBP), the scale invariant feature transform (SIFT) and speeded up robust feature (SURF), have been proposed. In general, these human-designed features have good interpretability but may have limitations in achieving high image classification accuracy, especially for complicated data. Over recent years, convolutional neural networks (CNNs) have achieved great success in a variety of visual tasks including image classification. CNNs consist of a stack of convolutional layers interleaved with non-linearities and downsampling, and are capable of capturing hierarchical patterns with global receptive fields as powerful image descriptions. Since the introduction of AlexNet in 2012, a variety of new CNN architectures, including visual geometry group network (VGGNet), Inception net, residual network (ResNet), dense convolutional network (DenseNet) and NASNet, have been proposed. Besides the design of CNN architectures, a number of works have sought to introduce new modules into networks to meet particular requirements. The attention mechanism, which adaptively learns the effect of elements of the input on the output, is an important module for deep learning applications, such as image classification and image captioning. Wang et al. proposed a residual attention network for image classification, in which spatial attention and channel attention on the feature maps are introduced to enhance meaningful contents and suppress noise. IET Image Process. 2021; 1-11.
In general, most previous approaches solve the image classification problem based on the global visual context or important local visual information. Actually, there are three types of visual information causing classification results: the global content of images, widely used in a number of approaches; the discriminative local visual clues appearing in a category while not existing in the others, such as some important objects; and the common visual information that appears across different categories and is indiscriminative, such as some backgrounds. Figure 1 shows an example illustrating the effect of the last two types of visual information on image classification. In this figure, we show a few region proposals representing different contents of the images from two categories, that is, 'sea lion' and 'red wolf'. In the same category, there are region proposals (marked by red bounding boxes) corresponding to the key objects with the same semantics and reflecting the category information, while there are also some other region proposals (marked by yellow and blue bounding boxes) with different semantics that are irrelevant to the category across different images. In addition, we find that these region proposals irrelevant to the category may appear in the images from different categories. Hence, a well-formed image classification approach needs to capture and enhance the discriminative features effectively while suppressing the common visual information across different categories. This paper proposes a novel approach called the attention-based clue extraction network (ACENet) for learning visual clues in image classification and seeks to answer the question of what clues encode the discriminative visual information between image categories and can help improve classification performance. First, we utilize Faster R-CNN and selective search to obtain the visual clue proposals that consist of objects and backgrounds.
Second, we present a clue-attention mechanism, called global-local attention, between images and visual clue proposals to adaptively measure their correlation. To extract the true clues from the proposals and reinforce their discriminability, we introduce a contrastive loss defined over the attention distribution, which encourages those clue proposals that are similar within the same category while being dissimilar across different categories to be given considerable attention. Finally, the image classification is performed based on the combination of the representations of images and the clues. Experiments conducted on ImageNet2012 and our NWI dataset demonstrate that our approach is effective in extracting the true visual clues and improving the image classification performance. In summary, our contributions are three-fold: 1. We present the ACENet approach to improve image classification performance by leveraging local visual clues and seek to answer the question of what clues encode the discriminative visual information between image categories. 2. We propose the global-local attention mechanism, which adaptively measures the correlation between visual clue proposals and images in both training and testing and permits the clue proposals to make varying contributions to image classification. 3. A contrastive loss is introduced to reinforce the discriminability of visual clue proposals and sparsify the attention distribution with consideration of cross-sample information for the extraction of true clues. The remainder of this paper is organized as follows. Section 2 presents a brief overview of related work. Section 3 describes the details of our proposed method, Section 4 reports the experimental results for the two datasets, and Section 5 concludes the paper. RELATED WORK At present, CNNs have been proven to be the most effective model for solving the image classification problem.
Thus, we review the CNN-based methods for image classification in this section. CNN architectures. Since the introduction of AlexNet in 2012, a variety of representative CNN architectures, including VGGNet, Inception net, ResNet, GoogLeNet, and DenseNet, have been proposed. These variants seek to explore a better model mainly from the following three aspects: different network structures, deeper networks, and high efficiency. For example, Huang et al. designed a new network structure named DenseNet for image classification, which connects each layer to every other layer in a feed-forward fashion and is different from traditional convolutional networks that generally connect each layer only to its subsequent layer. Regarding the depth of networks, VGGNet and GoogLeNet demonstrated that a deeper network would extract higher-quality image features. He et al. presented ResNet with a depth of up to 152 layers, a substantially deeper network than those proposed previously, and demonstrated its powerful image classification performance on ImageNet. Convolutional networks are computationally expensive as they become increasingly deeper and wider. With the growing complexity of networks, the issue of how network architectures can be made more efficient has received considerable attention. In particular, group convolutions have been proven to be a popular method to decrease the number of model parameters and reduce the computational complexity. MobileNet and ShuffleNet, which can be viewed as extensions of the grouping operator, constructed efficient network architectures based on the combination of depthwise convolution and pointwise convolution with 1 × 1 convolutional filters. Tzelepis et al. introduced a generic model compression method with minimal impact on accuracy while reducing the inference time and memory footprint of a network. Attention mechanism.
Regarding the progress of deep learning for image classification in terms of the different network structures introduced above, the introduction of an attention mechanism module is a significant way to improve classification performance and interpretability. In general, attention can be interpreted as a means of biasing the allocation of available computational resources towards the most informative components of a signal. In recent years, spatial attention and channel attention have been widely used for CNNs, especially in visual tasks. Woo et al. proposed a convolutional block attention module (CBAM) for CNNs that combined channel attention and spatial attention for adaptive feature refinement. Hu et al. proposed a network called the squeeze-and-excitation network (SENet) and introduced an architecture unit named the 'squeeze-and-excitation' block (SE block) to enhance the representational power of deep networks by explicitly modelling the interdependencies between the channels of the convolutional features. The SE block can be regarded as a self-attention function on channels. Attention mechanisms have been widely applied to image classification and recognition tasks. Jaderberg et al. introduced a new learnable module called the spatial transformer, which allows networks to select the visual regions of an image that are most relevant. Zheng et al. proposed a multi-attention convolutional neural network to learn more discriminative fine-grained features in the fine-grained image classification task. By contrast, our proposed ACENet seeks to find those visual regions from images that encode the most discriminative visual information for image classification with the attention mechanism and enhance the classification performance. Image classification applications. From the view of applications, image classification has been used in various areas, such as retrieval, surveillance/monitoring and medical analysis.
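The channel reweighting performed by the SE block described above can be sketched in a few lines of numpy. This is a minimal illustration with an assumed reduction ratio and random weights, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excitation: a two-layer bottleneck producing per-channel weights in (0, 1),
    which then rescale the channels (a self-attention function on channels).
    """
    squeezed = feature_map.mean(axis=(1, 2))   # (C,)   global average pooling
    hidden = np.maximum(0.0, w1 @ squeezed)    # (C/r,) ReLU bottleneck
    weights = sigmoid(w2 @ hidden)             # (C,)   per-channel gates in (0, 1)
    return feature_map * weights[:, None, None]

rng = np.random.default_rng(0)
c, r = 8, 2                                    # channels and reduction ratio (assumed)
fmap = rng.standard_normal((c, 4, 4))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
out = se_block(fmap, w1, w2)
```

Because the gates lie in (0, 1), each channel of `out` is a damped copy of the corresponding input channel; informative channels receive gates near 1 and are preserved.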
In these areas, researchers focus on natural images and specific-domain images related to traffic, remote sensing, industry, medical imaging and so forth. A variety of issues have been explored in these image classification applications. For example, as an extension of traditional image classification, fine-grained image classification aims to recognize subcategories under some basic-level categories, where the objects of different subcategories are both semantically and visually similar to each other. Image classification has also been used in visual concept learning to mine a large range of concepts from the vast resources of online data. To accurately implement object detection, which is widely employed in surveillance applications, He et al. presented a novel algorithm called the Mask R-CNN that can generate a high-quality segmentation mask and label for an object. Based on the Mask R-CNN, Wu et al. introduced an improved Mask R-CNN to mitigate the performance deterioration when samples are reduced for object detection. Framework In the task of C-class image classification, the data are given as images x_i together with the clue proposals extracted from x_i, where each proposal p_i^m is represented by a feature vector p_i^m ∈ ℝ^l. Figure 2 illustrates the framework of the proposed ACENet, which includes four main processes: clue proposal extraction, visual representation, the clue-attention network and classification. ACENet first extracts multiple visual regions from an image and takes them as the clue proposals. The convolutional neural network is then employed to obtain the representation of both the original images and clue proposals. To learn the clues that have important effects on recognizing the categories of images, an attention network is introduced to learn the distribution of attention over the clue proposals and highlight those clue proposals that help to improve the classification with the contrastive loss function.
Finally, the classification is performed based on the concatenation of the features derived from the image and clue proposals. Clue proposal extraction In this work, the clue proposals are considered typical local regions in images that have fine and rich visual information, such as important objects and background. We extract the clue proposals based on the cooperation of Faster R-CNN and selective search. First, we implement the Faster R-CNN algorithm to produce M_r object region proposals {(p_{r,i}, c_{r,i})}, where p_{r,i} describes the ith proposal with its coordinates and c_{r,i} denotes the confidence (predicted probability) of this proposal containing an object. In this work, the Faster R-CNN is pre-trained on the MSCOCO dataset and does not undergo fine-tuning. Although the datasets used for image classification in this work contain a proportion of categories that do not appear in MSCOCO, the extracted regions still cover many meaningful objects that can be employed as clue proposals. In the implementation, we empirically choose the 5 proposals with the highest probability from the Faster R-CNN. Second, we utilize a selective search algorithm, which performs the computation in terms of texture, colour, size and overlap, to generate a complementary set of region proposals. Attention-based visual clue extraction The determination of visual clues for image classification depends on two factors: a specific image and its category, and the discriminability of clue proposals. For the first factor, we present a global-local attention mechanism between an image and the clue proposals extracted from it. Since the category label is unknown for a test image, we do not actually build an attention mechanism between the clue proposals and class labels. Regarding the discriminability, we consider that the true clues for a specific image should be similar to the clues in the images of the same category and dissimilar to the images from the other categories.
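The proposal selection step described above, keeping the detections with the highest predicted probability of containing an object, can be sketched as follows. This is a toy illustration; the boxes and confidences are made-up values, not actual Faster R-CNN output:

```python
def top_k_proposals(proposals, k=5):
    """Keep the k region proposals with the highest objectness confidence.

    Each proposal is (box, confidence), where box = (x1, y1, x2, y2) and
    confidence is the detector's predicted probability that the region
    contains an object.
    """
    return sorted(proposals, key=lambda p: p[1], reverse=True)[:k]

# Made-up detections standing in for Faster R-CNN output on one image.
detections = [
    ((10, 10, 50, 50), 0.91),
    ((5, 20, 40, 80), 0.35),
    ((60, 15, 90, 70), 0.78),
    ((0, 0, 30, 30), 0.12),
    ((20, 40, 70, 95), 0.66),
    ((15, 5, 45, 55), 0.83),
]
kept = top_k_proposals(detections, k=5)
```

In the paper's pipeline, the selective-search proposals would then be merged with `kept` to form the full set of clue proposals for the image.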
First, we introduce the global-local attention mechanism between the original image and its clue proposals to generate a discrete attention distribution over the clue proposals. Generally, the true clues should be given more attention and lead to a high probability at the corresponding entries of the attention distribution. In general, an attention mechanism can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. The general attention mechanism can be formulated accordingly as follows:

Attention(Q, K, V) = softmax(sim(Q, K)) V,

where Q, K, V and Attention(⋅, ⋅, ⋅) refer to the query, key, value and output, respectively; and sim(⋅, ⋅) denotes a certain function that measures the correlation of queries and keys. In this work, we have the representations of the images and the clue proposals, which correspond to the query and key (here, the key and value are the same), respectively; and our clue-attention model can be specified as follows:

a_i^m = w_a^T tanh(W_X x_i + W_P p_i^m),    α_i = softmax(a_i),

where a_i^m, corresponding to the function sim(⋅, ⋅) above, encodes the correlation of x_i and p_i^m; α_i denotes the discrete attention distribution over all clue proposals for image x_i; W_X, W_P ∈ ℝ^{d×l} are the transformation matrices that map the visual representations into low-dimensional spaces; w_a ∈ ℝ^d; and a_i = (a_i^m)^T. The clue context vector associated with the clue proposals is computed by:

p̃_i = Σ_m α_i^m p_i^m,

where α_i^m is the mth entry of α_i, and p̃_i, corresponding to Attention(⋅, ⋅, ⋅) above, denotes the output of the global-local attention. The final representation of image x_i for image classification is achieved by the concatenation of x_i and p̃_i. The global-local attention mechanism formulates the relationship between the global information (i.e. images) and the local information (i.e. clue proposals) extracted from it.
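The global-local attention above can be sketched in numpy. The additive tanh form of the scoring function follows the shapes of w_a, W_X and W_P given in the text, but the exact instantiation is an assumption for illustration; dimensions and weights here are random stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def global_local_attention(x, P, W_X, W_P, w_a):
    """Global-local attention between an image feature x and its clue proposals.

    x: (l,) global image feature (the query).
    P: (M, l) clue-proposal features (the keys, which also serve as values).
    Returns the attention distribution alpha (M,) and the clue context
    vector p_tilde = sum_m alpha_m * p_m.
    """
    # Additive scoring: a^m = w_a . tanh(W_X x + W_P p^m)  (assumed form)
    scores = np.array([w_a @ np.tanh(W_X @ x + W_P @ p) for p in P])
    alpha = softmax(scores)          # discrete attention distribution over proposals
    p_tilde = alpha @ P              # weighted sum of the values
    return alpha, p_tilde

rng = np.random.default_rng(1)
l, d, M = 16, 8, 10                  # feature dim, attention dim, proposals (arbitrary)
x = rng.standard_normal(l)
P = rng.standard_normal((M, l))
W_X, W_P = rng.standard_normal((d, l)), rng.standard_normal((d, l))
w_a = rng.standard_normal(d)
alpha, p_tilde = global_local_attention(x, P, W_X, W_P, w_a)
```

The final classifier input would then be the concatenation of `x` and `p_tilde`, as described above.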
We consider that global-local attention is different from the following widely used attention mechanisms: self-attention, which describes the importance of elements in images or sentences by measuring the relationship of these elements, and encoder-decoder attention, which generally builds the relationship between the elements in outputs and inputs and has been widely used in language translation and image captioning. Encoder-decoder attention can be considered the alignment of outputs and inputs. Second, we introduce a contrastive loss defined over the clue-attention distribution, which encourages considerable attention to be paid to the true visual clues during the training of models. According to the aforementioned analysis of the discriminability of clue proposals, we should pay considerable attention to the similar clue proposals in images belonging to the same category and little attention to the similar clue proposals across different categories. Thus, the contrastive loss is defined in the following form:

L_ctr = −(1/T_p) Σ_{(i_1, i_2)} α_{i_1}^T W_{i_1 i_2} α_{i_2} + (1/T_n) Σ_{(j_1, j_2)} α_{j_1}^T W_{j_1 j_2} α_{j_2},

where i_1 and i_2 refer to two images belonging to the same category; j_1 and j_2 refer to two images from different categories; T_p and T_n denote the numbers of image pairs from the same category and from different categories, respectively; and W_{i_1 i_2} (W_{j_1 j_2}) is the similarity matrix of clue proposals from the i_1th (j_1th) image and the i_2th (j_2th) image. We define the (m_1, m_2)-entry of matrix W_{i_1 i_2} as follows:

W_{i_1 i_2}^{m_1 m_2} = (p_{i_1}^{m_1})^T p_{i_2}^{m_2} / (‖p_{i_1}^{m_1}‖_2 ‖p_{i_2}^{m_2}‖_2),

where ‖⋅‖_2 denotes the L2-norm. W_{j_1 j_2}^{m_1 m_2} can be calculated in the same way. Clearly, when two clue proposals from images of different categories are visually similar, the corresponding entry of W_{j_1 j_2} is also large. In this case, little attention will be given to them when we minimize the second term of the loss, and thus those visually different clue proposals are more likely to attract considerable attention. Consequently, minimizing the contrastive loss encourages attention to be paid to the true clues.
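A minimal sketch of such a contrastive term, assuming cosine similarity for the entries of W and a bilinear form over the attention distributions (an illustrative reading of the loss, not the authors' exact implementation):

```python
import numpy as np

def cosine_similarity_matrix(P1, P2):
    """W[m1, m2] = cosine similarity of proposal features P1[m1] and P2[m2]."""
    n1 = P1 / np.linalg.norm(P1, axis=1, keepdims=True)
    n2 = P2 / np.linalg.norm(P2, axis=1, keepdims=True)
    return n1 @ n2.T

def contrastive_loss(pos_pairs, neg_pairs):
    """Bilinear contrastive term over attention distributions (a sketch).

    pos_pairs / neg_pairs: lists of (alpha1, P1, alpha2, P2) tuples for image
    pairs of the same / different categories. Minimizing the loss pushes
    attention toward proposals similar within a category (first term) and
    away from proposals similar across categories (second term).
    """
    pos = sum(a1 @ cosine_similarity_matrix(P1, P2) @ a2
              for a1, P1, a2, P2 in pos_pairs) / max(len(pos_pairs), 1)
    neg = sum(a1 @ cosine_similarity_matrix(P1, P2) @ a2
              for a1, P1, a2, P2 in neg_pairs) / max(len(neg_pairs), 1)
    return -pos + neg

def _rand_pair(rng, M=4, l=6):
    """A random (alpha1, P1, alpha2, P2) pair standing in for two images."""
    a1, a2 = rng.random(M), rng.random(M)
    return a1 / a1.sum(), rng.standard_normal((M, l)), a2 / a2.sum(), rng.standard_normal((M, l))

rng = np.random.default_rng(2)
loss = contrastive_loss([_rand_pair(rng)], [_rand_pair(rng)])
```

Since cosine similarities lie in [-1, 1] and each attention vector sums to 1, each bilinear term is bounded by 1 in magnitude, so the loss is bounded as well.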
The final loss function is given by combining the cross-entropy loss and the contrastive loss:

L = L_cls + λ L_ctr,

where λ is a balance parameter. L_cls denotes the cross-entropy loss, defined in the following form:

L_cls = −Σ_i y_i^T log ŷ_i,

where y_i and ŷ_i indicate the true label vector and the predicted label vector, respectively. In essence, the contrastive loss can be considered a regularizer in the regularization framework, or prior knowledge from the viewpoint of Bayesian learning. EXPERIMENTAL RESULTS AND ANALYSIS This section introduces the experimental details, including the datasets, implementation details and main results. In addition, we compare our approach with multiple representative models for image classification. Datasets We conduct the experiments and test the performance of ACENet on a small dataset we built and the public ImageNet2012 dataset. The small dataset, called the negative web image (NWI) dataset, is crawled from social media websites and consists of data that are undesirable and sometimes harmful to young people. The NWI dataset has a total of 10,500 images and comprises approximately equal numbers of images across 6 negative categories, including pornography, luxury watches, luxury cars, luxury bags, cash, and jewellery, and a category of natural images. In the experiment, the dataset is evenly divided into two parts for training and testing. ImageNet2012 comprises 1.28 million training images and 50K validation images from 1000 categories, and has been widely used in the image classification task. We train the networks on the training set and report the performance on the validation set for ImageNet2012. In the training, we follow standard practices and perform data augmentation with random cropping using scale and aspect ratio to a size of 224 × 224 pixels (or 299 × 299 for the models associated with Inception-ResNet-v2) and perform random horizontal flipping.
When evaluating the models, we apply centre-cropping so that 224 × 224 pixels are cropped from each image after its shorter edge is first resized to 256 (299 × 299 from each image whose shorter edge is first resized to 352 for the models associated with Inception-ResNet-v2). Implementation details For each image, we extract 10 visual regions as clue proposals, that is, M_p = 10. We employ VGG-16, ResNet-50, ResNet-101, Inception-ResNet-v2 and EfficientNet-B7 as backbones for classification on ImageNet and NWI. In the training process, we use the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 and a minibatch size of 256 for NWI and 1024 for ImageNet. To reduce the computational load, we randomly choose a fixed number of image pairs in each minibatch when computing the contrastive loss defined above. Empirically, we choose T_p = T_n = 4096 for ImageNet and 1024 for NWI. If the number of image pairs belonging to the same category or different categories in a minibatch is less than the fixed number, we choose all the image pairs for the computation of the contrastive loss. All the experiments are conducted on a platform with 8 Nvidia Titan V GPUs using PyTorch. 4.3.1 Experimental results on the NWI dataset Table 1 shows the classification results of our approach and the compared models for the NWI dataset. From this table, we observe that most of the methods employed here achieve high performance for the small dataset. Compared with the baselines and the state-of-the-art methods, our ACENet achieves the lowest error rates. For example, ACENet obtains a 2.16% top-1 error when using Inception-ResNet-v2 as the backbone, exceeding SENet (2.91%) by 0.75%. ACENet also works well for other backbones: it decreases the top-1 error by 1.48%, 1.22% and 0.88% compared with SENet when using the basic VGG-16, ResNet-50 and ResNet-101 models, respectively.
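The evaluation-time preprocessing described earlier, resizing the shorter edge and then centre-cropping, can be sketched as follows (a simplified numpy version; the actual pipeline would use the torchvision transforms):

```python
import numpy as np

def resize_shorter_edge(h, w, target):
    """Return the (h, w) an image would have after scaling so its shorter
    edge equals `target`, preserving the aspect ratio."""
    if h < w:
        return target, int(round(w * target / h))
    return int(round(h * target / w)), target

def center_crop(image, size):
    """Crop a (H, W, C) array to a centred size x size window."""
    h, w = image.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return image[top:top + size, left:left + size]

# e.g. a 480 x 640 image: shorter edge -> 256 gives 256 x 341,
# from which a 224 x 224 centre crop is taken.
h, w = resize_shorter_edge(480, 640, 256)
img = np.zeros((h, w, 3))
cropped = center_crop(img, 224)
```

For the Inception-ResNet-v2 models the same logic applies with a shorter-edge target of 352 and a 299 × 299 crop.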
Compared with BoTNet-S1-128, ACENet with the EfficientNet-B7 backbone decreases the top-1 error by 0.22%. Figure 3 illustrates examples of visual clue extraction for the 6 negative categories, where the visual clue proposals with the most attention (i.e. corresponding to the largest α_i^m for the ith image) are marked by red rectangles. For the NWI dataset, we observe that the category of an image is dominantly determined by the important visual objects in it, and thus it is effective to improve the classification performance using visual clue extraction. 4.3.2 Experimental results on the ImageNet dataset Figure 4 shows the change in the top-1 accuracy (i.e. 1 − top-1 error) with the balance parameter λ for ACENet on the validation set of ImageNet2012. It is observed that the performance reaches a peak at λ = 10^-4 and then begins to degrade. For all the experiments in this work, we choose λ = 10^-4. In this case, we find that L_ctr/L_cls ≈ 0.12 when the training stops, which can be empirically considered reasonable. We report the classification results of ACENet, the baselines and state-of-the-art models for ImageNet in Table 1. From the table, we observe that our ACENet model largely decreases the error rates relative to the baselines, e.g. the ResNet-50 baseline (44.7M parameters). As shown in Table 1, we observe that ACENet with the backbone EfficientNet-B7 achieves a decrease of 0.83% in terms of the top-1 error rate compared with BoTNet-S1-128 although it uses fewer parameters. In Table 2, we show the results of the ablation study for three configurations: removing the clue-attention network and contrastive loss (i.e. performing classification based on the combination of original images and clue proposals using backbones), keeping only the clue-attention network, and keeping both the clue-attention network and contrastive loss (i.e. the proposed ACENet). The three configurations correspond to the three rows for each backbone in the table.
Compared with the baselines in Table 1 that employ only the global image information, we observe that the introduction of clue proposals to the classification (i.e. configuration 1) can enhance the performance. From Table 2, we find that the clue-attention network and contrastive loss can improve the image classification performance. For example, for the backbone ResNet-101, the top-1 error decreases from 23.27% to 22.58% when introducing the clue-attention network only and reaches 22.09% if we further consider the contrastive loss term. The results show that both clue attention and the contrastive loss play important roles in improving the classification performance by strengthening discriminative features and suppressing common visual information across the different categories. In Figure 5, we show the change in the top-1 error rates in the training and testing of our approach and the ResNet-50 and ResNet-101 baselines. From the figure, we can observe that ACENet achieves lower error rates in the training and testing processes than the baselines. Figure 6 illustrates the change in the clue-attention distribution during training for an image of the 'toy terrier' category, where cp1 to cp10 denote the 10 clue proposals extracted from the original image; the height of a bar reflects the degree of attention given to a clue proposal in an epoch. We observe that, as the training epochs increase, the attention distribution changes and converges to a state in which the clue proposals that look more discriminative are given more attention, e.g. 'cp10' in this figure. Clearly, this clue proposal seems to possess good discriminability to classify the image into the 'toy terrier' category. In Figure 7, we show an example illustration of extracting clues for several categories in the image classification task.
The red, green and blue bounding boxes shown in each image represent the top three clue proposals in descending order of attention. From the figure, we observe that these clue proposals, especially the red ones, possess good discriminability for determining the category to which an image should belong. For example, as shown in the second image of the category 'red wolf', the top clue proposal (red) accurately corresponds to the region of a red wolf and excludes the background and other objects independent of this category.

CONCLUSION

This paper proposes an attention-based visual clue extraction approach that seeks to determine which clues encode the category information and can help improve the image classification performance. In this approach, we first construct a clue-attention mechanism, called global-local attention, between an image and its clue proposals, which permits the clue proposals to make varying contributions to image classification. Then, we introduce a contrastive loss in the training to encourage considerable attention to be devoted to true visual clues that possess discriminative information. The experimental results show that the proposed approach can effectively extract the visual clues that reflect the category information, and can improve the classification performance compared with the baselines and state-of-the-art methods. Visual clue extraction is an important issue in image classification that can make the results more interpretable, and the approach can thus be used in scenarios that require high interpretability, such as medical image classification and negative web image recognition. In addition, the current work in this paper is performed on the standard NWI and ImageNet datasets. In practice, there is a considerable amount of data with noisy labels or without labels on the Internet that could be used to enhance the image classification performance.
In future work, we will address incorporating these non-standard data into ACENet under weakly-supervised or semi-supervised learning paradigms for online intelligence applications.
# GENERATED BY KOMAND SDK - DO NOT EDIT from .delete.action import Delete from .get.action import Get from .patch.action import Patch from .post.action import Post from .put.action import Put
/**
 * Performs a simple check of whether the triple <station> a station:WeatherStation exists
 * @param station IRI of the station to look up
 * @return true if the given IRI is typed as a station:WeatherStation
 */
private boolean stationExists(String station) {
	boolean exist = false;
	SelectQuery query = Queries.SELECT();
	Variable var = query.var();
	query.select(var).where(var.isA(iri(WeatherQueryClient.ontostation+"WeatherStation")));
	JSONArray queryResult = storeClient.executeQuery(query.getQueryString());
	// check every returned station IRI, not just the first result,
	// so the check also works when the store contains several stations
	for (int i = 0; i < queryResult.length(); i++) {
		String queriedIRI = queryResult.getJSONObject(i).getString(var.getQueryString().substring(1));
		if (queriedIRI.equals(station)) {
			exist = true;
			break;
		}
	}
	return exist;
}
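The same existence check maps directly onto SPARQL's ASK query form, which returns a boolean instead of a result set. Below is a hypothetical sketch in Python that only builds the query string; the function name and the `ontostation` parameter (standing in for `WeatherQueryClient.ontostation`) are assumptions for illustration, not part of the original codebase.

```python
def station_exists_query(station_iri, ontostation):
    # Build a SPARQL ASK query that evaluates to true exactly when the
    # triple <station> a station:WeatherStation is present in the store.
    # 'ontostation' is the ontology namespace prefix (an assumed argument).
    return "ASK { <%s> a <%sWeatherStation> }" % (station_iri, ontostation)
```

An ASK query pushes the membership test into the triple store itself, so no result rows need to be transferred or iterated on the client side.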
Computer-aided design of window functions for FIR filters
Linear programming is utilized to determine optimum window coefficients, which depend not only on the length of the filter but also on the specifications of the desired filter. With an appropriate choice of the prototype filter used to derive the initial infinite impulse response, it is demonstrated experimentally that the method can achieve design specifications with a shorter length than can be obtained using the Kaiser window method. It is also shown that the performance of the proposed method is the same as that of the Parks-McClellan algorithm.
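For context on the Kaiser window baseline the abstract compares against, here is a minimal pure-Python sketch of the classical windowed-sinc design: an ideal lowpass impulse response multiplied by a Kaiser window (with the modified Bessel function I0 computed by its power series). The filter length, cutoff, and beta values below are illustrative assumptions, not taken from the paper.

```python
import math

def i0(x, terms=25):
    # modified Bessel function of the first kind, order 0, via power series
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / (2.0 * k)) ** 2
        s += t
    return s

def kaiser_window(n, beta):
    # symmetric Kaiser window of length n (n > 1)
    return [i0(beta * math.sqrt(1.0 - (2.0 * k / (n - 1) - 1.0) ** 2)) / i0(beta)
            for k in range(n)]

def kaiser_lowpass(n, cutoff, beta):
    # windowed-sinc lowpass FIR taps; cutoff is normalized to (0, 0.5)
    m = (n - 1) / 2.0
    w = kaiser_window(n, beta)
    h = []
    for k in range(n):
        t = k - m
        sinc = 2.0 * cutoff if t == 0 else math.sin(2.0 * math.pi * cutoff * t) / (math.pi * t)
        h.append(sinc * w[k])
    g = sum(h)                  # normalize so the DC gain is 1
    return [c / g for c in h]
```

The point of the paper is that these window coefficients, normally fixed by the closed-form Kaiser formula, can instead be optimized by linear programming for each specific filter specification, meeting the same specification with fewer taps.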
/*
 * Copyright 2018 FZI Forschungszentrum Informatik
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.streampipes.sinks.databases.jvm.postgresql;

import org.streampipes.model.DataSinkType;
import org.streampipes.model.graph.DataSinkDescription;
import org.streampipes.model.graph.DataSinkInvocation;
import org.streampipes.sdk.builder.DataSinkBuilder;
import org.streampipes.sdk.builder.StreamRequirementsBuilder;
import org.streampipes.sdk.extractor.DataSinkParameterExtractor;
import org.streampipes.sdk.helpers.EpRequirements;
import org.streampipes.sdk.helpers.Labels;
import org.streampipes.sdk.helpers.Locales;
import org.streampipes.sdk.utils.Assets;
import org.streampipes.wrapper.standalone.ConfiguredEventSink;
import org.streampipes.wrapper.standalone.declarer.StandaloneEventSinkDeclarer;

public class PostgreSqlController extends StandaloneEventSinkDeclarer<PostgreSqlParameters> {

	private static final String DATABASE_HOST_KEY = "db_host";
	private static final String DATABASE_PORT_KEY = "db_port";
	private static final String DATABASE_NAME_KEY = "db_name";
	private static final String DATABASE_TABLE_KEY = "db_table";
	private static final String DATABASE_USER_KEY = "db_user";
	private static final String DATABASE_PASSWORD_KEY = "db_password";

	@Override
	public DataSinkDescription declareModel() {
		return DataSinkBuilder.create("org.streampipes.sinks.databases.jvm.postgresql")
				.withLocales(Locales.EN)
.withAssets(Assets.DOCUMENTATION, Assets.ICON) .category(DataSinkType.STORAGE) .requiredStream(StreamRequirementsBuilder.create() .requiredProperty(EpRequirements.anyProperty()) .build()) .requiredTextParameter(Labels.withId(DATABASE_HOST_KEY)) .requiredIntegerParameter(Labels.withId(DATABASE_PORT_KEY), 5432) .requiredTextParameter(Labels.withId(DATABASE_NAME_KEY)) .requiredTextParameter(Labels.withId(DATABASE_TABLE_KEY)) .requiredTextParameter(Labels.withId(DATABASE_USER_KEY)) .requiredTextParameter(Labels.withId(DATABASE_PASSWORD_KEY)) .build(); } @Override public ConfiguredEventSink<PostgreSqlParameters> onInvocation(DataSinkInvocation graph, DataSinkParameterExtractor extractor) { String hostname = extractor.singleValueParameter(DATABASE_HOST_KEY, String.class); Integer port = extractor.singleValueParameter(DATABASE_PORT_KEY, Integer.class); String dbName = extractor.singleValueParameter(DATABASE_NAME_KEY, String.class); String tableName = extractor.singleValueParameter(DATABASE_TABLE_KEY, String.class); String user = extractor.singleValueParameter(DATABASE_USER_KEY, String.class); String password = extractor.singleValueParameter(DATABASE_PASSWORD_KEY, String.class); PostgreSqlParameters params = new PostgreSqlParameters(graph, hostname, port, dbName, tableName, user, password); return new ConfiguredEventSink<>(params, PostgreSql::new); } }
The city of Detroit reached an agreement on Tuesday with two banks to end a costly interest-rate swap agreement, a significant step as the city negotiates with creditors to put together a plan to exit the largest municipal bankruptcy in U.S. history. Detroit will pay $165 million, plus up to $4.2 million in costs, to end the interest-rate swap agreements with UBS AG and Bank of America Corp's Merrill Lynch Capital Services at a 43 percent discount. The new agreement, which was reached after the judge overseeing the case implored the city to negotiate better terms than it first proposed, will save the city about $65 million. As part of the arrangement, Detroit will also take out a $285 million loan from Barclays PLC to pay to end the swaps. It will use $120 million of that toward improvements to services in the city, which is hampered by $18.5 billion in debt. Terms of the agreement were announced by Robert Hertzberg, of the law firm Pepper Hamilton, which represents Detroit, before U.S. District Judge Gerald Rosen, the chief mediator in the bankruptcy case. The deal must still be approved by the U.S. bankruptcy judge overseeing the case, Steven Rhodes. Robert Gordon, an attorney representing the city's two pension funds, said the funds would continue to oppose the deal even with the changes. "The revised deal is better, but that is not saying a lot," Gordon, of the law firm Clark Hill, wrote in an email. The deal was reached after two days of mediation this week, led by Rosen. "This is - I think it's the first, I think it's fair to say, significant agreement in the bankruptcy," Rosen said, according to a court transcript. Detroit had initially secured a $350 million loan from Barclays, of which about $230 million would be used to end the swap agreements with UBS and Merrill Lynch at 75 cents on the dollar. The remainder of the cash was slated to be used to improve city services.
The swaps had been intended to hedge interest rate risk for a portion of $1.4 billion of pension debt Detroit sold in 2005 and 2006. A spokesman for Bank of America declined to comment. UBS could not be reached immediately for comment. Rhodes last week encouraged Detroit to negotiate better terms with the banks after he halted a hearing at which the city was seeking approval of the deal. The agreement can be terminated if it is not approved by January 31, 2014. Detroit plans to file a request with Rhodes to approve the deal by Friday, said Hertzberg, the city's attorney. Detroit Emergency Manager Kevyn Orr, in a statement, called the deal an "important development. This agreement represents a significant reduction from the original deal struck with the banks," Orr said. "The banks and the City, through mediation, and with the mediator's recommendation, have accepted the reduction in terms."
Happy Manchester Derby weekend! Manchester United Manager Jose Mourinho met the media a few hours ago to preview this weekend's huge clash across town at Manchester City. We have video of the entire media session for you here below via YouTube. Mourinho would not be drawn into a discussion on the allegations that City have skirted Financial Fair Play rules. "It is difficult for me to answer because I focus on my job, I focus on the four lines, focus on football," Mourinho said. "If you want to speak about their football potential, we can speak and football potential starts with investment. "After that, of course, there is a quality of the work, of the organization. I think that is untouchable but what is behind (the scenes at the club) I cannot say." United head into this one trailing City by nine points, and the result will obviously have tremendous consequences for United's Premier League title ambitions this season. Mourinho refuses to interpret the outcome of the game in this manner, however. "In this moment I have only to think about football, and to think about football is to think about Manchester City as a football team," he said. "If we draw (the gap between the two teams) is nine points if we lose it is 12. I don't think it is the way to look at the match," the Portuguese continued.
Cold Agglutinin Disease Shows Highly Recurrent Gains of Chromosome 3, 12 and 18
Cold agglutinin disease (CAD) is a rare autoimmune hemolytic anemia, caused by a distinct type of B-cell lymphoproliferative disease of the bone marrow. Anemia is mediated by binding of monoclonal cold agglutinin antibodies to the erythrocyte surface I antigen at temperatures below central body temperature, followed by agglutination and complement activation. These antibodies are almost exclusively encoded by IGHV4-34. We recently found recurrent KMT2D and CARD11 gene mutations in CAD (Malecka, et al 2017). We analyzed 12 CAD samples from the CAD-5 study (NCT02689986) (Berentsen, et al 2017) using the Affymetrix OncoScan CNV Plus Assay and exome sequencing data in order to detect chromosomal aberrations. All samples were sorted using fluorescence activated cell sorting prior to analysis. Complete or partial gain of chromosome 3 (+3 or +3q) was detected in all samples (Table 1). Additionally, most cases showed either gain of chromosome 12 or 18 (9/12) (Table 1). Additional small regions of recurrent gain or loss were also detected in other chromosomes. The recurrent changes present in at least 4 samples are: +1p36.31-p36.13, -8p21.3-p21.2, +9q34.2-q34.3, +11q13.1-q13.2, +17q25.3, +21q22.3, +22q13.31-q13.33. Gains and losses of large parts of chromosomes are clearly visible in both the Affymetrix OncoScan CNV Plus Assay and exome sequencing data (Figure 1). In contrast, smaller changes are often not visible in exome sequencing data and therefore could not be fully validated. Further validation will be required for smaller chromosomal changes. Gain of chromosome 3/3q was characteristic for all CAD cases analyzed. Gain of chromosome 3 in CAD has previously been reported in 9 of 26 patients (Gordon, et al 1990, Michaux, et al 1995). Gain of chromosome 3 is not a highly recurrent finding in most B-cell lymphomas with the exception of marginal zone lymphoma (MZL).
Of interest, gain of chromosome 12 and 18, demonstrated in our study, is also a feature of MZL. These findings, together with our previous findings (Malecka, et al 2017), suggest that the CAD-associated lymphoproliferative disease, although exclusively present in the bone marrow, might be related to MZL. The CAD-5 study assessed the efficacy of bendamustine plus rituximab in CAD patients. We explored whether a correlation existed between presence or absence of trisomy 12 and 18 and response to therapy. Although the series is small, we found a trend towards poorer response in patients with trisomy 18 compared to patients with trisomy 12. Furthermore, both patients without trisomy 12 or 18 had the best responses (Table 1). Despite the limited number of cases, the Pearson correlation was statistically significant (p=0.02). However, more samples need to be analyzed to confirm these results. A correlation between response to therapy and presence of trisomy 12 or 18 has previously been demonstrated for other B-cell lymphomas. Trisomy 12 is frequent in patients with small lymphocytic lymphoma (28-36%) and in chronic lymphocytic leukemia (10-25%), and in the latter +12 is associated with intermediate prognosis (Autore, et al 2018). Trisomy 3 and 18 have been found to correlate with advanced-stage extranodal marginal zone lymphoma (Krugmann, et al 2005). There is evidence from previous studies that trisomy 18 may be associated with upregulation of BCL2, an anti-apoptotic gene located on chromosome 18. Whether BCL2 is upregulated in CAD with trisomy 18, and whether it might be the cause of therapy resistance, needs to be investigated. Such a study is of importance since BCL2 inhibitors might be considered to improve treatment responses. We conclude that gain of chromosome 3 is a highly recurrent finding in CAD-associated lymphoproliferative disease. Further, gains of chromosome 12 and 18 might be predictors of therapy outcome.
These genetic findings are similar to what has previously been demonstrated in nodal and extranodal MZL, suggesting that CAD-associated lymphoproliferative disease, a disease of the bone marrow, might be related to MZL. Berentsen: Mundipharma: Research Funding; Apellis, Bioverativ (a Sanofi company), Momenta Pharmaceuticals, and True North Therapeutics: Consultancy; Alexion, Apellis, and Janssen-Cilag: Honoraria. Tjønnfjord: Mundipharma: Research Funding.
import java.util.*;

public class Problem1 {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);
		int test = sc.nextInt();
		while (test-- != 0) {
			int x = sc.nextInt();
			int y = sc.nextInt();
			String str = sc.next();
			// count the available moves in each direction
			int U = 0, D = 0, L = 0, R = 0;
			for (char ch : str.toCharArray()) {
				if (ch == 'D') D++;
				else if (ch == 'U') U++;
				else if (ch == 'L') L++;
				else R++;
			}
			// the target is reachable iff there are enough moves toward it on each axis
			if ((x > 0 && R < x) || (x < 0 && L < Math.abs(x))) {
				System.out.println("NO");
			} else if ((y > 0 && U < y) || (y < 0 && D < Math.abs(y))) {
				System.out.println("NO");
			} else {
				System.out.println("YES");
			}
		}
	}
}
//Vanya and cubes problem codeforce page-1
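The per-test-case check above boils down to counting the moves available in each direction and verifying there are enough toward the target on each axis. A compact Python sketch of the same check (function name assumed; it mirrors the Java logic above, not any reference solution):

```python
def can_reach(x, y, moves):
    # count the available unit steps in each direction
    r, l, u, d = (moves.count(c) for c in "RLUD")
    # enough R (or L) steps must cover x, and enough U (or D) steps cover y
    return (r >= max(x, 0) and l >= max(-x, 0)
            and u >= max(y, 0) and d >= max(-y, 0))
```

Writing the condition with `max(x, 0)` / `max(-x, 0)` collapses the four sign cases of the Java version into a single boolean expression.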
Melissa Metzler, of Doylestown, Penn., said lessons from past tragedies may have prevented her from nearly dying when she gave birth to twins six years ago. Nathan Butler lost his wife, Jessica, when she died while pregnant with their daughter, who also died. Jessica Butler is one of thousands of mothers torn from their families in a nation with the highest rate of maternal death in the developed world. Nate now focuses on spending as much time as possible with his son, Max, 9, near their Littleton, Colorado home.
/*
SGF - Salvat Game Fabric (https://sourceforge.net/projects/sgfabric/)
Copyright (C) 2010-2011 <NAME> <<EMAIL>>

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
675 Mass Ave, Cambridge, MA 02139, USA.
*/

#ifndef _S_G_F_Binary_CHeap_Generic_
#define _S_G_F_Binary_CHeap_Generic_

/*
Heaps

Another special kind of binary tree is called a heap. To understand what a heap is, we first need to define full and complete binary trees. In a full binary tree, every node is either a parent with two children or a leaf. In a complete binary tree, every level except the last must be completely filled; on the last level, all nodes must be filled in from the left with no gaps between them, although the level does not have to be filled to the end.

A heap is a complete binary tree that is partially ordered with either the max-heap or the min-heap property. That is, if a heap is a max-heap, then the children of every node have smaller values than the node itself. In a min-heap, the children of every node are greater than the node itself. With such an arrangement, the root always holds either the highest or the lowest value in the tree.

For demonstration purposes we will show how to implement a max-heap, which is also an important part of the heapsort algorithm, covered later. [It is easy to change the code to work as a min-heap by flipping the relational operators between node values.] A max-heap is typically used to maintain priority queues. Priority queues store values and release the object with the highest "priority" (or value) when needed. For example, a value is associated with a given task in a program, placed in a structure of this kind, and then executed based on its position.

Since a heap must respect the complete-tree property, simple formulas can be developed to find the logical position of the children of a parent node given the position of the node itself. It is therefore very easy and efficient to implement a heap using arrays, and this is done most of the time, even when dynamic memory allocation is available.

In an array implementation, we must allocate a certain amount of memory space to be used for the heap. The space may never be used up, and is therefore wasted memory. At other times we may need to add more nodes to the heap than the allocated memory allows. However, we usually allocate more space than we think will be needed in order to ensure the heap is usable. If we have a tree of very large structures, this space can be significant; that is the price we always pay for greater efficiency. Note, though, that we no longer need three pointers for each tree node (left, parent, right), which took up a lot of space in the dynamic-memory implementation. The logical position of a node in the heap corresponds to the index of the node's position in the array, making it very easy to access any node. The generic implementation is shown below.
*/
#include "../exceptions/all.h"
#include <new>  // for std::bad_alloc

/**
 * \class CMaxHeap
 *
 * \ingroup SGF_Util
 *
 * \brief Generic binary max-heap
 *
 * \note this code was adapted and modified from the following base: http://library.thinkquest.org/C005618/text/heaps.htm
 * \note ItemType must implement the assignment operator for this template to work
 *
 * \author (last to touch it) $Autor: <NAME> $
 *
 * \version 1.0 $Revision: 1.0 $
 *
 * Contact: <EMAIL>
 *
 * Created on: January 14, 2012
 */
template <class ItemType>
class CMaxHeap {
public:
	CMaxHeap();
	CMaxHeap(int size);
	~CMaxHeap();
	void init(int toSize);
	void resize(int newSize);
	int left(int) const;
	int right(int) const;
	int parent(int) const;
	void insert(const ItemType&);
	ItemType remove_max();
	bool IsEmpty() const;
	bool IsFull() const;
	int count() const;
	ItemType value(int) const;
	void swap(int pos, int new_pos);
private:
	ItemType *_array;
	int size;      // capacity of the heap
	int elements;  // how many elements are in the CMaxHeap
	void ReHeap(int);
	int swap_pos;  // index of the spare slot used as temporary storage by swap()
};

// default constructor - initialize private members; call init() before use
template <class ItemType>
CMaxHeap<ItemType>::CMaxHeap() : _array(0), size(0), elements(0), swap_pos(0)
{
}

template <class ItemType>
CMaxHeap<ItemType>::CMaxHeap(int size) : _array(0), size(size), elements(0), swap_pos(size)
{
	try {
		_array = new ItemType[size + 1];  // one extra slot used by swap()
	} catch (std::bad_alloc& xa) {
		// Catch block, for exceptions
		throw CGeneralError("Allocation Memory Error", __FUNCTION__, __LINE__);
	}
}

template <class ItemType>
CMaxHeap<ItemType>::~CMaxHeap()
{
	if (_array) delete [] _array;
}

/** when the heap is built with the default constructor it still needs
 * to be initialized with a given size; this method performs that
 * initialization */
template <class ItemType>
void CMaxHeap<ItemType>::init(int toSize)
{
	if (_array) delete [] _array;
	size = toSize;
	elements = 0;
	swap_pos = toSize;
	try {
		_array = new ItemType[toSize + 1];
	} catch (std::bad_alloc& xa) {
		// Catch block, for exceptions
		throw CGeneralError("Allocation Memory Error", __FUNCTION__, __LINE__);
	}
}

/** if the heap is full, its capacity can be changed with this method
 * \note shrinking the heap is not allowed */
template <class ItemType>
void CMaxHeap<ItemType>::resize(int newSize)
{
	if (size == 0) {
		init(newSize);
		return;
	} else if (newSize > size) {
		ItemType *_new_array;
		int currentSize = size + 1;  // one extra element for swap
		// create a new buffer with the right size
		try {
			_new_array = new ItemType[newSize + 1];
		} catch (std::bad_alloc& xa) {
			// Catch block, for exceptions
			throw CGeneralError("Allocation Memory Error", __FUNCTION__, __LINE__);
		}
		// copy the elements of the old heap into the new one
		for (int i = 0; i < currentSize; i++) {
			_new_array[i] = _array[i];
		}
		// delete the old buffer and adopt the new one
		if (_array) delete [] _array;
		_array = _new_array;
		size = newSize;
		swap_pos = newSize;
	} else {
		throw CGeneralError("Operation not allowed: cannot resize the heap to a smaller size", __FUNCTION__, __LINE__);
	}
}

/** The left(), right() and parent() methods return the indices of the
	child and parent positions, since the index of each element
	corresponds to the element's logical position within the heap.

	Heap implemented in an array: if node x is in location i,
		root ---------------------> [0]
		left_child(x) ------------> 2i + 1
		right_child(x) -----------> 2i + 2
		parent(x) ----------------> [i-1] / 2
*/
template <class ItemType>
int CMaxHeap<ItemType>::left(int root) const
{
	if (elements <= (root * 2) + 1) throw CGeneralError("Does not have a left child", __FUNCTION__, __LINE__);
	return (root * 2) + 1;
}

template <class ItemType>
int CMaxHeap<ItemType>::right(int root) const
{
	if (elements <= (root * 2) + 2) throw CGeneralError("Does not have a right child", __FUNCTION__, __LINE__);
	return (root * 2) + 2;
}

template <class ItemType>
int CMaxHeap<ItemType>::parent(int child) const
{
	if (child <= 0) throw CGeneralError("Root node does not have a parent", __FUNCTION__, __LINE__);
	return (child - 1) / 2;
}

/**
 * The swap method takes the positions of two items as parameters.
 * It works by exchanging those items in the heap, using the spare
 * slot at swap_pos as temporary storage.
 */
template <class ItemType>
void CMaxHeap<ItemType>::swap(int pos, int new_pos)
{
	if (pos < 0 || pos > size || new_pos > size || new_pos < 0) throw CGeneralError("Out of range swap attempt", __FUNCTION__, __LINE__);
	_array[swap_pos] = _array[pos];
	_array[pos] = _array[new_pos];
	_array[new_pos] = _array[swap_pos];
}

/**
 * The insert method takes the value of a new item as a parameter.
 * It works by inserting the new item at the end of the heap and
 * exchanging it with its parent whenever the parent has a smaller
 * value. The new item keeps moving up, swapping with each new parent,
 * until the parent's value is greater than its own.
 */
template <class ItemType>
void CMaxHeap<ItemType>::insert(const ItemType &item)
{
	if (IsFull()) throw CGeneralError("Heap is full", __FUNCTION__, __LINE__);
	if (_array == NULL) throw CGeneralError("Heap is null", __FUNCTION__, __LINE__);
	/** elements is the position in the array right after the last
	    occupied position, since indexing starts at zero */
	_array[elements] = item;
	int new_pos = elements;  //! index of the new item
	elements++;              //! update the number of elements in the heap
	// loop while the item has not become the root and its value is greater than its parent's
	while ((new_pos > 0) && (_array[new_pos] > _array[parent(new_pos)])) {
		swap(new_pos, parent(new_pos));  //! exchange the item with its parent
		new_pos = parent(new_pos);       // update the item's position
	}
}

/** the remove_max() method removes the item with the highest priority
 * and returns its value. The item is exchanged with the last item, and
 * the element count is decreased by one. The new root may not have the
 * highest priority, so the ReHeap() method is used to move the new root
 * to its proper position, preserving the heap property.
 * \note the item is not physically deleted; it remains part of the array
 * \note it is no longer part of the heap because the element count is
 * updated and the heap only extends to elements - 1
 */
template <class ItemType>
ItemType CMaxHeap<ItemType>::remove_max()
{
	if (IsEmpty()) throw CGeneralError("Heap is empty", __FUNCTION__, __LINE__);
	if (_array == NULL) throw CGeneralError("Heap not initialized", __FUNCTION__, __LINE__);
	elements--;  //! update the number of elements in the heap
	if (elements != 0)  //! so we do not delete the root
	{
		swap(0, elements);
		ReHeap(0);
	}
	return _array[elements];
}

/** the ReHeap method checks whether a child of the root is greater than
 * the root, in which case the greater child is exchanged with the root.
 * The process continues recursively until the root becomes greater than
 * both of its children. */
template <class ItemType>
void CMaxHeap<ItemType>::ReHeap(int root)
{
	if ((root * 2) + 1 >= elements) return;  // no children: nothing to do
	int child = (root * 2) + 1;
	// if a right child exists and is greater than the left child, use it instead
	if ((child < (elements - 1)) && (_array[child] < _array[child + 1]))
		child++;
	if (_array[root] >= _array[child])  //! stop if the root is greater than its greater child
		return;
	swap(root, child);  // exchange the root with its greater child
	ReHeap(child);      // continue the process at the root's new child
}

/** method that returns the number of elements in the heap */
template <class ItemType>
int CMaxHeap<ItemType>::count() const
{
	return elements;
}

/** method that returns the value of an element in the heap */
template <class ItemType>
ItemType CMaxHeap<ItemType>::value(int pos) const
{
	//! is pos a valid index?
	if (pos < 0 || pos >= elements) throw CGeneralError("Out of range position", __FUNCTION__, __LINE__);
	return _array[pos];
}

/* method that tells whether the heap is empty */
template <class ItemType>
bool CMaxHeap<ItemType>::IsEmpty() const
{
	return (elements <= 0);
}

/* method that tells whether the heap is full */
template <class ItemType>
bool CMaxHeap<ItemType>::IsFull() const
{
	return (elements >= size);
}

#endif
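The array-index arithmetic the comments above describe (left child at 2i+1, right child at 2i+2, parent at (i-1)/2) and the sift-up step used by insert can be demonstrated independently of the C++ template. A minimal sketch in Python, not tied to the CMaxHeap class:

```python
def parent(i): return (i - 1) // 2   # parent(x) -> [i-1] / 2
def left(i):   return 2 * i + 1      # left_child(x) -> 2i + 1
def right(i):  return 2 * i + 2      # right_child(x) -> 2i + 2

def heap_insert(heap, item):
    # append at the end, then sift up while greater than the parent (max-heap)
    heap.append(item)
    i = len(heap) - 1
    while i > 0 and heap[i] > heap[parent(i)]:
        heap[i], heap[parent(i)] = heap[parent(i)], heap[i]
        i = parent(i)

h = []
for v in [3, 9, 1, 7]:
    heap_insert(h, v)
# h[0] now holds the maximum, and every node is <= its parent
```

Because the complete-tree shape is implicit in the index formulas, no left/parent/right pointers are stored, which is exactly the space saving the text argues for.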
package main // Importing package fmt (format) import ( "fmt" ) func main() { fmt.Print("Hello World") }
Features of use of CO2-laser with involutive skin changes depending on morphotype
We investigated the effect of combined use of superficial and deep fractional ablation on the different morphotypes of skin aging. We studied the condition of the skin in 32 patients, including evaluation of the results using a series of 3D-visualization images on the ANTERA 3D device (Ireland), photographic documentation on the FotoFinder unit (Germany), and a questionnaire scale to assess patient satisfaction with the procedure. Patients were divided into three groups according to morphotypes of involutive changes: wrinkled, deformational and mixed. According to the results of the study, the following trend was determined after the procedure: with a mixed morphotype, patient satisfaction is higher for the correction of wrinkles and skin quality, while with a deformational morphotype, satisfaction is higher for skin texture, skin color and the treatment of pigment disorders.
package si.vicos.annotations.editor; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import javax.imageio.ImageIO; import org.coffeeshop.cache.DataCache; import org.coffeeshop.io.TempDirectory; /** * The Class ImageCache. */ public class ImageCache extends DataCache<Object, BufferedImage> { /** * Instantiates a new image cache. * * @param memoryLimit * the memory limit * @param totalLimit * the total limit * @param tempDir * the temp dir * @throws IOException * Signals that an I/O exception has occurred. */ public ImageCache(long memoryLimit, long totalLimit, TempDirectory tempDir) throws IOException { super(memoryLimit, totalLimit, tempDir); } /* * (non-Javadoc) * * @see org.coffeeshop.cache.DataCache#getDataLength(java.lang.Object) */ @Override protected long getDataLength(BufferedImage object) { if (object == null) return 0; return object.getWidth() * object.getHeight() * object.getColorModel().getNumComponents(); } /* * (non-Javadoc) * * @see org.coffeeshop.cache.DataCache#readData(java.io.File, long) */ @Override protected BufferedImage readData(File file, long length) throws IOException { return ImageIO.read(file); } /* * (non-Javadoc) * * @see org.coffeeshop.cache.DataCache#writeData(java.io.File, * java.lang.Object) */ @Override protected void writeData(File file, BufferedImage data) throws IOException { ImageIO.write(data, "PNG", file); } }
import unittest

from config import getConfig


class SettingTestCase(unittest.TestCase):
    def test_something(self):
        config = getConfig()
        self.assertIsNotNone(config)
        # print every non-dunder attribute of the config object
        for k in [kk for kk in dir(config) if not kk.startswith("__")]:
            print(k, "=", getattr(config, k))


if __name__ == '__main__':
    unittest.main()
Insulin-induced lipohypertrophy: clinical and ultrasound characteristics
Background: Lipohypertrophy is a primary dermal complication of insulin therapy. The data on the prevalence of lipohypertrophy in diabetic subjects are inconsistent, which may be due to the low sensitivity and subjectivity of palpation as a diagnostic technique. Meanwhile, the reliability of lipohypertrophy detection can be increased by ultrasound. Aims: To compare clinical and ultrasound characteristics and to determine the risk factors of insulin-induced lipohypertrophy in diabetic subjects. Materials and methods: We observed 82 patients, including 26 individuals with type 1 diabetes and 56 subjects with type 2 diabetes. Duration of insulin therapy varied from 3 months to 37 years (median 14 years). The sites of insulin injections were assessed by palpation and ultrasound. The visualization protocol included gray-scale densitometry, strain elastography, and 3D Doppler power ultrasound. Scaled evaluation of ultrasound signs was applied. Insulin injection technique was assessed by questionnaire. Serum levels of insulin antibodies were determined by ELISA. Results: Lipohypertrophy was revealed by palpation and ultrasound in 57 and 80 patients (70% and 98%), respectively. Total lipohypertrophy area, acoustic density and total ultrasound score showed weak positive correlations with daily insulin dose (r=0.3, r=0.3 and r=0.35, respectively; all p≤0.006). Patients receiving insulin analogues had a smaller area of abdominal lipohypertrophy than those on human insulin (p=0.03). A positive correlation was found between abdominal lipohypertrophy area and mean postprandial glucose (r=0.35, p=0.001). Infrequent needle changing and injections into lipohypertrophy sites were the most common deviations in insulin injection technique (70 and 47 subjects, 85% and 53%, respectively). The levels of insulin antibodies showed no association with lipohypertrophy parameters.
Conclusions: Patients with type 1 and type 2 diabetes demonstrate a high prevalence of lipohypertrophy at insulin injection sites. Ultrasonography is a more sensitive method for diagnosing lipohypertrophy than palpation. Insulin-induced lipohypertrophy is associated with errors in injection technique and higher insulin doses.
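The headline detection figures follow directly from the reported counts; a quick sketch (the counts of 57 and 80 positives out of 82 patients come from the abstract, while the helper function and rounding are illustrative assumptions):

```python
def detection_rate(positives: int, n: int) -> int:
    """Return the detection rate as a whole-number percentage."""
    return round(100 * positives / n)

n_patients = 82
by_palpation = detection_rate(57, n_patients)   # palpation-positive patients
by_ultrasound = detection_rate(80, n_patients)  # ultrasound-positive patients

print(by_palpation, by_ultrasound)  # 70 98, matching the reported 70% and 98%
```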
The determination of the free energy of an electrical double layer. A well-known charging process, in which the double layer is built up by a transfer of ions from one phase to another, is used to obtain the free energy of an electrical double layer. The present study formulates proofs that this charging process cannot determine the free energy of an electrical double layer.
// This file is part of the dune-gdt project:
//   http://users.dune-project.org/projects/dune-gdt
// Copyright holders: <NAME>
// License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)

#define DUNE_STUFF_TEST_MAIN_CATCH_EXCEPTIONS 1 // this one has to come first

#include <dune/stuff/test/main.hxx>

#include "spaces_fv_default.hh"
#include "spaces_dg_fem.hh"
#include "spaces_cg_pdelab.hh"

#include "products_elliptic.hh"

typedef testing::Types< SPACE_FV_SGRID(1, 1)
                      , SPACE_FV_SGRID(2, 1)
                      , SPACE_FV_SGRID(3, 1) > ConstantSpaces;

TYPED_TEST_CASE(EllipticLocalizableProduct, ConstantSpaces);
TYPED_TEST(EllipticLocalizableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(EllipticLocalizableProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(EllipticLocalizableProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(EllipticLocalizableProduct, quadratic_arguments) { this->quadratic_arguments(); }

TYPED_TEST_CASE(SimplifiedEllipticLocalizableProduct, ConstantSpaces);
TYPED_TEST(SimplifiedEllipticLocalizableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(SimplifiedEllipticLocalizableProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(SimplifiedEllipticLocalizableProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(SimplifiedEllipticLocalizableProduct, quadratic_arguments) { this->quadratic_arguments(); }

#if HAVE_DUNE_FEM

typedef testing::Types< SPACE_DG_FEM_SGRID(1, 1, 3)
                      , SPACE_DG_FEM_SGRID(2, 1, 3)
                      , SPACE_DG_FEM_SGRID(3, 1, 3) > CubicSpaces;

TYPED_TEST_CASE(EllipticAssemblableProduct, CubicSpaces);
TYPED_TEST(EllipticAssemblableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(EllipticAssemblableProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(EllipticAssemblableProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(EllipticAssemblableProduct, quadratic_arguments) { this->quadratic_arguments(); }

TYPED_TEST_CASE(SimplifiedEllipticAssemblableProduct, CubicSpaces);
TYPED_TEST(SimplifiedEllipticAssemblableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(SimplifiedEllipticAssemblableProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(SimplifiedEllipticAssemblableProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(SimplifiedEllipticAssemblableProduct, quadratic_arguments) { this->quadratic_arguments(); }

#elif HAVE_DUNE_PDELAB // HAVE_DUNE_FEM

typedef testing::Types< SPACE_CG_PDELAB_SGRID(1, 1, 1)
                      , SPACE_CG_PDELAB_SGRID(2, 1, 1)
                      , SPACE_CG_PDELAB_SGRID(3, 1, 1) > LinearSpaces;

TYPED_TEST_CASE(EllipticAssemblableProduct, LinearSpaces);
TYPED_TEST(EllipticAssemblableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(EllipticAssemblableProduct, constant_arguments) { this->constant_arguments(); }
TEST(DISABLED_EllipticAssemblableProduct, linear_arguments) {}
TEST(DISABLED_EllipticAssemblableProduct, quadratic_arguments) {}

TYPED_TEST_CASE(SimplifiedEllipticAssemblableProduct, LinearSpaces);
TYPED_TEST(SimplifiedEllipticAssemblableProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(SimplifiedEllipticAssemblableProduct, constant_arguments) { this->constant_arguments(); }
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, linear_arguments) {}
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, quadratic_arguments) {}

#else // HAVE_DUNE_PDELAB // HAVE_DUNE_FEM

TEST(DISABLED_EllipticAssemblableProduct, fulfills_interface) {}
TEST(DISABLED_EllipticAssemblableProduct, constant_arguments) {}
TEST(DISABLED_EllipticAssemblableProduct, linear_arguments) {}
TEST(DISABLED_EllipticAssemblableProduct, quadratic_arguments) {}
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, fulfills_interface) {}
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, constant_arguments) {}
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, linear_arguments) {}
TEST(DISABLED_SimplifiedEllipticAssemblableProduct, quadratic_arguments) {}

#endif // HAVE_DUNE_PDELAB // HAVE_DUNE_FEM

TYPED_TEST_CASE(EllipticProduct, ConstantSpaces);
TYPED_TEST(EllipticProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(EllipticProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(EllipticProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(EllipticProduct, quadratic_arguments) { this->quadratic_arguments(); }

TYPED_TEST_CASE(SimplifiedEllipticProduct, ConstantSpaces);
TYPED_TEST(SimplifiedEllipticProduct, fulfills_interface) { this->fulfills_interface(); }
TYPED_TEST(SimplifiedEllipticProduct, constant_arguments) { this->constant_arguments(); }
TYPED_TEST(SimplifiedEllipticProduct, linear_arguments) { this->linear_arguments(); }
TYPED_TEST(SimplifiedEllipticProduct, quadratic_arguments) { this->quadratic_arguments(); }
package com.yhl.spider.task;

import com.yhl.convert.Convert;

import java.util.Map;

/**
 * @author : yhl
 * date : 2021-09-06
 */
public interface SpiderTask<OUT, IN> {

    /**
     * Save the crawled data.
     *
     * @param spiderData
     */
    void save(OUT spiderData);

    /**
     * Get the converter for outbound data.
     *
     * @return
     */
    Convert<OUT> getConvertOutData();

    /**
     * Get the converter for inbound data.
     *
     * @param out
     * @return
     */
    Convert<IN> getConvertInData(OUT out);

    /**
     * Build the request parameters.
     *
     * @param parameter
     * @return
     */
    RequestData getRequest(Map<String, String> parameter);
}
Manuel de Mendiburu Manuel de Mendiburu (1805–1885) was a Peruvian statesman and historian born in Lima. He was educated at the University of San Marcos. When the movement for independence reached Peru, he joined the patriot army as a color sergeant in 1821. As lieutenant he distinguished himself in various battles, and was captured by the Spanish. After the end of the war he was made captain (1830) and by 1851 had reached the rank of general. He filled various roles during his long and active political career. In 1831 he was sent on special missions to Brazil and Spain. From 1834 to 1870 he was employed in the government service, filling successively the positions of prefect of various departments; holding the portfolios of Government, Treasury, Foreign Relations, and War and Marine; and serving as deputy in congress, Vice President of the Constituent Assembly, and Minister of Peru to England, to Bolivia, and to Chile. In 1870 he was placed in charge of the School of Arts and Trades at Lima. He organized the Archivo Nacional de Lima, which contains his monumental work Diccionario histórico-biográfico del Perú (1874–85). The publication is invaluable for the study of Peruvian history, especially during the colonial epoch. Two digitized volumes are available at the Virtual Library Miguel de Cervantes.
An IoT Environment for Estimating Occupants' Thermal Comfort The emerging Internet of Things (IoT) environment permits smart building management in a way that enhances occupants' thermal comfort, which is directly related to occupant health, while maintaining energy efficiency. The Predicted Mean Vote (PMV) model is acknowledged as the most widely used model for estimating occupants' thermal comfort in air-conditioned spaces. However, few works deal with the assessment of occupants' thermal comfort in real time. In this paper, an accurate thermal comfort model for smart building control in real time is derived from IoT-generated building data. An auto-regressive with exogenous variables (ARX) model is used to determine the relation between thermal comfort and different personal, outdoor and indoor attributes. As a thermal comfort model may be derived from different attributes, a sensitivity analysis using the Morris method is used to select the model input parameters that result in lower error and cost. After that, an ARX model using the chosen parameters is trained. Our experiments show that Fanger's parameters are the best combination of parameters for estimating the PMV index. They also demonstrate the efficiency of the ARX model in terms of prediction accuracy and complexity. Moreover, results show that the ARX model improves the mean absolute error and complexity of the thermal prediction when compared to other machine learning models.
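A minimal sketch of the ARX idea described above, on synthetic data — the variable names, lag order, and coefficients are illustrative stand-ins, not the paper's actual sensors or dataset. The output at time t is regressed on its own past value and a past exogenous input, with coefficients fit by least squares:

```python
import numpy as np

# Illustrative ARX(1,1) fit: y[t] = a*y[t-1] + b*u[t-1] + noise.
# The synthetic series below stands in for IoT sensor data; the study
# itself feeds Fanger's PMV inputs measured in a real building.
rng = np.random.default_rng(0)
u = rng.normal(size=200)   # exogenous input, e.g. indoor temperature
y = np.zeros(200)          # comfort index to be modelled
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.normal()

# Build the regression matrix from lagged outputs and inputs,
# then recover the coefficients by ordinary least squares
X = np.column_stack([y[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef.round(2))  # should be close to the true values 0.8 and 0.5
```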
import { useEffect } from 'react';

export const ProtectPrivacy = () => {
  useEffect(() => {
    window.location.replace(
      'https://www.notion.so/avastar/Avastar-Open-Source-Project-730dbef6c24040d69b4f3a17960979ae'
    );
  });

  return <></>;
};
The bromodomain-containing gene BRD2 is regulated at transcription, splicing, and translation levels The human BRD2 gene has been linked and associated with a form of common epilepsy and electroencephalographic abnormalities. Disruption of Brd2 in the mouse revealed that it is essential for embryonic neural development and that viable Brd2+/− heterozygotes show both decreased GABAergic neuron counts and increased susceptibility to seizures. To understand the molecular mechanisms by which misexpression of BRD2 might contribute to epilepsy, we examined its regulation at multiple levels. We discovered that BRD2 expresses distinct tissue-specific transcripts that originate from different promoters and have strikingly different lengths of 5′ untranslated regions (5′UTR). We also experimentally confirmed the presence of a highly conserved, alternatively spliced exon, inclusion of which would result in a premature termination of translation. Downstream of this alternative exon is a polymorphic microsatellite (GT repeats). Manipulation of the number of the GT repeats revealed that the length of the GT repeats affects the ratio of the two alternative splicing products. In vitro translation and expression in cultured cells revealed that among the four different mRNAs (long and short 5′UTR combined with regular and alternative splicing), only the regularly spliced mRNA with the short 5′UTR yields full-length protein. In situ hybridization and immunohistochemical studies showed that although Brd2 mRNA is expressed in both the hippocampus and cerebellum, Brd2 protein can only be detected in the cerebellar Purkinje cells and not in hippocampal cells. These multiple levels of regulation would likely affect the production of functional BRD2 protein during neural development and hence, its role in the etiology of seizure susceptibility. J. Cell. Biochem. 112: 2784–2793, 2011. © 2011 Wiley-Liss, Inc.
//-----------------------------------------------
//
//	This file is part of the Siv3D Engine.
//
//	Copyright (c) 2008-2022 <NAME>
//	Copyright (c) 2016-2022 OpenSiv3D Project
//
//	Licensed under the MIT License.
//
//-----------------------------------------------

# include <Siv3D/Physics2D/P2Body.hpp>
# include <Siv3D/Physics2D/P2BodyType.hpp>
# include <Siv3D/Physics2D/P2Shape.hpp>
# include <Siv3D/LineString.hpp>
# include <Siv3D/MultiPolygon.hpp>
# include "P2BodyDetail.hpp"

namespace s3d
{
	P2Body::P2Body() : pImpl{ std::make_shared<P2BodyDetail>() } {}

	P2Body::P2Body(const std::shared_ptr<detail::P2WorldDetail>& world, const P2BodyID id, const Vec2& center, const P2BodyType bodyType) : pImpl{ std::make_shared<P2BodyDetail>(world, id, center, bodyType) } {}

	P2BodyID P2Body::id() const noexcept { return pImpl->id(); }

	bool P2Body::isEmpty() const noexcept { return (pImpl->id() == 0); }

	void P2Body::release() { if (isEmpty()) { return; } pImpl = std::make_shared<P2BodyDetail>(); }

	P2Body::operator bool() const noexcept { return (pImpl->id() != 0); }

	P2Body& P2Body::addLine(const Line& localPos, const OneSided oneSided, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addLine(localPos, oneSided, material, filter); return *this; }

	P2Body& P2Body::addLineString(const LineString& localPos, const OneSided oneSided, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } if (localPos.size() < 2) { return *this; } pImpl->addLineString(localPos, CloseRing::No, oneSided, material, filter); return *this; }

	P2Body& P2Body::addClosedLineString(const LineString& localPos, const OneSided oneSided, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } if (localPos.size() < 2) { return *this; } pImpl->addLineString(localPos, CloseRing::Yes, oneSided, material, filter); return *this; }

	P2Body& P2Body::addCircle(const Circle& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addCircle(localPos, material, filter); return *this; }

	P2Body& P2Body::addCircleSensor(const Circle& localPos, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addCircleSensor(localPos, filter); return *this; }

	P2Body& P2Body::addRect(const RectF& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addRect(localPos, material, filter); return *this; }

	P2Body& P2Body::addTriangle(const Triangle& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addTriangle(localPos, material, filter); return *this; }

	P2Body& P2Body::addQuad(const Quad& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addQuad(localPos, material, filter); return *this; }

	P2Body& P2Body::addPolygon(const Polygon& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } pImpl->addPolygon(localPos, material, filter); return *this; }

	P2Body& P2Body::addPolygons(const MultiPolygon& localPos, const P2Material& material, const P2Filter& filter) { if (isEmpty()) { return *this; } for (const auto& polygon : localPos) { pImpl->addPolygon(polygon, material, filter); } return *this; }

	P2Body& P2Body::setSleepEnabled(const bool enabled) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetSleepingAllowed(enabled); return *this; }

	bool P2Body::getSleepEnabled() const noexcept { if (isEmpty()) { return true; } return pImpl->getBody().IsSleepingAllowed(); }

	P2Body& P2Body::setAwake(const bool awake) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetAwake(awake); return *this; }

	bool P2Body::isAwake() const noexcept { if (isEmpty()) { return true; } return pImpl->getBody().IsAwake(); }

	P2Body& P2Body::setPos(const double x, const double y) noexcept { return setPos(Vec2{ x, y }); }

	P2Body& P2Body::setPos(const Vec2 pos) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetTransform(detail::ToB2Vec2(pos), pImpl->getBody().GetAngle()); return *this; }

	P2Body& P2Body::moveBy(const double x, const double y) noexcept { return moveBy(Vec2{ x, y }); }

	P2Body& P2Body::moveBy(const Vec2 v) noexcept { return setPos(getPos() + v); }

	P2Body& P2Body::setAngle(const double angle) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetTransform(pImpl->getBody().GetPosition(), static_cast<float>(angle)); return *this; }

	P2Body& P2Body::rotateBy(const double angle) noexcept { return setAngle(getAngle() + angle); }

	P2Body& P2Body::setTransform(const double x, const double y, const double angle) noexcept { return setTransform(Vec2{ x, y }, angle); }

	P2Body& P2Body::setTransform(const Vec2 pos, const double angle) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetTransform(detail::ToB2Vec2(pos), static_cast<float>(angle)); return *this; }

	P2Body& P2Body::applyForce(const Vec2 force) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyForceToCenter(detail::ToB2Vec2(force), true); return *this; }

	P2Body& P2Body::applyForce(const Vec2 force, const Vec2 offset) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyForce(detail::ToB2Vec2(force), pImpl->getBody().GetWorldCenter() + detail::ToB2Vec2(offset), true); return *this; }

	P2Body& P2Body::applyForceAt(const Vec2 force, const Vec2 pos) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyForce(detail::ToB2Vec2(force), detail::ToB2Vec2(pos), true); return *this; }

	P2Body& P2Body::applyLinearImpulse(const Vec2 force) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyLinearImpulseToCenter(detail::ToB2Vec2(force), true); return *this; }

	P2Body& P2Body::applyLinearImpulse(const Vec2 force, const Vec2 offset) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyLinearImpulse(detail::ToB2Vec2(force), pImpl->getBody().GetWorldCenter() + detail::ToB2Vec2(offset), true); return *this; }

	P2Body& P2Body::applyLinearImpulseAt(const Vec2 force, const Vec2 pos) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyLinearImpulse(detail::ToB2Vec2(force), detail::ToB2Vec2(pos), true); return *this; }

	P2Body& P2Body::applyTorque(const double torque) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyTorque(static_cast<float>(torque), true); return *this; }

	P2Body& P2Body::applyAngularImpulse(const double torque) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().ApplyAngularImpulse(static_cast<float>(torque), true); return *this; }

	Vec2 P2Body::getPos() const noexcept { if (isEmpty()) { return{ 0, 0 }; } return detail::ToVec2(pImpl->getBody().GetPosition()); }

	double P2Body::getAngle() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetAngle(); }

	std::pair<Vec2, double> P2Body::getTransform() const noexcept { if (isEmpty()) { return{ Vec2{ 0, 0 }, 0.0 }; } return{ detail::ToVec2(pImpl->getBody().GetPosition()), pImpl->getBody().GetAngle() }; }

	P2Body& P2Body::setVelocity(const Vec2 v) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetLinearVelocity(detail::ToB2Vec2(v)); return *this; }

	Vec2 P2Body::getVelocity() const noexcept { if (isEmpty()) { return{ 0, 0 }; } return detail::ToVec2(pImpl->getBody().GetLinearVelocity()); }

	P2Body& P2Body::setAngularVelocity(const double omega) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetAngularVelocity(static_cast<float>(omega)); return *this; }

	double P2Body::getAngularVelocity() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetAngularVelocity(); }

	P2Body& P2Body::setDamping(const double damping) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetLinearDamping(static_cast<float>(damping)); return *this; }

	double P2Body::getDamping() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetLinearDamping(); }

	P2Body& P2Body::setAngularDamping(const double damping) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetAngularDamping(static_cast<float>(damping)); return *this; }

	double P2Body::getAngularDamping() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetAngularDamping(); }

	P2Body& P2Body::setGravityScale(const double scale) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetGravityScale(static_cast<float>(scale)); return *this; }

	double P2Body::getGravityScale() const noexcept { if (isEmpty()) { return 1.0; } return pImpl->getBody().GetGravityScale(); }

	double P2Body::getMass() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetMass(); }

	double P2Body::getInertia() const noexcept { if (isEmpty()) { return 0.0; } return pImpl->getBody().GetInertia(); }

	P2Body& P2Body::setBodyType(const P2BodyType bodyType) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetType(static_cast<b2BodyType>(bodyType)); return *this; }

	P2BodyType P2Body::getBodyType() const noexcept { if (isEmpty()) { return P2BodyType::Static; } return static_cast<P2BodyType>(pImpl->getBody().GetType()); }

	P2Body& P2Body::setFixedRotation(const bool fixedRotation) noexcept { if (isEmpty()) { return *this; } pImpl->getBody().SetFixedRotation(fixedRotation); return *this; }

	bool P2Body::isFixedRotation() const noexcept { if (isEmpty()) { return false; } return pImpl->getBody().IsFixedRotation(); }

	const P2Body& P2Body::draw(const ColorF& color) const { if (isEmpty()) { return *this; } for (const auto& shape : pImpl->getShapes()) { shape->draw(color); } return *this; }

	const P2Body& P2Body::drawFrame(const double thickness, const ColorF& color) const { if (isEmpty()) { return *this; } for (const auto& shape : pImpl->getShapes()) { shape->drawFrame(thickness, color); } return *this; }

	const P2Body& P2Body::drawWireframe(const double thickness, const ColorF& color) const { if (isEmpty()) { return *this; } for (const auto& shape : pImpl->getShapes()) { shape->drawWireframe(thickness, color); } return *this; }

	size_t P2Body::num_shapes() const noexcept { if (isEmpty()) { return 0; } return pImpl->getShapes().size(); }

	P2Shape& P2Body::shape(const size_t index) { if (num_shapes() <= index) { throw std::out_of_range{ "P2Body::shape(): index out of range" }; } return *pImpl->getShapes()[index]; }

	const P2Shape& P2Body::shape(const size_t index) const { if (num_shapes() <= index) { throw std::out_of_range{ "P2Body::shape(): index out of range" }; } return *pImpl->getShapes()[index]; }

	const std::shared_ptr<P2Shape>& P2Body::getPtr(const size_t index) const noexcept { return pImpl->getShapes()[index]; }

	std::weak_ptr<P2Body::P2BodyDetail> P2Body::getWeakPtr() const { return pImpl; }
}
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package com.threatconnect.sdk.server.response.entity.data;

import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.threatconnect.sdk.server.entity.Incident;

import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;

/**
 *
 * @author James
 */
@XmlAccessorType(XmlAccessType.FIELD)
public class IncidentListResponseData extends ApiEntityListResponseData<Incident> {

    @JsonSerialize(include = JsonSerialize.Inclusion.NON_NULL)
    @XmlElement(name = "Incident", required = false)
    private List<Incident> incident;

    public List<Incident> getIncident() {
        return incident;
    }

    public void setIncident(List<Incident> incident) {
        this.incident = incident;
    }

    @Override
    @JsonIgnore
    public List<Incident> getData() {
        return getIncident();
    }

    @Override
    public void setData(List<Incident> data) {
        setIncident(data);
    }
}
Primary pancreatic lymphoma: a rare clinical entity Primary pancreatic lymphoma is a rare clinical entity representing <0.5% of pancreatic cancers and 1% of extranodal lymphomas. Due to the paucity of cases described in the literature, its clinicopathological features, differential diagnosis, optimal therapy and outcomes are not well defined. As the clinical manifestations are often non-specific, it can create a diagnostic pitfall for the unwary physician. Preoperative distinction of adenocarcinoma and primary pancreatic lymphoma is critical since the management and prognosis of these malignancies are entirely different. Due to its rarity, epidemiological studies have been difficult to conduct. Chemotherapy with R-CHOP (rituximab, cyclophosphamide, doxorubicin and vincristine) has proven to be effective. The authors present the case of a 52-year-old man with epigastric pain and obstructive jaundice. Further investigation with a CT of the abdomen and pelvis showed a low attenuation mass in the head of the pancreas measuring 35×25 mm, suspicious for malignancy. The mass involved the common bile duct distally, causing moderate retrograde intrahepatic and extrahepatic biliary tree dilation of 14 mm. He underwent endoscopic retrograde cholangiopancreatography, sphincterotomy and insertion of a stent. Core biopsies confirmed the diagnosis of a high-grade B cell pancreatic lymphoma. He started treatment with R-CHOP and prednisolone. Due to disease progression, he started treatment with DA-EPOCH-R (etoposide phosphate, prednisone, vincristine sulfate, cyclophosphamide, doxorubicin hydrochloride and rituximab). There was no clinical response, and treatment with RICE (rituximab, ifosfamide, carboplatin and etoposide) was initiated. He showed partial response and was under consideration for chimeric antigen receptor T cell therapy. He deteriorated clinically and succumbed to his disease 5 months following his initial presentation.
This paper provides an overview of the spectrum of haematological malignancies and describes useful features for distinguishing primary lymphoma of the pancreas from an adenocarcinoma, thereby avoiding unnecessary surgical resection.
import strict_rfc3339
import unicodedata
import jsonschema
import random
import hashlib
import string
import re
import datetime
import json
import io
import os
import logging

from . import Config
from . import datestring

try:
    from bookiesports.normalize import IncidentsNormalizer, NotNormalizableException
    from bos_incidents.validator import IncidentValidator
    from bos_incidents.format import incident_to_string
except Exception:
    raise Exception("Please ensure all BOS modules are up to date")

# %(name) -30s %(funcName) -15s %(lineno) -5d
LOG_FORMAT = ('%(levelname) -10s %(asctime)s: %(message)s')

LOG_FOLDER = os.path.join("dump", "z_logs")


def log_message(message):
    return Config.get("logs", "prefix", default="") + message


def get_log_file_path(log_file_name=None):
    if log_file_name:
        return os.path.join(LOG_FOLDER, log_file_name)
    else:
        return LOG_FOLDER


def save_string(source):
    """ Loads str and bytes object properly into json """
    safe_return = {str: lambda: source,
                   bytes: lambda: source.decode("utf-8")}
    return safe_return[type(source)]()


def search_in(search_for, in_list):
    return search_for.lower() in [x.lower() for x in in_list]


def save_json_loads(source):
    return json.loads(save_string(source))


def slugify(value, allow_unicode=False):
    """
    Converts to a file name suitable string

    Convert to ASCII if 'allow_unicode' is False. Convert spaces to hyphens.
    Remove characters that aren't alphanumerics, underscores, or hyphens.
    Convert to lowercase. Also strip leading and trailing whitespace.
    """
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize('NFKC', value)
    else:
        value = unicodedata.normalize('NFKD', value).encode(
            'ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[\s]+', '-', value)


class CommonFormat(object):

    JSON_SCHEMA_CACHED = None

    MASK = None

    @staticmethod
    def get_mask():
        if CommonFormat.MASK is None:
            mask = Config.get("subscriptions", "mask_providers", default=True)
            if type(mask) == bool:
                mask = json.dumps(Config.get("subscriptions", "witnesses")) +\
                    json.dumps(Config.get("providers"))
            CommonFormat.MASK = mask
        return CommonFormat.MASK

    @staticmethod
    def get_masked_provider(provider_info):
        masked_name = provider_info["name"] + CommonFormat.get_mask()
        masked_name = hashlib.md5(masked_name.encode()).hexdigest()
        # masking removes everything besides name and pushed
        return {
            "name": masked_name,
            "pushed": provider_info["pushed"]
        }

    def reformat_datetimes(self, formatted_dict):
        """ checks every value, if date found replace with rfc3339 string """
        for (key, value) in formatted_dict.items():
            if value:
                if isinstance(value, dict):
                    self.reformat_datetimes(value)
                elif type(value) == str and len(value) == 19 and\
                        re.match(r'\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d', str(value)):
                    formatted_dict[key] = date_to_string(value)

    def validate(self, formatted_dict):
        IncidentValidator().validate_incident(formatted_dict)

    def get_id_as_string(self, incident_id):
        return incident_id["start_time"] \
            + '-' + incident_id["sport"] \
            + '-' + incident_id["event_group_name"] \
            + '-' + incident_id["home"] \
            + '-' + incident_id["away"]

    def prepare_for_dump(self, formatted_dict):
        """ reformats dates, validates the json and creates unique_string identifier """
        self.reformat_datetimes(formatted_dict)
        self.validate(formatted_dict)
        # get all to ensure they exist!
        formatted_dict["id"]
        formatted_dict["call"]
        formatted_dict["arguments"]
        # normalize names right away
        formatted_dict = self.normalize_for_witness(formatted_dict)
        formatted_dict["unique_string"] = incident_to_string(formatted_dict)
        return formatted_dict

    def normalize_for_witness(self, validated_incident):
        return IncidentsNormalizer().normalize(validated_incident)


def date_to_string(date_object=None):
    return datestring.date_to_string(date_object)


def string_to_date(date_string=None):
    return datestring.string_to_date(date_string)
Prevalence, Awareness, Treatment and Control of Hypertension in Russian Federation (Data of Observational ESSE-RF-2 Study) Participants of the study ESSE-RF-2 and co-authors: Moscow: Konstantinov V. V., Pokrovskaya M.S., Efimova I.A., Sivakova O.V.; Krasnodar: Alekseenko S.N., Gubarev S.V.; Omsk: Livzan M.A., Grishechkina I.A., Rozhkova M.Yu.; Republic of Karelia: Vezikova N.N., Skopec I. S.; Ryazan: Filippov E.V., Dobrynina N.V., Nikulina N.N., Pereverzeva K.G., Moseychuk K.A. Aim. To evaluate the prevalence, awareness, treatment, and control of hypertension among people aged 25-64 examined in 4 regions of the Russian Federation. Material and methods. Study materials were the representative selections of non-organized male (n=3000) and female (n=3714) inhabitants aged 25-64 from 4 regions of the Russian Federation (Krasnodar region, Omsk region, Ryazan region, the Republic of Karelia), response rate >80%. A systematic stratified multilevel random selection was formed with locality criteria (Kish method). All the participants were interviewed using the standard questionnaire. The universal epidemiological methods and evaluation criteria were used. The study was approved by the local ethics Committee of National research center for preventive medicine. Participants signed informed consent. Hypertension was defined as an average systolic blood pressure (SBP) ≥140 mmHg and/or average diastolic blood pressure (DBP) ≥90 mmHg and/or antihypertensive therapy (AHT). The efficacy of treatment was defined as the achievement of the target BP. Control was defined as BP<140/90 mmHg. Results. Mean SBP and DBP were 128.7±0.3 mmHg and 82.8±0.1 mmHg, respectively; higher BP was detected among males (p<0.001). The prevalence of hypertension was 44.2% and was higher among males than females (49.1% vs 39.9%, p<0.0005); the highest hypertension frequency was in the Ryazan region. The awareness of hypertension was higher among females than among males (76.8% vs 69.4%).
There were more persons with hypertension grade 1 among those who were not aware of the hypertension. Medications were taken by 65.5% of females and 41.8% of males. Angiotensin-converting enzyme inhibitors were received by 49.9% of patients, angiotensin II receptor antagonists by 30.9%, beta blockers by 29.5%, diuretics by 22.7%, calcium antagonists by 15.7%, centrally acting drugs by 3.3%, and others by 0.2%. The lack of AHT intake was negatively associated with age, ischemic heart disease, urban life and hypo-HDL, especially among males. In females, a heart rate >80 per min increased the probability of absence of AHT by 1.7 times. The prevalence of effective treatment was 49.7% among the treated participants with hypertension. Associations between ineffective treatment and abdominal obesity and ischemic heart disease (males), and age, rural type of settlement and obesity (females) were found. Only 24.9% of patients had controlled hypertension. Conclusion. The prevalence of hypertension in the Russian Federation remains high. An important task of the medical community is to identify the disease at an earlier stage of its development, before the appearance of complications. This approach can reduce the period from the onset of high blood pressure to a visit to the doctor.
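The study's working definitions (hypertension as SBP≥140 mmHg and/or DBP≥90 mmHg and/or AHT; control as BP below 140/90 on therapy) can be encoded directly; the records below are hypothetical illustrations, not study data:

```python
# Hypertension and control classification per the ESSE-RF-2 definitions.
# Each record is (SBP in mmHg, DBP in mmHg, on antihypertensive therapy).

def has_hypertension(sbp: float, dbp: float, on_aht: bool) -> bool:
    return sbp >= 140 or dbp >= 90 or on_aht

def is_controlled(sbp: float, dbp: float, on_aht: bool) -> bool:
    return on_aht and sbp < 140 and dbp < 90

subjects = [
    (128, 82, False),  # normotensive
    (152, 96, False),  # hypertensive, untreated
    (132, 84, True),   # treated and controlled
    (146, 92, True),   # treated but uncontrolled
]
n_htn = sum(has_hypertension(*s) for s in subjects)
n_ctrl = sum(is_controlled(*s) for s in subjects)
print(n_htn, n_ctrl)  # 3 hypertensive, 1 controlled
```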
Deaths related to iodinated contrast media reported spontaneously to the U.S. Food and Drug Administration, 1978-1994: effect of the availability of low-osmolality contrast media. PURPOSE To determine whether reports of iodinated contrast medium-related deaths have decreased since low-osmolality contrast media (LOCM) became widely available in the United States. MATERIALS AND METHODS With use of reports to the U.S. Food and Drug Administration Spontaneous Reporting System, data on iodinated contrast medium-related deaths after LOCM became available were compared with data on deaths in the period before and with data on deaths in an even earlier period. RESULTS In 1967-1994, more than 1,000 contrast medium-related deaths were reported, 850 occurring during 1978-1994. Excluding 22 myelography-related deaths, 37% more deaths were reported each year in 1987-1994 than in 1978-1986. Most of this increase was associated with the use of nonionic contrast media. In 1966-1977, 228 deaths were reported; in 1978-1986, 376; and in 1987-1994, 474. In 1987-1994, 220 deaths were associated with use of high-osmolality contrast media alone, 32 with ionic LOCM alone, 214 with nonionic LOCM alone, and eight with combinations of contrast media. CONCLUSION Despite the availability of LOCM in the United States, data for 1978-1994 do not show a marked decrease in contrast medium-related deaths. Since 1990, more deaths have been associated with LOCM than with conventional contrast media. Although these data have substantial limitations, they shed some light on contrast medium use and safety.
import {ErrorHandlerFactory} from "../../decorator/ErrorHandler"; import {ok} from "../../module/constants/state"; import {Request} from "express"; import {MySQLManager} from "../mysql/MySQLManager"; interface IRawAward { userId: string, award: string, year: number } class AwardManager { bodyChecker(rawAward: any) { if (typeof rawAward === "undefined" || rawAward === null) { throw new Error("request body should not be null or undefined"); } if (typeof rawAward.userId !== "string") { throw new Error("user id should be string"); } if (typeof rawAward.award !== "string") { throw new Error("award should be string"); } if (typeof rawAward.year !== "number") { throw new Error("year should be number"); } } checkAwardId(rawAwardId: any) { if (isNaN(rawAwardId)) { throw new Error("award id should be number"); } } async addAward(awardDto: IRawAward) { return await MySQLManager.execQuery("insert into award(user_id, award, year) values (?, ?, ?)", [awardDto.userId, awardDto.award, awardDto.year]); } async updateAward(awardDto: IRawAward, awardId: number) { return await MySQLManager.execQuery("update award set user_id = ?, award = ?, year = ? 
where award_id = ?", [awardDto.userId, awardDto.award, awardDto.year, awardId]); } async deleteAward(awardId: number) { return await MySQLManager.execQuery("delete from award where award_id = ?", [awardId]); } async selectAwardByAwardId(awardId: number) { return await MySQLManager.execQuery("select * from award where award_id = ?", [awardId]); } @ErrorHandlerFactory(ok.okMaker) async addAwardByRequest(req: Request) { const rawAward = req.body; this.bodyChecker(rawAward); await this.addAward(rawAward as IRawAward); } @ErrorHandlerFactory(ok.okMaker) async updateAwardByAwardIdRequest(req: Request) { const awardId = req.params.award_id; const rawAward = req.body; this.checkAwardId(awardId); this.bodyChecker(rawAward); await this.updateAward(rawAward as IRawAward, parseInt(awardId)); } @ErrorHandlerFactory(ok.okMaker) async deleteAwardByAwardIdRequest(req: Request) { const awardId = req.params.award_id; this.checkAwardId(awardId); await this.deleteAward(parseInt(awardId)); } @ErrorHandlerFactory(ok.okMaker) async getAwardByAwardIdRequest(req: Request) { const awardId = req.params.award_id; this.checkAwardId(awardId); return await this.selectAwardByAwardId(parseInt(awardId)); } } export default new AwardManager();
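A minimal, self-contained sketch of how the request-body validation in `bodyChecker` behaves. The `IRawAward` shape is taken from the source; the standalone `validateRawAward` helper is hypothetical, added here only to illustrate the checks without the Express and MySQL wiring (it returns a boolean where the original throws):

```typescript
interface IRawAward {
  userId: string;
  award: string;
  year: number;
}

// Mirrors the checks in AwardManager.bodyChecker, but returns a boolean
// instead of throwing, so it can be demonstrated standalone.
function validateRawAward(raw: any): raw is IRawAward {
  if (typeof raw === "undefined" || raw === null) return false;
  if (typeof raw.userId !== "string") return false;
  if (typeof raw.award !== "string") return false;
  return typeof raw.year === "number";
}

console.log(validateRawAward({ userId: "u1", award: "ACM Gold", year: 2020 })); // true
console.log(validateRawAward({ userId: "u1", award: "ACM Gold", year: "2020" })); // false
console.log(validateRawAward(null)); // false
```

Using a type guard (`raw is IRawAward`) lets callers narrow the request body to the interface after a successful check, which is the same contract the throwing version enforces implicitly.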
/* * Copyright (C) 2017 Jolla Ltd. * Contact: <NAME> <<EMAIL>> * All rights reserved. * BSD 3-Clause License, see LICENSE. * * This plugin aims to provide a high-level interface for interacting * with Crypto Token USB devices supporting the PKCS#11 standard. * * Copyright (C) 2017 Open Mobile Platform LLC. * Contact: <NAME> <<EMAIL>> * All rights reserved. */ #ifndef SAILFISHCRYPTO_PLUGIN_CRYPTOKI_H #define SAILFISHCRYPTO_PLUGIN_CRYPTOKI_H #include <memory> #include <QtCore/QObject> #include <QtCore/QByteArray> #include <QtCore/QCryptographicHash> #include <QtCore/QMap> //#include "Crypto/extensionplugins.h" #include "libloader.h" #include "tk26.h" class CipherSessionData; class QTimer; namespace Sailfish { namespace Crypto { namespace Daemon { namespace Plugins { class Q_DECL_EXPORT CryptokiPlugin : public QObject, public Sailfish::Crypto::CryptoPlugin { Q_OBJECT Q_PLUGIN_METADATA(IID Sailfish_Crypto_CryptoPlugin_IID) Q_INTERFACES(Sailfish::Crypto::CryptoPlugin) public: CryptokiPlugin(QObject *parent = Q_NULLPTR); ~CryptokiPlugin(); QString name() const Q_DECL_OVERRIDE { #ifdef SAILFISHCRYPTO_TESTPLUGIN return QLatin1String("org.sailfishos.crypto.plugin.crypto.token.test"); #else return QLatin1String("org.sailfishos.crypto.plugin.crypto.token"); #endif } bool canStoreKeys() const Q_DECL_OVERRIDE { return false; } Sailfish::Crypto::CryptoPlugin::EncryptionType encryptionType() const Q_DECL_OVERRIDE { return Sailfish::Crypto::CryptoPlugin::SoftwareEncryption; } QVector<Sailfish::Crypto::CryptoManager::Algorithm> supportedAlgorithms() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::BlockMode> > supportedBlockModes() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::EncryptionPadding> > supportedEncryptionPaddings() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::SignaturePadding> > 
supportedSignaturePaddings() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::DigestFunction> > supportedDigests() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::MessageAuthenticationCode> > supportedMessageAuthenticationCodes() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, QVector<Sailfish::Crypto::CryptoManager::KeyDerivationFunction> > supportedKeyDerivationFunctions() const Q_DECL_OVERRIDE; QMap<Sailfish::Crypto::CryptoManager::Algorithm, Sailfish::Crypto::CryptoManager::Operations> supportedOperations() const Q_DECL_OVERRIDE; Sailfish::Crypto::Result generateRandomData( quint64 callerIdent, const QString &csprngEngineName, quint64 numberBytes, QByteArray *randomData) Q_DECL_OVERRIDE; Sailfish::Crypto::Result seedRandomDataGenerator( quint64 callerIdent, const QString &csprngEngineName, const QByteArray &seedData, double entropyEstimate) Q_DECL_OVERRIDE; Sailfish::Crypto::Result validateCertificateChain( const QVector<Sailfish::Crypto::Certificate> &chain, bool *validated) Q_DECL_OVERRIDE; Sailfish::Crypto::Result generateKey( const Sailfish::Crypto::Key &keyTemplate, const Sailfish::Crypto::KeyPairGenerationParameters &kpgParams, const Sailfish::Crypto::KeyDerivationParameters &skdfParams, Sailfish::Crypto::Key *key) Q_DECL_OVERRIDE; Sailfish::Crypto::Result generateAndStoreKey( const Sailfish::Crypto::Key &keyTemplate, const Sailfish::Crypto::KeyPairGenerationParameters &kpgParams, const Sailfish::Crypto::KeyDerivationParameters &skdfParams, Sailfish::Crypto::Key *keyMetadata) Q_DECL_OVERRIDE; Sailfish::Crypto::Result storedKey( const Sailfish::Crypto::Key::Identifier &identifier, Sailfish::Crypto::Key::Components keyComponents, Sailfish::Crypto::Key *key) Q_DECL_OVERRIDE; Sailfish::Crypto::Result storedKeyIdentifiers( QVector<Sailfish::Crypto::Key::Identifier> *identifiers) Q_DECL_OVERRIDE; 
Sailfish::Crypto::Result sign( const QByteArray &data, const Sailfish::Crypto::Key &key, Sailfish::Crypto::CryptoManager::SignaturePadding padding, Sailfish::Crypto::CryptoManager::DigestFunction digest, QByteArray *signature) Q_DECL_OVERRIDE; Sailfish::Crypto::Result verify( const QByteArray &signature, const QByteArray &data, const Sailfish::Crypto::Key &key, Sailfish::Crypto::CryptoManager::SignaturePadding padding, Sailfish::Crypto::CryptoManager::DigestFunction digestFunction, bool *verified) Q_DECL_OVERRIDE; Sailfish::Crypto::Result encrypt( const QByteArray &data, const QByteArray &iv, const Sailfish::Crypto::Key &key, Sailfish::Crypto::CryptoManager::BlockMode blockMode, Sailfish::Crypto::CryptoManager::EncryptionPadding padding, QByteArray *encrypted) Q_DECL_OVERRIDE; Sailfish::Crypto::Result decrypt( const QByteArray &data, const QByteArray &iv, const Sailfish::Crypto::Key &key, // or keyreference, i.e. Key(keyName) Sailfish::Crypto::CryptoManager::BlockMode blockMode, Sailfish::Crypto::CryptoManager::EncryptionPadding padding, QByteArray *decrypted) Q_DECL_OVERRIDE; Sailfish::Crypto::Result calculateDigest( const QByteArray &data, Sailfish::Crypto::CryptoManager::SignaturePadding padding, Sailfish::Crypto::CryptoManager::DigestFunction digestFunction, QByteArray *digest) Q_DECL_OVERRIDE; Sailfish::Crypto::Result initialiseCipherSession( quint64 clientId, const QByteArray &iv, const Sailfish::Crypto::Key &key, // or keyreference, i.e. 
Key(keyName) Sailfish::Crypto::CryptoManager::Operation operation, Sailfish::Crypto::CryptoManager::BlockMode blockMode, Sailfish::Crypto::CryptoManager::EncryptionPadding encryptionPadding, Sailfish::Crypto::CryptoManager::SignaturePadding signaturePadding, Sailfish::Crypto::CryptoManager::DigestFunction digest, quint32 *cipherSessionToken, QByteArray *generatedIV) Q_DECL_OVERRIDE; Sailfish::Crypto::Result updateCipherSessionAuthentication( quint64 clientId, const QByteArray &authenticationData, quint32 cipherSessionToken) Q_DECL_OVERRIDE; Sailfish::Crypto::Result updateCipherSession( quint64 clientId, const QByteArray &data, quint32 cipherSessionToken, QByteArray *generatedData) Q_DECL_OVERRIDE; Sailfish::Crypto::Result finaliseCipherSession( quint64 clientId, const QByteArray &data, quint32 cipherSessionToken, QByteArray *generatedData, bool *verified) Q_DECL_OVERRIDE; bool supportsLocking() const Q_DECL_OVERRIDE; bool isLocked() const Q_DECL_OVERRIDE; bool lock() Q_DECL_OVERRIDE; bool unlock(const QByteArray &lockCode) Q_DECL_OVERRIDE; bool setLockCode(const QByteArray &oldLockCode, const QByteArray &newLockCode) Q_DECL_OVERRIDE; private: Sailfish::Crypto::Key getFullKey(const Sailfish::Crypto::Key &key); QMap<quint64, QMap<quint32, CipherSessionData*> > m_cipherSessions; // clientId to token to data struct CipherSessionLookup { CipherSessionData *csd = 0; quint32 sessionToken = 0; quint64 clientId = 0; }; QMap<QTimer *, CipherSessionLookup> m_cipherSessionTimeouts; std::unique_ptr<LibLoader> loader; }; } // namespace Plugins } // namespace Daemon } // namespace Crypto } // namespace Sailfish #endif // SAILFISHCRYPTO_PLUGIN_CRYPTOKI_H
Foreign Support, Miscalculation, and Conflict Escalation: Iraqi Kurdish Self-Determination in Perspective Abstract How does foreign support for separatists influence conflict escalation with the central government? What types of miscalculations over foreign support encourage separatists to take risky gambles and lead to surprising losses? Existing research indicates that armed non-state actors may initiate or escalate conflict with the central government when existing or anticipated gains in foreign support favorably alter their likelihood of success. This article sheds light on an additional but equally important catalyst for conflict escalation: rebel, or separatist, beliefs about net losses of foreign support in their gamble for more autonomy. Even if groups have perfect information on potential gains in foreign support, miscalculations over potential losses can also lead to risky gambles. To illustrate the distinction between separatist miscalculations over gains and losses in foreign support, this paper compares two episodes of Iraqi Kurdish escalation: the 1991 uprising against Saddam Hussein in the aftermath of the Gulf War, and the 2017 independence referendum after three years of war against the Islamic State. While the former case is a classic example of escalation based on miscalculating gains in foreign support, the latter case represents a miscalculation over potential losses of foreign support in response to the vote.
/** * Copyright 2019 rpc0027 * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package es.ubu.alu.model; import java.io.Serializable; import javax.faces.bean.ManagedBean; import javax.faces.bean.SessionScoped; /** * Bean used to model MSG commands. * * @author RPC * @version 1.0 */ @ManagedBean(name = "msgCommandBean") @SessionScoped public class MSGCommandBean implements Serializable { /** Randomly generated identifier. */ private static final long serialVersionUID = 1L + (long) (Math.random() * (100000000000L - 1L)); /** * Name of the command to show a string in one of the lines of the LCD. */ public static final String COMMAND_MSG = "msg"; /** * Character used as a separator between the command and its arguments. */ private static final String SEPARATOR = ":"; /** * Command for 1st row. */ private String command1st = COMMAND_MSG + SEPARATOR + "0" + SEPARATOR; /** * Command for 2nd row. */ private String command2nd = COMMAND_MSG + SEPARATOR + "1" + SEPARATOR; /** Default constructor. */ public MSGCommandBean() { } /** * @return the command1st */ public String getCommand1st() { return command1st; } /** * @param command1st the command1st to set */ public void setCommand1st(String command1st) { this.command1st = command1st; } /** * @return the command2nd */ public String getCommand2nd() { return command2nd; } /** * @param command2nd the command2nd to set */ public void setCommand2nd(String command2nd) { this.command2nd = command2nd; } }
A mini-documentary. I need to start off by saying that I didn’t do a mini-documentary about the cherry harvest process because I felt the world had a need for such information. I did it as an exercise, as practice using my video camera and Final Cut Pro. I wanted to see if I had the ability to put together a documentary. This 5-minute video is the result. This was my second summer experiencing the harvest process at one of the orchards I dry. The Schroeders are great people, friendly and a pleasure to work with. I dried their orchard four times this year. Being present for part of the harvest gave me an opportunity to see whether the work I’d done made a difference. It did. The Schroeders were kind enough to let me walk the orchard and packing shed area with my Sony Handycam for a total of about 8 hours over two days. I also stopped in around sunset one evening to take some of the establishing shots with the soft “golden hour” light. They and their workers explained the process to me. I shot a total of about an hour of video footage. That was barely enough. I still wish I’d gotten better shots of some parts of the process. I found the cherry harvest fascinating — and I think you might, too. We’re all spoiled — we go into the supermarket in the summertime and find cherries waiting in the produce section, already bagged and ready to take home. But how many of us consider how the cherries get from the tree to the supermarket? It’s a complex process that requires hundreds of people and specialized equipment. This video shows part of the story, following the cherries from the trees in one orchard as they’re picked, gathered, chilled, and packed into a refrigerator truck. Take a moment to see for yourself: Done? Not bad for a first serious effort. From this point, the cherries go to the processing plant in Wenatchee, WA. 
They’re run through more cold water and lots of custom equipment before they’re picked through by several lines of people who toss out the bad ones. Then they’re sorted by size, run through more clean water, and eventually bagged and boxed up by even more people for shipment. I was fortunate enough to get a tour of that facility (and five more pounds of fresh cherries) a few days after I shot the video for this one. I may do a video of that facility and its process next year. The amazing part of all this: the cherries are normally ready to ship to stores the same day they are picked. More amazing stuff: the cherries I saw at the packing facility were headed for Korea and would be there within 18 hours of my tour. Whoa. The point of all this is that there’s a lot that goes into getting fresh food into stores. Cherries are unlike many fruits — they have a very short shelf life. With proper care, they might last a week. That’s why everything is rushed and why so much effort is put into keeping them cool as soon as they’re picked. I hope you enjoyed this. Comments are welcome.
// Copyright 2020 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. #ifndef FUSION_POWER_VIDEO_H_ #define FUSION_POWER_VIDEO_H_ #include <stdint.h> #include <condition_variable> #include <functional> #include <mutex> #include <queue> #include <thread> #include <vector> namespace fpvc { // Helper function to convert 16-bit image frames to/from the raw file format. // Converts 16-bit frame back to the original raw file format. void UnextractFrame(const uint16_t* img, size_t xsize, size_t ysize, int shift, bool big_endian, uint8_t* out); // Streaming decoder class StreamingDecoder { public: /* Decodes frames in a streaming fashion. Appends the given bytes to the input buffer. Calls the callback function for all frames that could be decoded so far. The payload is an optional parameter to pass on to the callback. 
*/ void Decode(const uint8_t* bytes, size_t size, std::function<void(bool ok, uint16_t* frame, size_t xsize, size_t ysize, void* payload)> callback, void* payload = nullptr); private: size_t xsize; size_t ysize; size_t id = 0; std::vector<uint16_t> delta_frame; std::vector<uint8_t> buffer; }; enum FrameState { EMPTY = 0, RAW = 1, PREVIEW_GENERATED = 2, DELTA_PREDICTED = 4, CG_PREDICTED = 8, COMPRESSED = 16, }; enum FrameFlags { NONE = 0, USE_DELTA = 1, USE_CG = 2, NO_LOW_BYTES = 4, }; class Frame { size_t xsize_ = 0; size_t ysize_ = 0; size_t size_ = 0; uint8_t flags_ = FrameFlags::NONE; // FrameFlags uint8_t state_ = FrameState::EMPTY; // FrameState int64_t timestamp_; protected: std::vector<uint8_t> preview_; std::vector<uint8_t> high_; std::vector<uint8_t> low_; public: static Frame EMPTY; size_t xsize() const { return xsize_; } size_t ysize() const { return ysize_; } uint8_t flags() const { return flags_; } uint8_t state() const { return state_; } int64_t timestamp() const { return timestamp_; } const std::vector<uint8_t> &high() { return high_; } const std::vector<uint8_t> &low() { return low_; } const std::vector<uint8_t> &preview() { return preview_; } std::vector<uint8_t> &&MoveOutHigh() { return std::move(high_); } std::vector<uint8_t> &&MoveOutLow() { return std::move(low_); } std::vector<uint8_t> &&MoveOutPreview() { return std::move(preview_); } Frame(size_t xsize = 0, size_t ysize = 0, const uint16_t* image = nullptr, int shift_to_left_align = 0, bool big_endian = false, int64_t timestamp = -1); Frame(size_t xsize, size_t ysize, const uint8_t* image, int64_t timestamp = -1); Frame(size_t xsize, size_t ysize, uint8_t flags, uint8_t state, std::vector<uint8_t> &&high, std::vector<uint8_t> &&low, std::vector<uint8_t> &&preview, int64_t timestamp = -1); static size_t MaxCompressedPlaneSize(size_t xsize, size_t ysize); static size_t MaxCompressedPreviewSize(size_t xsize, size_t ysize); size_t MaxCompressedPlaneSize(); size_t MaxCompressedPreviewSize(); void 
Compress(Frame &delta_frame = EMPTY); void Uncompress(Frame &delta_frame = EMPTY); void Predict(Frame &delta_frame = EMPTY); void CompressPredicted(size_t* encoded_high_size, uint8_t* encoded_high_buffer, size_t* encoded_low_size, uint8_t* encoded_low_buffer, size_t* encoded_preview_size, uint8_t* encoded_preview_buffer, bool parallel = true); void OutputCore(std::vector<uint8_t> *out); void OutputFull(std::vector<uint8_t> *out); private: void GeneratePreview(); void OptionallyApplyDeltaPrediction(Frame &delta_frame); void OptionallyApplyClampedGradientPrediction(); void ApplyBrotliCompression(); void ApplyBrotliCompression(size_t* encoded_high_size, uint8_t* encoded_high_buffer, size_t* encoded_low_size, uint8_t* encoded_low_buffer, size_t* encoded_preview_size, uint8_t* encoded_preview_buffer, bool parallel = true); void UnapplyBrotliCompression(); void OptionallyUnapplyDeltaPrediction(Frame &delta_frame); void OptionallyUnapplyClampedGradientPrediction(); }; // Random access decoder: requires random access to the entire data file, // can decode any frame in any order. class RandomAccessDecoder { public: // Parses the header and footer, must be called once with the full data // before using DecodeFrame, xsize, ysize or numframes. bool Init(const uint8_t* data, size_t size); // Decodes the frame with the given index. The index must be smaller than // numframes. The output frame must have xsize * ysize values. bool DecodeFrame(size_t index, uint16_t* frame) const; bool DecodePreview(size_t index, uint8_t* preview) const; size_t xsize() const { return xsize_; } size_t ysize() const { return ysize_; } // Returns the dimensions of preview images size_t preview_xsize() const { return xsize_ / 4; } size_t preview_ysize() const { return ysize_ / 4; } // Returns amount of frames in the full file. 
size_t numframes() const { return frame_offsets.size(); } private: size_t xsize_ = 0; size_t ysize_ = 0; std::vector<uint16_t> delta_frame; std::vector<size_t> frame_offsets; const uint8_t* data_ = nullptr; size_t size_ = 0; }; // Multithreaded encoder. class Encoder { public: // Uses num_threads worker threads, or disables multithreading if num_threads // is 0. Encoder(size_t num_threads = 8, int shift_to_left_align = 0, bool big_endian = false); // The payload is an optional argument to pass to the callback. typedef std::function<void(const uint8_t* compressed, size_t size, void* payload)> Callback; /* Initializes before the first frame, and writes the header bytes by outputting them to the callback. The delta_frame must have xsize * ysize pixels. */ void Init(const uint16_t* delta_frame, size_t xsize, size_t ysize, Callback callback, void* payload); /* Queues a single 16-bit grayscale frame for encoding. The frame should be in the format extracted from the raw data using ExtractFrame. Calls the callback function when finished compressing, asynchronously but guarded and guaranteed in the correct order. The payload can optionally be used to bind an extra argument to pass to the callback. User must manage memory of img: it must exist until the callback for this frame is called. There can exist up to MaxQueued() tasks at the same time so at least that many separate img memory buffers have to exist at the same time. Init must be called before compressing the first frame, and Finish must be called after the last frame was queued.*/ void CompressFrame(const uint16_t* img, Callback callback, void* payload); /* Waits and finishes all threads, and writes the footer bytes by outputting them to the callback. */ void Finish(Callback callback, void* payload); /* Returns the max amount of frames that can be queued and/or being processed at the same time for multithreaded processing. This could be larger than the amount of worker threads. 
*/ size_t MaxQueued() const; private: struct Task { const uint16_t* frame; size_t id; Callback callback; void* payload; }; void RunThread(); std::vector<uint8_t> RunTask(const Task& task); // Finalize a task, unlike RunTask this is guaranteed to run in sequential // order and guarded. void FinishTask(const Task& task, std::vector<uint8_t>* compressed); void WriteFrameIndex(std::vector<uint8_t>* compressed) const; std::vector<std::thread*> threads; std::mutex m; std::condition_variable cv_in; // for incoming frames std::queue<Task> q_in; std::condition_variable cv_out; // for outputting compressed frames std::queue<Task> q_out; std::condition_variable cv_main; // for the main thread bool finish = false; size_t id = 0; // Unique frame id. size_t xsize_; size_t ysize_; Frame delta_frame_; std::vector<size_t> frame_offsets; size_t bytes_written = 0; int shift_to_left_align_ = 0; bool big_endian_ = false; }; } // namespace fpvc #endif // FUSION_POWER_VIDEO_H_
/* * @adonisjs/ace * * (c) <NAME> <<EMAIL>> * * For the full copyright and license information, please view the LICENSE * file that was distributed with this source code. */ import { snakeCase } from 'change-case' import { CommandArg, CommandFlag, KernelContract, CommandSettings, CommandContract } from '@tensei/common' /** * Abstract base class other classes must extend */ export abstract class BaseCommand implements CommandContract { options = { async handler() {}, async completed() { return false }, async prepare() {} } public argValues = {} public flagValues = {} /** * Reference to the exit handler */ protected exitHandler?: () => void | Promise<void> public kernel: KernelContract = {} as any /** * Command arguments */ public args: CommandArg[] = [] /** * Command aliases */ public aliases: string[] = [] /** * Command flags */ public flags: CommandFlag[] = [] /** * Command name. The command will be registered using this name only. Make * sure there aren't any spaces inside the command name. */ public name: string = '' public commandName: string = '' /** * The description of the command displayed on the help screen. * A good command will always have some description. */ public description: string = '' /** * Any settings a command wants to have. Helpful for third party * tools to read the settings in lifecycle hooks and make * certain decisions */ public settings: CommandSettings = { loadApp: false, stayAlive: false, environment: 'dev' } public describe(description: string) { this.description = description return this } public stayAlive() { this.settings.stayAlive = true return this } public setName(name: string) { this.commandName = name this.name = name return this } public arg(arg: Omit<CommandArg, 'propertyName'>) { this.args.push({ type: arg.type || 'string', propertyName: arg.name, name: arg.name, required: arg.required === false ? 
false : true }) return this } public flag(flag: Omit<CommandFlag, 'propertyName'>) { this.flags.push({ type: flag.type || 'string', propertyName: flag.name, name: flag.name || flag.name }) return this } /** * Define an argument directly on the command without using the decorator */ public $addArgument(options: Partial<CommandArg>) { if (!options.propertyName) { const { Exception } = require('@poppinss/utils') throw new Exception( '"propertyName" is required to register a command argument', 500, 'E_MISSING_ARGUMENT_NAME' ) } const arg: CommandArg = Object.assign( { type: options.type || 'string', propertyName: options.propertyName, name: options.name || options.propertyName, required: options.required === false ? false : true }, options ) this.args.push(arg) } /** * Define a flag directly on the command without using the decorator */ public $addFlag(options: Partial<CommandFlag>) { if (!options.propertyName) { const { Exception } = require('@poppinss/utils') throw new Exception( '"propertyName" is required to register command flag', 500, 'E_MISSING_FLAG_NAME' ) } const flag: CommandFlag = Object.assign( { name: options.name || snakeCase(options.propertyName).replace(/_/g, '-'), propertyName: options.propertyName, type: options.type || 'boolean' }, options ) this.flags.push(flag) } /** * Reference to cli ui */ public ui = (() => { const { instantiate } = require('@poppinss/cliui/build/api') return instantiate(process.env.NODE_ENV === 'test') })() /** * Parsed options on the command. They only exist when the command * is executed via kernel. */ public parsed?: import('getopts').ParsedOptions /** * The prompt for the command */ public prompt: | import('@poppinss/prompts').Prompt | import('@poppinss/prompts').FakePrompt = (() => { const { FakePrompt, Prompt } = require('@poppinss/prompts') return process.env.NODE_ENV === 'test' ? 
new FakePrompt() : new Prompt() })() /** * Returns the instance of logger to log messages */ public logger = this.ui.logger /** * Reference to the colors */ public colors: ReturnType< typeof import('@poppinss/cliui/build/api')['instantiate'] >['logger']['colors'] = this.logger.colors /** * Error raised by the command */ public error?: any /** * Command exit code */ public exitCode?: number public run(callback: () => Promise<any>) { this.options.handler = callback.bind(this) return this } public prepare(callback: () => Promise<any>) { this.options.prepare = callback return this } public completed(callback: () => Promise<boolean>) { this.options.completed = callback return this } /** * Execute the command */ public async exec() { const run = this.options.handler let commandResult: any /** * Run command and catch any raised exceptions */ try { /** * Run prepare method when exists on the command instance */ if (typeof this.options.prepare === 'function') { await this.options.prepare() } /** * Execute the command handle or run method */ commandResult = await run() // Todo: Call command. } catch (error) { this.error = error } let errorHandled = false /** * Run completed method when exists */ if (typeof this.options.completed === 'function') { errorHandled = await this.options.completed() } /** * Throw error when error exists and the completed method didn't * handled it */ if (this.error && !errorHandled) { throw this.error } return commandResult } /** * Register an onExit handler */ public onExit(handler: () => void | Promise<void>) { this.exitHandler = handler return this } /** * Trigger exit */ public async exit() { if (typeof this.exitHandler === 'function') { await this.exitHandler() } await this.kernel.exit(this) } /** * Must be defined by the parent class */ // @depreciated public async handle?(...args: any[]): Promise<any> }
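A compact sketch of the fluent style this base class enables. `MiniCommand` below is a hypothetical, stripped-down stand-in (no kernel, UI, flags, or lifecycle hooks) used only to show how `setName`/`describe`/`arg`/`run`/`exec` chain together; it is not part of the library:

```typescript
class MiniCommand {
  public name = "";
  public description = "";
  public args: { name: string; required: boolean }[] = [];
  private handler: () => Promise<any> = async () => {};

  setName(name: string) { this.name = name; return this; }
  describe(description: string) { this.description = description; return this; }
  arg(arg: { name: string; required?: boolean }) {
    // Mirrors BaseCommand.arg: required defaults to true unless explicitly false.
    this.args.push({ name: arg.name, required: arg.required !== false });
    return this;
  }
  run(callback: (this: MiniCommand) => Promise<any>) {
    // Bind so the handler sees the command instance as `this`, as in the source.
    this.handler = callback.bind(this);
    return this;
  }
  async exec() { return this.handler(); }
}

const greet = new MiniCommand()
  .setName("greet")
  .describe("Greet a user by name")
  .arg({ name: "user" })
  .run(async function (this: MiniCommand) {
    return `running ${this.name} with args: ${this.args.map(a => a.name).join(", ")}`;
  });

greet.exec().then(result => console.log(result)); // prints "running greet with args: user"
```

Each builder method returns `this`, which is what makes the chain work; `exec` then drives the stored handler the same way the real `exec` drives `options.handler` after `prepare`.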
package com.ch.component; import java.io.File; import java.io.IOException; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; import org.springframework.stereotype.Component; import download.Path; @Component public class HttpCellReader { protected String text; public void readHtmlCell(Path p,String para) throws IOException { readHtmlCell(p.getM(),para); } public void readHtmlCell(String url,String para) throws IOException { // url = "http://basic.10jqka.com.cn/600887/equity.html#astockchange"; Document doc; if(url.startsWith("http")) doc = Jsoup.connect(url).get(); else{ File f = new File(url); doc = Jsoup.parse(f, "gbk", ""); } text = doc.select(para).text(); } public String getText() { return text; } }
//! I2C pin implementations use super::pads::{gpio_ad_b0::*, gpio_ad_b1::*, gpio_sd_b0::*}; use crate::{ consts::*, lpi2c::{Pin, Scl, Sda}, Daisy, }; // // I2C1 // i2c!(module: U1, alt: 3, pad: GPIO_AD_B1_00, signal: Scl, daisy: DAISY_LPI2C1_SCL_GPIO_AD_B1_00); i2c!(module: U1, alt: 3, pad: GPIO_AD_B1_01, signal: Sda, daisy: DAISY_LPI2C1_SDA_GPIO_AD_B1_01); // // I2C2 // // TODO // // I2C3 // i2c!(module: U3, alt: 1, pad: GPIO_AD_B1_07, signal: Scl, daisy: DAISY_LPI2C3_SCL_GPIO_AD_B1_07); i2c!(module: U3, alt: 1, pad: GPIO_AD_B1_06, signal: Sda, daisy: DAISY_LPI2C3_SDA_GPIO_AD_B1_06); i2c!(module: U3, alt: 2, pad: GPIO_SD_B0_00, signal: Scl, daisy: DAISY_LPI2C3_SCL_GPIO_SD_B0_00); i2c!(module: U3, alt: 2, pad: GPIO_SD_B0_01, signal: Sda, daisy: DAISY_LPI2C3_SDA_GPIO_SD_B0_01); // // I2C4 // i2c!(module: U4, alt: 0, pad: GPIO_AD_B0_12, signal: Scl, daisy: DAISY_LPI2C4_SCL_GPIO_AD_B0_12); i2c!(module: U4, alt: 0, pad: GPIO_AD_B0_13, signal: Sda, daisy: DAISY_LPI2C4_SDA_GPIO_AD_B0_13); /// Auto-generated Daisy constants mod daisy { #![allow(unused)] use super::Daisy; pub const DAISY_LPI2C1_SCL_GPIO_SD_B1_04: Daisy = Daisy::new(0x401f84cc as *mut u32, 0); pub const DAISY_LPI2C1_SCL_GPIO_AD_B1_00: Daisy = Daisy::new(0x401f84cc as *mut u32, 1); pub const DAISY_LPI2C1_SDA_GPIO_SD_B1_05: Daisy = Daisy::new(0x401f84d0 as *mut u32, 0); pub const DAISY_LPI2C1_SDA_GPIO_AD_B1_01: Daisy = Daisy::new(0x401f84d0 as *mut u32, 1); pub const DAISY_LPI2C2_SCL_GPIO_SD_B1_11: Daisy = Daisy::new(0x401f84d4 as *mut u32, 0); pub const DAISY_LPI2C2_SCL_GPIO_B0_04: Daisy = Daisy::new(0x401f84d4 as *mut u32, 1); pub const DAISY_LPI2C2_SDA_GPIO_SD_B1_10: Daisy = Daisy::new(0x401f84d8 as *mut u32, 0); pub const DAISY_LPI2C2_SDA_GPIO_B0_05: Daisy = Daisy::new(0x401f84d8 as *mut u32, 1); pub const DAISY_LPI2C3_SCL_GPIO_EMC_22: Daisy = Daisy::new(0x401f84dc as *mut u32, 0); pub const DAISY_LPI2C3_SCL_GPIO_SD_B0_00: Daisy = Daisy::new(0x401f84dc as *mut u32, 1); 
pub const DAISY_LPI2C3_SCL_GPIO_AD_B1_07: Daisy = Daisy::new(0x401f84dc as *mut u32, 2); pub const DAISY_LPI2C3_SDA_GPIO_EMC_21: Daisy = Daisy::new(0x401f84e0 as *mut u32, 0); pub const DAISY_LPI2C3_SDA_GPIO_SD_B0_01: Daisy = Daisy::new(0x401f84e0 as *mut u32, 1); pub const DAISY_LPI2C3_SDA_GPIO_AD_B1_06: Daisy = Daisy::new(0x401f84e0 as *mut u32, 2); pub const DAISY_LPI2C4_SCL_GPIO_EMC_12: Daisy = Daisy::new(0x401f84e4 as *mut u32, 0); pub const DAISY_LPI2C4_SCL_GPIO_AD_B0_12: Daisy = Daisy::new(0x401f84e4 as *mut u32, 1); pub const DAISY_LPI2C4_SDA_GPIO_EMC_11: Daisy = Daisy::new(0x401f84e8 as *mut u32, 0); pub const DAISY_LPI2C4_SDA_GPIO_AD_B0_13: Daisy = Daisy::new(0x401f84e8 as *mut u32, 1); } use daisy::*;
Morocco raised its terror alert to its highest level Friday, citing a "serious threat of a terrorist act" and ordering stepped-up security nationwide, the Interior Ministry said. The North African nation's top police, intelligence and security officials met Friday to discuss terror threats, and decided to raise the alert level to "maximum," the state news agency MAP said, citing a ministry statement. The "maximum" alert level "indicates a serious threat of a terrorist act and demands extreme mobilization by the bodies concerned," the statement said, according to MAP. No details about the threat were reported. A security official confirmed to The Associated Press that the meeting took place and the alert level was raised, but would give no other details because he was not authorized to speak to the media. French President Nicolas Sarkozy called off a trip to Morocco scheduled for next week. The trip was put off at the request of Moroccan authorities "for scheduling reasons," Sarkozy spokesman David Martinon said Friday. No mention was made of security risks. Moroccan authorities last raised security alert levels in April after suicide bombings in Casablanca and larger suicide attacks in neighboring Algeria. Authorities also raised security alert levels in February. This time, Interior Minister Mohamed Benaissa, citing "viable intelligence" about terrorist threats, urged security services to heighten their vigilance, MAP said. The minister also announced a long-term plan for boosting anti-terrorist agencies and the number of personnel. Suicide bombings in 2003 in Casablanca killed 45 people and stunned this relatively moderate Muslim country, a popular vacation spot. Since then, Moroccan authorities have cracked down on suspected terrorist activity, making regular arrests. In March, a suicide bomber blew himself up in a Casablanca cyber cafe, and investigators later uncovered an alleged plot targeting tourist sites across Morocco. 
Police later cornered four suspects, shooting one dead and prompting the other three to blow themselves up to avoid capture. The blasts killed a policeman and injured 21 other people. The government has downplayed potential links to international terrorist networks, but security analysts place the recent Casablanca violence within a wave of resurgent Islamic extremism in North Africa.
import functools
import logging
from typing import Callable

import botocore.exceptions

logger = logging.getLogger(__name__)

# Type aliases inferred from the signature below; in the original project they
# are presumably defined alongside this function.
MetricPutter = Callable[[float], None]
PutMetricDecorator = Callable[[Callable[..., float]], Callable[..., float]]


def make_metric_putter_decorator(metric_putter: MetricPutter) -> PutMetricDecorator:
    def decorator(f):
        @functools.wraps(f)
        def put_metric_returner_wrapper(*args, **kwargs) -> float:
            value = f(*args, **kwargs)
            try:
                metric_putter(value)
            except botocore.exceptions.ClientError as ce:
                # A failed metric publication is logged but never propagates
                # to the caller; the computed value is always returned.
                logger.error(ce)
            return value

        return put_metric_returner_wrapper

    return decorator
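The decorator factory above wraps a function so that its return value is both published as a metric and passed through unchanged. A minimal usage sketch follows; the CloudWatch-backed `metric_putter` is replaced with a hypothetical in-memory recorder, and (unlike the real version, which swallows only `botocore` `ClientError`) this simplified stand-in ignores any exception from the putter:

```python
import functools


def make_decorator(metric_putter):
    # Simplified stand-in for make_metric_putter_decorator: any exception
    # raised by the putter is swallowed so the caller is never affected.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            value = f(*args, **kwargs)
            try:
                metric_putter(value)
            except Exception:
                pass
            return value
        return wrapper
    return decorator


recorded = []


@make_decorator(recorded.append)
def compute_latency():
    return 12.5


result = compute_latency()
# The wrapped function still returns its value, the metric is recorded as a
# side effect, and functools.wraps preserves the original function's metadata.
```

The key design point is that metric publication is a best-effort side channel: a failure to publish never changes the wrapped function's observable behavior.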
Nuclear magnetic resonance assessment of controlled hydrolysis of the capsular polysaccharides of Streptococcus pneumoniae serotypes 19A and 19F ABSTRACT Several glycoconjugate-based vaccines are now included in the infant vaccination schedules of most countries. In an initial stage, the polysaccharides are modified under controlled conditions. Assessing which polysaccharide regions remain available for the conjugation process after hydrolysis is crucial to predict the conjugation results for achieving the final active pharmaceutical ingredient. Two hydrolyzed capsular polysaccharides were analyzed by a heteronuclear single-quantum correlation experiment, adjusted for assessing 3JP-H couplings to check the position of the phosphate mono-ester residue after the hydrolysis process. All evaluated polysaccharides were shown to retain an available anomeric position for subsequent conjugation. The adjusted spectroscopic experiment proved effective for studying the hydrolysis of phosphate di-esters linked to capsular polysaccharides.
When she was a student at Allentown Central Catholic High School, everyone told Christine Taylor she looked like Marcia Brady of "The Brady Bunch," the popular television show. "All of my friends -- dating back to grammar school -- they knew that I was such a Brady fan, and they'd say, 'You look like Marcia,' " Taylor said. Taylor, a Wescosville native living in Los Angeles, is now Marcia Brady in "The Brady Bunch Movie," which opens nationwide on Friday. The movie stars Shelley Long as Carol Brady and Gary Cole as Mike Brady. Also starring, in addition to Taylor, as Brady siblings: Christopher Daniel Barnes (Greg), Paul Sutera (Peter), Jennifer Elise Cox (Jan), Jesse Lee (Bobby) and Olivia Hack (Cindy). Henriette Mantel plays Alice, the Brady maid. The movie is directed by Betty Thomas, who won an Emmy for playing Lucy Bates on "Hill Street Blues" and Emmy and CableACE awards for directing the comedy, "Dream On." Now in syndication, "The Brady Bunch" television series has been the subject of several trivia books. It even inspired the hit stage production, "The Real Live Brady Bunch," which used scripts from the TV series. After opening in Chicago, productions have toured nationally and internationally. Taylor took over the role of Marcia Brady in "The Real Live Brady Bunch" for several months toward the end of the show's run in 1992 at Westwood Playhouse in Los Angeles. "When I first saw the young people that were cast for the film, I was amazed," said Sherwood Schwartz, who produced "The Brady Bunch," an ABC hit from 1969-74, and produced the movie along with his son Lloyd and David Kirkpatrick. Alan Ladd Jr. is executive producer. "I was especially taken aback by Christine Taylor, who plays Marcia," Schwartz continued. "Not only does she look like the original who was played by Maureen McCormick, she sounds just like her. It's uncanny."
Taylor's role in "The Brady Bunch Movie" is arguably the biggest motion picture role ever for a Lehigh Valley native, and certainly the biggest for her. Taylor readily agreed: "Absolutely. Since I've been in L.A., I've managed to work consistently. But this is the first part that's getting so much attention." Entertainment Weekly ran a photo of the Brady movie family, awarding Taylor "The Dead Ringer" award as best look-alike to those in the original television series. Taylor has also been included in articles in Premiere and US magazines and was chatted up on "The Jon Stewart Show" Thursday night. However, landing the part didn't come easy for Taylor, who starred in Nickelodeon's "Hey Dude" television series and began her acting career at Civic Little Theatre in Allentown. To become Marcia Brady in the Paramount Pictures release, Taylor had to survive California calamities: earthquakes, floods, riots, fires, a carjacking -- not to mention callbacks. Taylor, 23, is a daughter of Joan and Albert "Skip" Taylor III. Dad owns Protect Alarms, Allentown. Mom is a housewife. A brother, Brian, 21, attends the University of Virginia. Taylor's interest in acting began at St. Thomas More Church, Salisbury Township. JoAnn Wilchek, a fifth-grade teacher at St. Thomas and vice president of the Civic Little Theatre board, directed her in CLT shows, including "The Wizard of Oz" and "Fiddler on the Roof." Lois Miller, owner of Star Talent Management of Allentown, launched Taylor on her professional career, acting as her personal manager. "I was doing lots of community theater in high school," Taylor said. "Lois (Miller) was the one who, after a show, took me aside and asked me if I was interested in doing this professionally. At the time, she meant commercial (television ads) work." Taylor demurred, citing her busy scholastic and extracurricular schedule, including her participation on the tennis team. "That summer they were looking for the new Burger King girl.
And I talked it over with my parents." Taylor's parents, not expecting her to land the commercial, nonetheless thought the audition would be a good experience for her, Taylor remembered. "I hadn't even been into New York that much," Taylor said. "We went in and sort of made it a day, with shopping and lunch, and I ended up getting the part, and from then it just snowballed and I ended up getting more commercials." Taylor did the "Milk -- it does a body good" ad for the National Dairy Board, a Kodak commercial with Bill Cosby, three more Burger King commercials and a McDonald's spot. Soon, Taylor said, "a couple of the agents started submitting me for, 'quote, unquote,' more legitimate stuff. My first audition was for a soap opera. And my second audition was for 'Hey Dude' and I ended up getting it." Taylor was cast for "Hey Dude" during her senior year. After doing the pilot in November, the show was picked up in March. "We had to sit down with the guidance counselor because they ('Hey Dude' producers) wanted me to leave right away. And Central was so great with working everything out. I got to go home for my graduation." Taylor was accepted at New York University and planned to attend, but "Hey Dude" was renewed. The show continued for two years. Taylor completed 65 episodes of "Hey Dude," which airs in syndication on Nickelodeon. "It was Nickelodeon's first self-produced sitcom. And I think they sort of think of it as their baby so they don't want to take it off. It was good luck for them," Taylor recalled, not without a trace of fondness. After finishing "Hey Dude," Taylor decided to stay in L.A., landing guest appearances on "Life Goes On," "Saved by the Bell," "Blossom" and the final episode of "Dallas." "This (getting the Marcia Brady role) goes back all the way to doing 'The Real Live Brady Bunch.' The people who cast me -- Faith and Jill Soloway, who are sisters -- created 'The Real Live Brady Bunch.'
In doing so they had to be in touch with Sherwood and Lloyd Schwartz, creators of the sitcom. "By the time the show got to Los Angeles, Sherwood and his son Lloyd had written a script and were talking with Paramount. When they met me they said, 'Oh, my gosh! If you're playing the character true to age then this is perfect.' The Chicago people (in the live show) were older and just put on wigs. I was the youngest of the whole cast. "I had been such a Brady fan, so that before I even auditioned I started watching tapes and studying Maureen McCormick's mannerisms, which are very specific." Taylor, who has a copy of "The Brady Bunch" CD, Barry Williams' "Growing Up Brady" book and every book published about the Bradys, said, "I will admit that I was the person who read them before I got the part. It was fun to know what was going on in their minds and growing up as teen idols. "One day, out of the blue, I got a call on my answering machine from Sherwood Schwartz, which was really overwhelming to me since I was such a Brady fan. I went to his house and met his son and they were really overwhelmed with how much I looked like Maureen, the original Marcia. "This was over two years ago. But I never cut my hair and I never changed my look. I think it was subconscious. You'd read that it (the movie) was still in development. When I heard it was definitely going ahead with Paramount and Alan Ladd Jr. and his company, I got back in touch with Sherwood and dropped him a little note and sent him my picture. And he called me back right away. "I thought I could bypass a lot of the audition phases -- which wasn't the case. I met with casting people, the director and producers, and was matched up with Jan and Cindy. I still had to go through an equal auditioning process." Taylor described "The Brady Bunch Movie" as "an affectionate satire" of the television show. "We, the people who play the Bradys, are the same family, but we're living in the '90s. But we dress the same way. 
We're sort of trapped in our early '70s time warp." Following auditions last spring, filming began in July and concluded the first week in September. It was a large lead cast -- nine principals every day -- and a huge supporting cast, with cameos by several stars from "The Brady Bunch" television show, including Florence Henderson (Carol Brady) and Barry Williams (Greg). The first five weeks were spent on location around Los Angeles, mostly in the San Fernando Valley for exteriors. A facade was built around the home used for the original "Brady house" exterior, which had been altered. Interiors were filmed at Paramount Studios on Soundstage 5 where the television show was made. "It was very, very exciting," Taylor recalled of her first day on the set. "We were all in costume for a photo shoot," she said, adding, "Because the '70s look is so in, the wardrobe designers made it more over the top. "Of the kids, Jan and Marcia have sort of the meatiest stuff to do. I don't think it has anything to do with me, Christine Taylor. I think it's what people remember and I think the writers kept that in mind. "It was just such an amazing group of people. To watch the veteran actors, like Shelley Long and Gary Cole -- this was an honor for me. And to work with Betty Thomas, the director. Being a female director in Hollywood is unusual, but she has a way of taking control of the male crew and actually having people like her." The movie's soundtrack is a mix of pop songs, including the new Bradys doing a version of their cheese-melt hit "It's a Sunshine Day"; a guest appearance by Davy Jones of The Monkees singing, "Girl," and some punk rock such as Suicidal Tendencies' "Institutionalized." "When we're in the Brady house and having Brady moments, it's all the music that you remember from the TV show -- the sad Brady music and the upbeat Brady music," Taylor noted. 
"I think people who were Brady fans will go into the movie maybe doubting what they're going to see, but they won't be disappointed. For real Brady trivia fans there's a lot of stuff that they will pick up on." Taylor had just moved into a house in Laurel Canyon when the Northridge earthquake hit last January. "No pictures were hung. A lot of stuff was still in boxes. I was sleeping and was awakened by the jolt. I had never been in an earthquake," Taylor said. "My dad was actually visiting. He was staying in a hotel. This was when you guys were having the really horrible weather back East. My dad said he would take that any day. After all the sirens, you start hearing car alarms and dogs. You just don't know. It's a very unnerving feeling." Then there was the carjacking. It happened at gunpoint in a residential West Hollywood area at 3 p.m. "Thank God, they just took my car and my purse and not me," said Taylor. "I had read a lot of literature. Right before it happened my aunt had sent me an article about being a young woman in a big city. And it said, 'Do everything they tell you except getting into the car with them. Risk throwing your purse or running, because the statistics are (that) then you will be raped or killed. "But at that point I really would have gotten into the car because I was completely numb with a gun pointed into my back." Taylor was driving a red BMW 325 convertible -- "a gift to myself. It definitely attracted attention." Taylor was parking the BMW in front of an apartment building when two men drove up in another car. One had a gun. The other man drove off. The man with the gun commandeered Taylor's car. She was ordered to unlock The Club on the steering wheel. "That was sort of the scariest moment," said Taylor. 'Oh, God, is he going to ask me to get in the car?' Everything is replaceable -- the purse, the credit cards -- just make a phone call. And I had insurance. "Ever since then I feel like I live life differently. 
Not any drastic change other than being aware I know how easily something can change your life. Now there's never a time when I'm not looking over my shoulder," she said. Taylor's car wasn't recovered. She looked at police mug shots but couldn't identify the carjacker. "The police got there 45 minutes after I called 911," she said. "The car was either stripped or halfway to Mexico already." It happened on Nov. 15, 1993. "The irony of that is one year later, around the same time, my dad's car was stolen in Allentown, right out of our garage. So I guess crime is everywhere and you need to be always aware." "It's just a nice feeling after being here for four years, that you're on the cover of the L.A. Times, or are in US or Premiere (magazines). It's just exciting," Taylor admitted. Has her role as Marcia Brady in "The Brady Bunch Movie" put her in demand? "Yes, I think so. If nothing else, the visibility will open a few doors. A lot of casting directors are huge Brady fans. They just want to meet me for general meetings." For her own celebrityhood? "In a way," she said. "It's the first time in my career that I've ever been in this position because, who knows? This business is too unpredictable, so I just want to enjoy it and make the most of it." Is she concerned that her role as the big screen Marcia Brady might stereotype her as happened with the actors who portrayed the Bradys on the television show? "There's always something to be said for being pigeonholed, but I think because it's a movie and it's one movie -- although there is talk of doing a sequel (with the leads already signed for two more) if this one is successful -- it's just two hours. "Kids are most interested in 'Hey Dude.' How's Melody? (they ask of the role Taylor played). I think it's very easy on a TV show because people tune in and they want to keep tuning." Taylor doesn't know what her next project will be. "I'm doing the same thing that everyone else is, which is going out on auditions." 
As for long-term plans, Taylor said, "Theater is my true love. That is my passion. If I was ever able to do something on Broadway, I would. That has always been my longtime dream."
package com.algawork.pedidovenda.pesquisa;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

@ManagedBean
@RequestScoped
public class PesquisarGruposBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private List<Integer> gruposFiltrados;

    public PesquisarGruposBean() {
        gruposFiltrados = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            this.gruposFiltrados.add(i);
        }
    }

    public List<Integer> getGruposFiltrados() {
        return gruposFiltrados;
    }
}
Peptide Nanostructure-Mediated Antibiotic Delivery by Exploiting H2S-Rich Environment in Clinically Relevant Bacterial Cultures. Stimuli-responsive self-destructing soft structures serve as versatile hosts for the encapsulation of guest molecules. A new paradigm for H2S-responsive structures, based on a modified tripeptide construct, is presented along with microscopy evidence of its time-dependent rupture. As a medicinally interesting application, we employed these commercial antibiotic-loaded soft structures for successful drug release and inhibition of clinically relevant, drug-susceptible, and methicillin-resistant Staphylococcus aureus.
Estimation of predictive uncertainties in flood wave propagation in a river channel using adjoint sensitivity analysis This paper applies adjoint sensitivity analysis to flash flood wave propagation in a river channel. A numerical model, based on the Saint-Venant equations and the corresponding adjoint equations, determines the sensitivities of predicted water levels to uncertainties in key controls such as the inflow hydrograph, channel topography, frictional resistance and infiltration rate. Sensitivities are calculated in terms of a measuring function that quantifies water levels greater than certain safe threshold levels along the channel. The adjoint model has been verified by means of an identical twin experiment. The method is applied to a simulated flash flood in a river channel. The sensitivities to key controls are evaluated and ranked and the effects of individual and combined uncertainties on the predicted flood impact are also quantified. Copyright © 2008 John Wiley & Sons, Ltd.
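In generic terms, the adjoint approach obtains the sensitivity of the measuring function to every control from a single adjoint solve. A standard sketch (not the paper's exact formulation): for discretized model equations $N(u, m) = 0$ with state $u$ (water levels) and controls $m$ (inflow hydrograph, topography, friction, infiltration), and measuring function $J(u, m)$,

```latex
\left(\frac{\partial N}{\partial u}\right)^{T}\lambda = \frac{\partial J}{\partial u},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}m} = \frac{\partial J}{\partial m} - \lambda^{T}\,\frac{\partial N}{\partial m}.
```

Ranking the components of $\mathrm{d}J/\mathrm{d}m$ is what allows the sensitivities to the individual controls to be compared at the cost of one forward and one adjoint integration.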
from anytree import NodeMixin, PostOrderIter, RenderTree, ContStyle

__all__ = ["ScheduleTree", "NodeSection", "NodeIteration", "NodeConditional",
           "NodeExprs", "NodeHalo"]


class ScheduleTree(NodeMixin):

    is_Section = False
    is_Iteration = False
    is_Conditional = False
    is_Exprs = False
    is_Halo = False

    def __init__(self, parent=None):
        self.parent = parent

    def __repr__(self):
        return render(self)

    def visit(self):
        for i in PostOrderIter(self):
            yield i

    @property
    def last(self):
        return self.children[-1] if self.children else None


class NodeSection(ScheduleTree):

    is_Section = True

    @property
    def __repr_render__(self):
        return "<S>"


class NodeIteration(ScheduleTree):

    is_Iteration = True

    def __init__(self, ispace, parent=None):
        super(NodeIteration, self).__init__(parent)
        self.ispace = ispace

    @property
    def interval(self):
        return self.ispace.intervals[0]

    @property
    def dim(self):
        return self.interval.dim

    @property
    def limits(self):
        return self.interval.limits

    @property
    def direction(self):
        return self.ispace.directions[self.dim]

    @property
    def sub_iterators(self):
        return self.ispace.sub_iterators.get(self.dim, [])

    @property
    def __repr_render__(self):
        return "%s%s" % (self.dim, self.direction)


class NodeConditional(ScheduleTree):

    is_Conditional = True

    def __init__(self, guard, parent=None):
        super(NodeConditional, self).__init__(parent)
        self.guard = guard

    @property
    def __repr_render__(self):
        return "If"


class NodeExprs(ScheduleTree):

    is_Exprs = True

    def __init__(self, exprs, ispace, dspace, shape, ops, traffic, parent=None):
        super(NodeExprs, self).__init__(parent)
        self.exprs = exprs
        self.ispace = ispace
        self.dspace = dspace
        self.shape = shape
        self.ops = ops
        self.traffic = traffic

    @property
    def __repr_render__(self):
        ths = 2
        n = len(self.exprs)
        ret = ",".join("Eq" for i in range(min(n, ths)))
        ret = ("%s,..." % ret) if n > ths else ret
        return "[%s]" % ret


class NodeHalo(ScheduleTree):

    is_Halo = True

    def __init__(self, halo_scheme):
        self.halo_scheme = halo_scheme

    @property
    def __repr_render__(self):
        return "<Halo>"


def insert(node, parent, children):
    """
    Insert ``node`` between ``parent`` and ``children``, where ``children``
    are a subset of nodes in ``parent.children``.
    """
    processed = []
    for n in list(parent.children):
        if n in children:
            n.parent = node
            if node not in processed:
                processed.append(node)
        else:
            processed.append(n)
    parent.children = processed


def render(stree):
    return RenderTree(stree, style=ContStyle()).by_attr('__repr_render__')
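The `insert` helper reparents a subset of a node's children under a new intermediate node while preserving sibling order: the subset collapses to a single occurrence of the new node at the position of its first member. A self-contained sketch of the same logic, using a hypothetical minimal `Node` class instead of anytree (anytree's parent setter updates the children lists automatically; this stand-in does it by hand):

```python
class Node:
    """Hypothetical minimal stand-in for an anytree-style node."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.parent = parent
        if parent is not None:
            parent.children.append(self)


def insert(node, parent, children):
    # Reparent `children` (a subset of parent.children) under `node`,
    # replacing them in parent.children with a single occurrence of `node`.
    processed = []
    for n in list(parent.children):
        if n in children:
            n.parent = node
            node.children.append(n)
            if node not in processed:
                processed.append(node)
        else:
            processed.append(n)
    parent.children = processed
    node.parent = parent


root = Node("root")
a, b, c = Node("a", root), Node("b", root), Node("c", root)
mid = Node("mid")
insert(mid, root, [a, b])
# root's children are now [mid, c], and mid holds [a, b].
```

This is the operation a scheduler needs when, for example, wrapping a run of sibling iteration nodes inside a new section node without disturbing the rest of the tree.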
/**
 * The appearance of "#PCDATA" within a group signifying a
 * mixed content model. This method will be the first called
 * following the content model's <code>startGroup()</code>.
 *
 * @param augs Additional information that may include infoset
 *             augmentations.
 *
 * @throws XNIException Thrown by handler to signal an error.
 *
 * @see #startGroup
 */
public void pcdata(Augmentations augs) {
    fMixed = true;
    if (fDTDGrammar != null) {
        fDTDGrammar.pcdata(augs);
    }
    if (fDTDContentModelHandler != null) {
        fDTDContentModelHandler.pcdata(augs);
    }
}
#include <iostream>
#include <chrono>
#include <random>

#include "mcmc.h"

#define N_GRAPH 1000
#define N_STEPS 4000000
#define Q_MIN 4
#define Q_MAX 4
#define C_MIN 5
#define C_MAX 10

using namespace std;

int main(int argc, char** argv)
{
    // Seed the RNG from the measured duration of a short busy loop.
    typedef std::chrono::high_resolution_clock myclock;
    myclock::time_point beginning = myclock::now();
    for (int i = 0; i < 1000; i++)
        ;
    myclock::duration d = myclock::now() - beginning;
    unsigned seed = d.count();
    std::default_random_engine generator(seed);

    MCMC *chain;
    for (int q = Q_MIN; q <= Q_MAX; q += 3) {
        for (int c = C_MIN; c <= C_MAX; c += 1) {
            // Create and run the MCMC chain for this (q, c) pair.
            chain = new MCMC(N_GRAPH, c, q, generator);
            chain->run(N_STEPS);
            cout << q << " " << c << " " << chain->get_energy() << endl;
            // Save before freeing: calling save() on the pointer after
            // delete would dereference freed memory.
            chain->save();
            delete chain;
        }
    }
    return 0;
}
An Integrated Weighting-based Modified WASPAS Methodology for Assessing Patient Satisfaction Due to the increased growth and high competition in the health care sector, it is important for health care service providers to deliver a superior service experience to patients. Improving the patient experience enhances their satisfaction level. In the context of patient satisfaction, it is crucial to identify the essential factors and assign weights to them. We propose an integrated weighting approach that allocates weights to the factors by combining the weights obtained from several objective weighting methods, which overcomes the issue of relying on a single specific weighting method. A modified weighted aggregated sum product assessment method (MWASPAS) is used to determine the patient satisfaction score, using the weights obtained from the integrated weighting method under the weighted product model (WPM) and weighted sum model (WSM). The WASPAS method is preferred over other MCDM methods in our study because it involves no conflicting variables and no complex calculation. We then determine a single patient satisfaction score from the WSM and WPM scores along with their respective weights. We apply the proposed methodology to real-life data collected from a health care provider in Kolkata, India. The results indicate variation in the weights assigned to the factors and in the final scores across the weighting methods. Also, the score obtained from the WASPAS method strikes a balance between the scores determined by the WSM and WPM methods by taking the effects of both into account.
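The WSM/WPM combination described above can be sketched in the standard WASPAS form, where a parameter lambda balances the weighted-sum and weighted-product scores. The factor ratings and weights below are illustrative, not from the Kolkata data set:

```python
def waspas_score(ratings, weights, lam=0.5):
    """Combine weighted-sum and weighted-product scores of one alternative.

    ratings : normalized factor ratings, each in (0, 1]
    weights : factor weights summing to 1 (e.g. from an integrated
              objective weighting scheme)
    lam     : balance parameter; lam=1 gives pure WSM, lam=0 pure WPM
    """
    # Weighted sum model: linear aggregation of ratings.
    wsm = sum(w * x for w, x in zip(weights, ratings))
    # Weighted product model: multiplicative aggregation, weights as exponents.
    wpm = 1.0
    for w, x in zip(weights, ratings):
        wpm *= x ** w
    return lam * wsm + (1 - lam) * wpm


# Hypothetical satisfaction ratings for three factors of one patient group.
ratings = [0.8, 0.6, 0.9]
weights = [0.5, 0.3, 0.2]
score = waspas_score(ratings, weights)
```

Because WPM penalizes low ratings more sharply than WSM, the combined score sits between the two, which is the balancing effect the abstract describes.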
AaMYB3 interacts with AabHLH1 to regulate proanthocyanidin accumulation in Anthurium andraeanum (Hort.): another strategy to modulate pigmentation Proanthocyanidins (PAs), also known as condensed tannins, are colorless metabolites produced through the flavonoid pathway that are involved in stress resistance in plants. Because PAs are produced via the anthocyanin biosynthetic pathway, they play a role in modifying the pigmentation conferred by anthocyanins in ornamental organs. In this study, we isolated the genes encoding an R2R3-MYB transcription factor (TF), AaMYB3, and a basic helix-loop-helix TF, AabHLH1, from Anthurium andraeanum (Hort.), a typical tropical flower, and functionally characterized them. AaMYB3 is primarily expressed in the spathe, and its expression correlates negatively with anthocyanin accumulation. A complementation test in an Arabidopsis tt8 mutant showed that AabHLH1 successfully restores the PA-deficient seed coat phenotype. The ectopic overexpression of AaMYB3 alone, or its coexpression with AabHLH1, in transgenic tobacco resulted in light-pink or even pale-pink corolla limbs by reducing their anthocyanin levels and greatly enhancing their accumulation of PAs. The overexpression of these anthurium TF genes upregulated the late anthocyanin enzyme-encoding genes (NtDFR and NtANS) and the key PA genes (NtLAR and NtANR) in transgenic tobacco. The interaction between AaMYB3 and the AabHLH1 protein was confirmed using yeast two-hybrid (Y2H) and bimolecular fluorescence complementation (BiFC) assays. In the developing red spathes of the cultivars Vitara and Tropical, the expression of AaMYB3 was closely linked to PA accumulation, and AaMYB3 was coexpressed with AaCHS, AaF3H, AaDFR, AaANS, AaLAR, and AaANR. The expression pattern of AabHLH1 was similar to that of AaF3H. Our results suggest that AaMYB3 and AabHLH1 are involved in the regulation of PA biosynthesis in anthurium and could potentially be used to metabolically engineer PA biosynthesis in plants.
Introduction Proanthocyanidins (PAs), the end products of the flavonoid biosynthetic pathway, occur in the fruits, bark, leaves, and seeds of many plants 1. PAs have strong antioxidant properties, and their primary function in plants is to defend against pathogens, insects, diseases, and larger herbivores 1,2. PAs, also called "condensed tannins", confer astringency upon plants that originally served as forage and function as herbivore feeding deterrents 1,3. The dietary PAs in wine, fruit juices, teas, and cocoa contribute to their taste and health benefits through their antioxidant and radical-scavenging functions and their anti-inflammatory activities 4. Therefore, there is strong interest in the molecular biosynthesis and metabolic engineering of PAs in crops and fruits 5,6. PA biosynthesis shares the same upstream pathway with anthocyanins, although PAs are subsequently synthesized as the oligomeric or polymeric end products of one of several branches of the flavonoid pathway in the final catalytic steps 4. The formation of the flavan-3-ols begins with the dihydroflavanol 4-reductase (DFR)-mediated reduction of dihydroflavonols to leucoanthocyanidins, which are then reduced by leucoanthocyanidin reductase (LAR) to catechin. Alternatively, flavan-3-ols (epicatechin) are formed by the anthocyanidin reductase (ANR)-catalyzed reduction of anthocyanidin, which is itself produced from leucoanthocyanidins by anthocyanidin synthase (ANS). Flavonoid biosynthesis is primarily controlled by transcription factors (TFs) that regulate the expression of the genes encoding the biosynthetic enzymes in the associated pathways. In most plants, the process is regulated in a conserved manner by the MBW protein complex formed by the combination of R2R3-MYB, the basic helix-loop-helix protein (bHLH), and the WD40 repeat-containing protein (WDR) 7.
The regulation of PA biosynthesis by the MBW complex has been well characterized in Arabidopsis with the TT2 (MYB)-TT8 (bHLH)-TTG1 (WDR) model 8. The ternary transcription protein complex activates the late anthocyanin and PA-specific genes, including DFR, ANS, and BAN (also known as ANR). The homologs of TT2, TT8, and TTG1 in other plants have since been discovered. For example, in strawberry, FaMYB9/FaMYB11, FabHLH3, and FaTTG1 form a complex that upregulates the expression of ANS and LAR, therefore increasing the levels of PAs 11. In persimmon, DkMYB2 and DkMYB4 interact with both DkMYC1 (bHLH) and DkWDR1 to regulate PA accumulation in the fruit 12. The WDR proteins in the MBW complex are thought to confer a docking platform for the MYB-bHLH interaction. Many studies have suggested that R2R3-MYB is a key factor that determines the activity that induces or represses the transcription of the PA biosynthetic genes. For example, in grapevine, VvMYBPA1 13, VvMYBPA2 14, and VvMYBPAR 15 promote PA accumulation, and VvMYBC2-L1 16 negatively regulates PA accumulation by downregulating the expression of the PA genes. Therefore, much attention has been paid to the role of the MYB TF in the PA transcriptional regulation process. Many MYBs have been identified as PA regulators, such as poplar PtMYB134 17, Trifolium arvense TaMYB14 18, apple MdMYB9 19, peach PpMYB7 20, and even in the ornamental plants coleus (SsMYB3 21 ) and Malus crabapple (MdMYB12b 22 ). Many of these genes are considered suitable candidates for the metabolic engineering of PA biosynthesis in plants, because MYBs regulate multiple key enzyme-encoding genes and therefore have advantages over single key enzymes in such gene-based strategies 21. Anthurium andraeanum (Hort.) is a well-known tropical flower with a colorful spathe and spadix and is produced commercially as a cut flower or potted plant 23. Anthurium has a long horticultural history and is very important in worldwide trade.
The color of the spathe and spadix is one of the most important traits of anthuriums. The anthocyanins cyanidin, pelargonidin and peonidin, in the form of anthocyanidin 3-rutinoside, are primarily responsible for the red, purple, pink, orange, and coral coloration of the anthuriums 24. The biosynthetic pathways and the expression pattern of the key enzyme genes in anthocyanin biosynthesis have been characterized in anthurium 23,25,26. However, in the MBW complex, only the MYB genes have so far been identified in anthurium by their ectopic expression. AaMYB1 27 is involved in the positive regulation of anthocyanin biosynthesis, and AnAN2 may act as a negative regulator of anthocyanin production 28. We previously isolated the R2R3-MYB gene AaMYB2 and demonstrated that AaMYB2 expression is closely related to anthocyanin accumulation and that AaMYB2 primarily contributes to the regulation of anthocyanin biosynthesis in the anthurium spathes and leaves 29. However, the regulatory mechanism of the biosynthesis of flavonoids, including anthocyanins and PAs, in anthurium is still unclear. PAs are colorless flavonoid polymers that are later pathway products downstream from anthocyanins. The ectopic expression of ANR in Nicotiana tabacum (tobacco) flowers and Arabidopsis leaves resulted in the loss of anthocyanins and the accumulation of PAs, suggesting that the excessive accumulation of PAs causes a reduction in anthocyanin 30,31. Similarly, the expression pattern of ANR correlates negatively with anthocyanin accumulation in the anthurium spathe 32. Therefore, PA accumulation in these ornamental plant organs dilutes the pigmentation contributed by anthocyanins, thus playing an important role in coloration. However, very few studies have reported the key structural and regulatory genes involved in the PA biosynthetic pathway in ornamental plants. 
Therefore, the isolation and characterization of the genes that regulate this pathway are necessary to understand the molecular regulation of PA and anthocyanin biosynthesis in this species 21. In this study, we present the isolation of the first gene encoding a bHLH protein in anthurium, together with a gene encoding an R2R3-MYB TF, designated AabHLH1 and AaMYB3, respectively. A phylogenetic analysis indicated that AaMYB3 and AabHLH1 are homologous to the Arabidopsis PA regulators AtTT2 and AtTT8, respectively. The functions of AaMYB3 and AabHLH1 were demonstrated with complementation tests in Arabidopsis mutants and by their exogenous expression in tobacco. Our results demonstrate that AaMYB3 interacts with AabHLH1 and that both are involved in PA biosynthesis in anthurium. Based on these results and our previous study, we propose a model of the regulation of anthocyanin and PA accumulation in the anthurium spathe, which extends our understanding of the whole flavonoid metabolic pathway in anthurium. These two TFs, AaMYB3 and AabHLH1, may also be useful in the metabolic engineering of PA biosynthesis in plants.

Plant materials

Mature 6-year-old plants of seven A. andraeanum (Hort.) cultivars maintained in a shade greenhouse at the Tropical Flower Resource Garden, Tropical Crops Genetic Resources Institute, Chinese Academy of Tropical Agricultural Sciences (Danzhou, Hainan province, China) were used in this study. The cultivars were "Tropical" and "Vitara" (red-spathed), "Pink Champion" (pink), "Cheers" (light pink), "Rapido" (purple), "Acropolis" (white), and "Midori" (green). Floral tissue samples, including the spathe and the spadix, were collected between 9 A.M. and 10 A.M. in November 2016 from developmental stages 1 to 5 of the spathe, as described by Li et al. 29.
The stages were as follows: stage 1, the flower (including the spathe and spadix) had fully protruded from the protective sheath; stage 2, the floral peduncle had elongated but the spathe remained tightly furled; stage 3, the spathe was half unfurled; stage 4, the spathe was newly fully expanded; and stage 5, the color of the lower two-thirds of the spadix had faded. Newly fully expanded brown leaves, green mature leaves, and the peduncle of the flower at stage 4 of "Tropical" were also collected. Three biological replicates (three distinct spathes, spadices, or leaves) from five randomly selected plants were used for the subsequent metabolite and RNA analyses. Nicotiana tabacum cv. Wisconsin 38 was used in the gene overexpression experiments. All the tobacco plants were grown in a 50% shade greenhouse under natural sunlight. The corolla limbs of the T1 transgenic tobacco plants were collected between 9 A.M. and 10 A.M. in April 2017. Three biological replicates (three distinct tobacco flowers) from five plants were used for the subsequent metabolite and RNA analyses. Arabidopsis thaliana ecotype "Columbia" was used as the wild-type control. In addition, the tt2 and tt8 Arabidopsis mutants (SALK 005260 and SALK 063334, respectively), obtained from the Arabidopsis Biological Resource Center, were used as the backgrounds for the genetic transformation and as negative controls. All the Arabidopsis plants were grown under 14-h days and 10-h nights at 23°C in a growth chamber.

Measurement of anthocyanin, flavonoid, and polyphenol contents

The extraction and content measurement of the anthocyanins were performed as described by Li et al. 29. The total flavonoids and polyphenols were extracted and measured as described previously 33 with slight modifications: the extracts were prepared from fresh tissue samples extracted with methanol. The experiment was repeated three times for each sample.
PA extraction and determination

The dimethylaminocinnamaldehyde (DMACA) staining method was used to evaluate the PA content. Soluble PAs from fresh anthurium tissues and tobacco corolla limbs were extracted and measured as described previously 34. Epicatechin was used as the standard for PA quantification 35. The experiment was repeated three times for each sample.

RNA isolation and cDNA synthesis

The total RNA was extracted from anthurium tissues using an RNAprep Pure Plant Kit (Polysaccharides & Polyphenolics-rich) (TIANGEN, Beijing, China). The total RNA from the Arabidopsis leaves and tobacco corolla limbs was extracted using a Plant Total RNA Isolation Kit (FOREGENE, Chengdu, China). The cDNA was synthesized according to the manufacturer's instructions for the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific, Waltham, MA, USA).

Isolation of the full-length cDNA of AaMYB3 and AabHLH1 and their sequence analysis

One MYB unigene and one bHLH unigene, annotated as AtTT2-like and AtTT8-like transcription factors, respectively, were selected from the mixed floral and foliar transcriptome database of anthurium 29. The unigenes were designated AaMYB3 (GenBank accession no. MH349476) and AabHLH1 (accession no. MH349477). The full-length cDNAs of AaMYB3 and AabHLH1 were isolated from the "Tropical" spathe with reverse transcription (RT)-PCR using primers designed according to the transcriptome data: forward 5′-ATGGGCAGGAGACCCTGTT-3′ and reverse 5′-CGCCATTACTTCACCCATTC-3′ for AaMYB3, and forward 5′-AGGAGGGGTAGTTGAGCAGGT-3′ and reverse 5′-TCATGCTCTAAGCATGTCACGA-3′ for AabHLH1. The PCR-amplified products were cloned into the T/A cloning vector pMD18-T (TaKaRa, Dalian, China) and sequenced. The deduced full-length amino acid sequences of AaMYB3 and AabHLH1 were aligned using ClustalX2, and a phylogenetic analysis was constructed using MEGA 5.05 (Neighbor-Joining method with 1000 bootstrap replications).
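The DMACA-based PA determination described above converts absorbance readings to epicatechin equivalents via a standard curve. The sketch below illustrates that calculation; the standard concentrations, absorbance values, read wavelength, extract volume, and fresh weight are hypothetical examples, not values from this study:

```python
# Hypothetical epicatechin standard curve for DMACA-based PA quantification.
# All numbers below are illustrative, not measured values from the paper.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Epicatechin standards (µg/mL) and their absorbance readings (hypothetical).
standards = [(0, 0.00), (10, 0.12), (20, 0.24), (40, 0.48), (80, 0.96)]
slope, intercept = linear_fit([c for c, _ in standards],
                              [a for _, a in standards])

def pa_content(absorbance, extract_volume_ml, fresh_weight_g):
    """PA content as mg epicatechin equivalents per g fresh weight."""
    conc_ug_ml = (absorbance - intercept) / slope       # back-calculate µg/mL
    return conc_ug_ml * extract_volume_ml / 1000 / fresh_weight_g

# With these made-up numbers: A = 0.36 in 5 mL extract from 0.5 g tissue
# gives 30 µg/mL, i.e. 0.3 mg epicatechin equivalents per g FW.
sample_pa = pa_content(0.36, 5.0, 0.5)
```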
Quantitative real-time PCR (q-PCR) analysis

The q-PCR analysis was conducted as previously described 29. The relative expression of the anthurium genes was normalized to the expression of the cyclophilin gene (AaCYP) and the ubiquitin family protein gene (AaUBQ5), as described by Gopaulchan et al. 36. The AaANR (GenBank accession no. MH349478) and AaLAR (MH349479) nucleotide sequences were obtained from our transcriptome database 29; their open reading frames (ORFs) shared extremely high identity (99.4% and 99.2%, respectively) with the ANR (GDJX01012972.1) and LAR (GDJX01014607.1) sequences of the related species Anthurium amnicola. The relative expression of the tobacco genes was normalized to the expression of the actin gene (NbACT) and the ribosomal protein L25 gene (NtL25), as described by Pérez-Díaz et al. 37. The relative expression of the Arabidopsis genes was normalized to the expression of AtActin and AtUBQ1, as described by Matsui et al. 38 and Han et al. 39. The sequences of all the primers used in q-PCR are listed in Table S1. Each reaction was performed in triplicate.

Overexpression vector construction and plant transformation

The fragments of AaMYB3 and AabHLH1 containing the ORFs were separately cloned into the plant binary vector pCXSN (T-Vector) for constitutive gene expression, as described by Chen et al. 40. The resulting vectors, pCXSN-AaMYB3 and pCXSN-AabHLH1, were transferred into Agrobacterium tumefaciens strain EHA105 using the freeze-thaw method, and the resulting strains were used for plant transformation. The genetic transformation of the Arabidopsis plants was based on the floral dip method 41. The Agrobacterium strain EHA105 harboring the recombinant plasmid pCXSN-AaMYB3 was used to transform the Arabidopsis tt2 mutant plants, and pCXSN-AabHLH1 was used to transform the Arabidopsis tt8 mutant plants. Seeds harvested from the transformed plants (T0) were germinated on 1/2 MS medium containing 35 mg/L hygromycin.
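Relative expression normalized to two reference genes, as described above, is commonly computed with the 2^-ΔΔCt method, subtracting the mean Ct of the reference genes. The sketch below assumes that method and ~100% primer efficiency; the study itself cites Gopaulchan et al. 36 for the actual procedure, and all Ct values here are hypothetical:

```python
def relative_expression(ct_target, ct_refs, calib_ct_target, calib_ct_refs):
    """Fold change of a target gene by the 2^-ΔΔCt method, normalized to
    the mean Ct of the reference genes (equivalent to the geometric mean
    of their expression levels). Assumes ~100% amplification efficiency."""
    d_sample = ct_target - sum(ct_refs) / len(ct_refs)        # ΔCt, sample
    d_calib = calib_ct_target - sum(calib_ct_refs) / len(calib_ct_refs)
    return 2 ** -(d_sample - d_calib)                         # 2^-ΔΔCt

# Hypothetical Ct values: a target gene in a spathe sample vs. a calibrator
# tissue, normalized against two reference genes (e.g., AaCYP and AaUBQ5).
# ΔCt_sample = 22 - 19 = 3; ΔCt_calibrator = 25 - 19 = 6; fold = 2^3 = 8.
fold = relative_expression(22.0, [18.0, 20.0], 25.0, [18.5, 19.5])
```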
Positive homozygous progenies were selected and confirmed using genomic PCR. The seeds of the T3 progeny Arabidopsis plants were observed. The genetic transformation of tobacco with AaMYB3 and AabHLH1 was accomplished using Agrobacterium-mediated transformation 42. The transgenic plants were selected by hygromycin resistance and, once rooted, were transferred to soil and grown in the greenhouse. Positive transgenic T1 progeny tobacco plants identified by genomic PCR and q-PCR were used for further studies. Transgenic tobacco plants (T0 progeny, three lines for each gene transformation) separately expressing AaMYB3 and AabHLH1 were crossed in both the ♀ × ♂ and ♂ × ♀ directions. The F1 seeds of the AaMYB3 × AabHLH1 plants were harvested and germinated. The seedlings were screened using genomic PCR and q-PCR, and a total of 49 plants expressing both AaMYB3 and AabHLH1 were obtained for further studies.

Yeast two-hybrid (Y2H) assay

The Y2H assay to investigate the interactions between the AaMYBs and AabHLH1 was performed as described by Nakatsuka et al. 43. The full-length CDS of AaMYB3 and AaMYB2 or AabHLH1 were cloned into the pGADT7 vector or the pGBKT7 vector (Clontech, Mountain View, CA, USA), respectively. The yeast transformation and clone selection were performed as previously described 44. The interaction of AtMYB75 and AtTT8 from Arabidopsis was employed as a positive control 45.

Bimolecular fluorescence complementation (BiFC) assay

The full-length CDS of AaMYB3 and AaMYB2 or AabHLH1 were cloned into the binary yellow fluorescent protein (YFP) BiFC vectors pXY106 (nYFP) or pXY104 (cYFP) 46, respectively, resulting in the recombinant plasmids AaMYB3-nYFP, AaMYB2-nYFP, and AabHLH1-cYFP. The empty vectors and the recombinant plasmids were transferred into A. tumefaciens GV3101 using the freeze-thaw method and were transiently expressed in Nicotiana benthamiana leaves by agroinfiltration.
Two days after transformation, the YFP signals were examined in the transfected cells using a Zeiss 780 confocal microscope (Zeiss, Jena, Germany). The gene AT1G16610 (encoding SR45) from Arabidopsis, fused to the red fluorescent protein (RFP), was coexpressed with the target genes in the N. benthamiana leaf cells and used as a nuclear localization marker 47. The interaction of AtMYB75 and AtTT8 from Arabidopsis was employed as a positive control 45.

Sequences of AaMYB3 and AabHLH1 and a phylogenetic analysis

Several unigenes associated with the transcriptional control of flavonoid and PA biosynthesis were selected from our transcriptome data according to annotations determined with BLASTx searches of the National Center for Biotechnology Information (NCBI) Nr and Swiss-Prot protein databases (data not shown). Among these, one unigene with a length of 1042 bp containing an 891-bp ORF matched AtTT2, which is specific to the PA pathway in Arabidopsis, and one unigene with a length of 2694 bp containing a 2115-bp ORF matched a bHLH activator of PA synthesis in Arabidopsis, AtTT8. Therefore, these two candidate proteins may play roles in the modulation of PA biosynthesis in anthurium. The ORFs of the MYB and bHLH TF genes were amplified using RT-PCR from the spathe of the cultivar "Tropical" sampled at developmental stage 2. The two genes were designated AaMYB3 and AabHLH1 and encoded proteins of 297 and 705 amino acids, respectively. An analysis of the deduced amino acid sequence of AaMYB3 showed that an N-terminal R2R3 repeat corresponds to the DNA-binding (MYB) domain. Similar to other related MYBs, the bHLH interaction motif [D/E]Lx2[R/K]x3Lx6Lx3R was identified in the highly conserved N-terminal R2R3 region of AaMYB3.
In an alignment analysis with MYBs known to function in the regulation of PA biosynthesis and two additional anthurium anthocyanin MYB regulators, we identified in AaMYB3 the C-terminal motif conserved in the AtTT2-like MYBs, VITKAx1RC. This indicates that AaMYB3 is very closely related to VvMYBPA2, AtTT2, and DkMYB2 and belongs to the PA-clade 2 MYB regulators (Fig. 1a). In a phylogenetic analysis, AaMYB3 clustered with AtTT2, VvMYBPA2, and PtMYB134 in PA-clade 2, whereas AaMYB1 and AaMYB2 from anthurium clustered with the anthocyanin regulatory MYBs AtMYB75 and VvMYBA1 (Fig. 1b). These sequence and phylogenetic analyses suggest that AaMYB3 plays a role in PA biosynthesis. A sequence analysis of AabHLH1 showed the following conserved motifs: the MYB interaction region at the N-terminus, the bHLH domain in the C-terminal region, and the transactivation (ACT) domain (Fig. S1). In a phylogenetic analysis, AabHLH1 clustered with AtTT8 and NtAn1a/NtAn1b, which are involved in the biosynthesis of flavonoids such as PAs and anthocyanins (Fig. 1c). These results suggest that AabHLH1 participates in the biosynthesis of PAs or other flavonoids in anthurium.

AaMYB3 is predominantly expressed in the spathe and negatively correlates with anthocyanin accumulation in the spathes of cultivars with different color phenotypes

To explore the transcript profiles of the new MYB and bHLH genes, cDNA samples from different tissues of anthurium were analyzed using q-PCR. We detected the transcripts of AaMYB3 and AabHLH1 in various anthurium tissues with no obvious tissue specificity. The AaMYB3 transcripts were predominantly detected in the spathe, followed by the leaves and then the spadix (Fig. 2a). The order of AabHLH1 expression from highest to lowest was as follows: peduncle > spathe > mature leaf and young leaf > spadix (Fig. 2b).
Considering the PA and anthocyanin contents of the various tissues, the relative expression levels of AaMYB3 were high in the spathes and leaves, where PA primarily accumulates (Fig. 2c). The AaMYB3 expression is associated with anthocyanin accumulation in the various "Tropical" tissues (Pearson correlation coefficient, r = 0.832**, P < 0.01) (Fig. 2d). However, the expression pattern of AabHLH1 was related to neither the PA nor the anthocyanin accumulation pattern (Fig. 2c, d). To further investigate the correlation between the expression of AaMYB3 and AabHLH1 and the amounts of flavonoids present, the expression levels of the two TF genes and the PA and anthocyanin contents were determined in spathes with red, pink, purple, pale-pink, white, and green phenotypes. AaMYB3 and AabHLH1 were expressed in the spathes of all the cultivars tested, where PAs also accumulated (Fig. 2e–g). Notably, anthocyanins accumulated primarily in the red spathe ("Tropical"), followed by the purple ("Rapido") and pink ("Pink Champion") spathes (Fig. 2h). These results further suggest that the expression of AaMYB3 is negatively related to anthocyanin accumulation in the spathes of anthurium cultivars with various color phenotypes (r = −0.669**, P < 0.01).

Complementation of the Arabidopsis tt8 and tt2 mutants by the ectopic expression of AabHLH1 and AaMYB3

To test whether AabHLH1 and AaMYB3 function as PA regulators, the two genes were introduced into the tt8 and tt2 mutants, respectively, and expressed under the control of the cauliflower mosaic virus 35S (CaMV35S) promoter (Fig. 3a). The transgenic T3 seeds of the AabHLH1-tt8 line had a brown seed coat, which was stained black by the DMACA reagent, as in the wild type (Fig. 3b). This result demonstrates that the overexpression of AabHLH1 complemented the tt8 mutant seed coat phenotype. The color of the T3 seed coat in the AaMYB3-tt2 line was slightly darker than that of the tt2 mutant both before and after DMACA staining (Fig. 3c).
Although the restoration of the seed coat phenotype by AaMYB3 complementation was not as obvious as that achieved by AabHLH1 complementation in the tt8 mutant, the expression of the PA-specific enzyme gene AtANR was clearly upregulated in all the transgenic AaMYB3-tt2 lines (Fig. S2).

Effects of AaMYB3 and AabHLH1 overexpression in tobacco

To investigate whether the ectopic expression of AaMYB3 and AabHLH1 affects PA biosynthesis, we analyzed transgenic tobacco plants that overexpressed AaMYB3 or AabHLH1 independently or both AaMYB3 and AabHLH1 simultaneously (AaMYB3 + AabHLH1). Two independent T1 transgenic lines from each construct were selected based on their levels of transgene expression (Fig. 4b). Under the same growing conditions, the corolla limb color of the T1 AaMYB3-overexpressing (ox) lines changed to light pink compared with the control plants transformed with the empty vector (EV). The corolla limb color of the T1 AabHLH1-ox lines showed no obvious phenotypic changes relative to the EV control. Interestingly, the corolla limbs of the T1 AaMYB3 + AabHLH1-ox plants were much paler pink than those of either the AaMYB3-ox plants or the EV control plants (Fig. 4a). An analysis of the anthocyanin contents showed changes that paralleled those in the color phenotype. The levels of total anthocyanins from lowest to highest were as follows: AaMYB3 + AabHLH1-ox lines < AaMYB3-ox lines < EV or AabHLH1-ox lines. The PA contents in the tobacco corolla limbs correlated negatively with the anthocyanin contents. The AaMYB3 + AabHLH1-ox lines showed the highest PA content, and the AaMYB3-ox lines also showed markedly higher PA content in the corolla limbs than the EV or AabHLH1-ox lines (Fig. 4c).

Fig. 2 The expression pattern of AaMYB3 and AabHLH1, as well as the content of total proanthocyanidins and anthocyanins, in anthurium. The expression of AaMYB3 (a) and AabHLH1 (b) and the content of total proanthocyanidins (c) and anthocyanins (d) in various tissues of the cultivar "Tropical"; the expression level of AaMYB3 (e) and AabHLH1 (f) and the content of total proanthocyanidins (g) and anthocyanins (h) in the spathes of different cultivars with various color phenotypes. The data are presented as the mean ± SD (n = 3). Values with different letters are significantly different according to Duncan's multiple range tests at the 5% level.

The total flavonoid and polyphenol contents were also measured. Only the AaMYB3 + AabHLH1-ox lines showed higher total flavonoid and polyphenol contents than the other lines (Fig. 4d). These results suggest that the overexpression of AaMYB3 significantly enhanced the accumulation of PAs, leading to pale pigmentation in the corolla limbs of the transgenic plants. The PA regulatory function of AaMYB3 was increased synergistically by AabHLH1. AabHLH1 overexpression alone had no significant effect on the anthocyanin or PA levels in the transgenic tobacco corolla limbs. A q-PCR analysis was performed to further analyze the effects of the ectopic expression of AaMYB3 and AaMYB3 + AabHLH1 on the target enzyme genes involved in the biosynthesis of flavonoids in tobacco. The overexpression of AaMYB3 alone affected the expression of the anthocyanin- and PA-related genes in the transgenic lines and, in particular, clearly upregulated the expression of the late anthocyanin enzyme genes NtDFR and NtANS and the key PA biosynthetic genes NtLAR and NtANR (Fig. 4e). In the AaMYB3 + AabHLH1-ox lines, the expression pattern of the genes in the flavonoid biosynthesis pathway was similar to that in the AaMYB3-ox lines (Fig. 4e).
These data indicate that AaMYB3, alone or together with AabHLH1, upregulates or activates the expression of the key anthocyanin and PA biosynthetic genes, ultimately promoting PA accumulation in transgenic tobacco.

Interaction of AabHLH1 with different AaMYB partners

The MBW ternary transcription complex is usually formed to regulate flavonoid biosynthesis in plants. The interaction of AabHLH1 with AaMYB3 and with the previously reported anthocyanin regulator AaMYB2 was investigated using a Y2H assay. Autoactivation of pGBK-AaMYB3 or pGBK-AaMYB2 was observed on SD/−Trp/−His/−Ade media containing 5-bromo-4-chloro-3-indolyl-α-D-galactopyranoside (X-α-Gal). Therefore, AaMYB3 or AaMYB2 was fused to the GAL4 activation domain, and AabHLH1 was fused to the GAL4 DNA-binding domain. As shown in Fig. 5a, the protein-protein interactions between AabHLH1 and AaMYB3 or AaMYB2 were demonstrated by the growth of colonies containing both the pGAD-AaMYB3 and pGBK-AabHLH1 vectors or both the pGAD-AaMYB2 and pGBK-AabHLH1 vectors on SD/−Leu/−Trp/−His/−Ade + X-α-Gal media. Thus, the Y2H assay suggested that AabHLH1 interacts with both AaMYBs and forms a transcriptional complex.

Fig. 4 (caption, continued) c Total anthocyanin and proanthocyanidin contents in the corolla limbs of the EV and transgenic tobacco lines. d Total flavonoid and polyphenol contents in the corolla limbs of the EV and transgenic tobacco lines. e The relative expression analysis of the flavonoid biosynthetic genes in the tobacco corolla limbs of the AaMYB3-ox and AaMYB3 + AabHLH1-ox lines. Color bar: Log2 (fold changes). The data are presented as the mean ± SD (n = 3). Values with different letters are significantly different according to Duncan's multiple range tests at the 5% level.

Fig. 5 Interactions between AabHLH1 and AaMYB2 and AaMYB3 detected using the Y2H and BiFC assays. a Y2HGold yeast cells containing the plasmids pGADT7 + pGBK-AabHLH1, pGAD-AaMYB2 + pGBK-AabHLH1, or pGAD-AaMYB3 + pGBK-AabHLH1 were grown on double- and quadruple-selection media, and pGAD-AtMYB75 + pGBK-AtTT8 was used as the positive control. The X-α-Gal assay was performed to confirm the positive interactions. b Bimolecular fluorescence complementation visualization of the interaction of AabHLH1 with AaMYB2 and AaMYB3 in the N. benthamiana leaf epidermal cells. The AtMYB75-AtTT8 interaction was used as a positive control. YFP, yellow fluorescent protein field; RFP, red fluorescent protein, which indicates the nuclear localization of Arabidopsis SR45; BF, bright field; Merged, overlay of the YFP, RFP, and BF fields. Bars, 100 µm.

The interaction of AabHLH1 with the AaMYBs was confirmed using a BiFC assay. AabHLH1 tagged with the split cYFP fragment (YC) and AaMYB3 or AaMYB2 tagged with the split nYFP fragment (YN) were transiently coexpressed in N. benthamiana leaves. As shown in Fig. 5b, the YFP fluorescent signal was detected in tobacco epidermal cells expressing both the AaMYB3-YN and AabHLH1-YC fusion proteins and in those expressing the AaMYB2-YN and AabHLH1-YC fusion proteins, and the signals merged with the RFP fluorescent signals, while no YFP fluorescent signal was detected in the epidermal cells expressing AabHLH1-YC with only YN. The BiFC assay demonstrated the interaction of AabHLH1 with AaMYB2 and AaMYB3 in vivo, and their protein-protein complexes were localized in the nucleus.

Expression trends of AaMYB3 and AabHLH1 coincide with those of several anthocyanin and PA biosynthetic genes

The ectopic expression of AaMYB3 alone or the coexpression of AaMYB3 and AabHLH1 in tobacco stimulated PA production by activating a number of anthocyanin and PA biosynthetic genes. Therefore, AaMYB3 and AabHLH1 are hypothesized to act as PA activators in anthurium.
To investigate the possible regulatory roles of AaMYB3 and AabHLH1, a gene coexpression analysis was performed in the developing spathes and spadices, the primary ornamental organs, of the cultivars "Vitara" and "Tropical", given that AaMYB3 is predominantly expressed in the spathe. The spathes of both cultivars contained both anthocyanins and PAs at all the developmental stages investigated (Fig. 6a, b). As shown in Fig. 6a, the expression of AaMYB3 decreased progressively during development in the spathe of the cultivar "Vitara", whereas the expression of AabHLH1 decreased at stage 2 and then increased again, peaking at stage 5. The expression pattern of AaMYB3 appeared to be related to PA accumulation (r = 0.840**, P < 0.01) and negatively related to anthocyanin accumulation (r = −0.741**, P < 0.01) (Fig. 6a). Of the anthocyanin and PA biosynthetic genes, the early flavonoid synthetic genes AaCHS and AaF3H, the anthocyanin-specific genes AaDFR and AaANS, and the PA-specific genes AaLAR and AaANR were coordinately expressed with AaMYB3 (Fig. 6a). The correlation coefficients are shown in Table S2. However, unlike that of AaMYB3, the expression of AabHLH1 showed no obvious relationship with either PA or anthocyanin accumulation and was only strongly coexpressed with AaF3H (r = 0.952**, P < 0.01). In the spathe of the cultivar "Tropical", the expression of AaMYB3 peaked at stage 2 before subsequently decreasing, reaching its lowest level at stage 4 and then increasing again at stage 5 (Fig. 6b). The expression of AabHLH1 increased gradually and peaked at stage 4, after which it decreased slightly (Fig. 6b). The expression patterns of AaMYB3 and AabHLH1 in the "Tropical" spathe were not associated with anthocyanin or PA accumulation (Fig. 6a, b). The correlation analysis suggests that AaMYB3 is coexpressed with AaDFR, AaLAR, and AaANR and that AabHLH1 is coexpressed with AaCHS, AaF3H, AaF3′H, and AaANS in the "Tropical" spathe (Fig. 6b, Table S2).
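The correlation coefficients reported above are Pearson correlations between an expression series and a metabolite series across developmental stages. A minimal sketch of that calculation; the expression and PA values below are made-up illustrative numbers, not data from this study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical relative expression of a regulator across five spathe stages
# and the matching PA contents (mg/g FW); both decline roughly in parallel,
# so the coefficient comes out strongly positive (close to 1).
expression = [5.2, 4.1, 3.0, 2.2, 1.5]
pa_levels = [12.0, 9.5, 7.1, 5.0, 3.4]
r = pearson_r(expression, pa_levels)
```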
The spadix of the cultivar "Vitara" accumulated both anthocyanins and PAs at all the developmental stages investigated (Fig. S3a), while the "Tropical" spadix accumulated only PAs during its development (Fig. S3b). The expression pattern of AaMYB3 showed no obvious correlation with either anthocyanin or PA accumulation in the spadix of "Vitara" (Fig. S3a). Similar to its behavior in the spathe, AaMYB3 was coexpressed with most of the genes in the anthocyanin and PA biosynthetic pathways, such as AaCHS, AaCHI, AaF3H, AaDFR, AaANS, and AaLAR (Fig. S3a, Table S2). AabHLH1 expression positively correlated with that of AaUFGT and AaANR (Fig. S3a, Table S2). In the "Tropical" spadix, AaCHS, AaF3H, AaDFR, AaANS, AaLAR, and AaANR showed expression patterns similar to that of AaMYB3, and AaCHI, AaF3′H, and AaUFGT were coexpressed with AabHLH1 (Fig. S3b). These results suggest that AaMYB3 may target several genes in the anthocyanin and PA biosynthetic pathways and activate PA biosynthesis in anthurium. AabHLH1 may play a role in the whole flavonoid biosynthetic pathway and is not specific to PA biosynthesis.

Discussion

PAs are colorless flavonoid polymers that accumulate in plants, and their biosynthesis is usually controlled by the MBW TF complex 5. In this study, we report the isolation and analysis of the R2R3-MYB gene AaMYB3 and the bHLH TF gene AabHLH1 from anthurium. The AaMYB3 protein is similar to the known PA regulators grape VvMYBPA2, Arabidopsis AtTT2, poplar PtMYB134, and persimmon DkMyb2, which share a short conserved VITKAx1RC motif 10. According to Zhu et al. 21, MYB PA regulators can be categorized into two subgroups: PA-clade 1 includes VvMYBPA1 and DkMyb4, and PA-clade 2 includes AtTT2, VvMYBPA2, PtMYB134, and DkMyb2. The two types of PA regulators display different expression patterns and recognize different types of motifs in the promoter regions of the PA pathway genes 21.
According to our phylogenetic analysis, AaMYB3 clustered in the PA-clade 2 subgroup 21, clearly distinct from the previously reported anthocyanin regulators AaMYB1 27 and AaMYB2 29. AabHLH1 is homologous to Arabidopsis AtTT8 and tobacco NtAn1, members of the bHLH TF subgroup IIIf, which is involved in the regulation of PA and flavonoid biosynthesis 8,49,50. Our amino acid sequence analysis suggested that AaMYB3 contains the bHLH interaction motif and that AabHLH1 contains the MYB interaction region. Therefore, it is possible that the two proteins physically interact. These results imply that these two TFs are involved in the PA biosynthetic pathway.

Fig. 6 Analysis of the function of AaMYB3 and AabHLH1 in anthurium. The content of total proanthocyanidins and anthocyanins, as well as the expression of AaMYB3, AabHLH1, and the flavonoid biosynthetic genes, in the developing spathes of the cultivars "Vitara" (a) and "Tropical" (b). The data are presented as the mean ± SD (n = 3). Values with different letters are significantly different according to Duncan's multiple range tests at the 5% level. c A proposed model for the regulation of proanthocyanidin and anthocyanin biosynthesis by AaMYB2, AaMYB3, and AabHLH1 in the anthurium spathe.

In a previous study, we showed that the expression pattern of AaMYB2 correlates strongly with anthocyanin accumulation 29. The expression pattern of AaMYB3 in the different tissues of the anthurium cultivar "Tropical" was similar to that of the anthocyanin regulator gene AaMYB2, which was predominantly expressed in the spathe and exhibited a positive correlation with anthocyanin accumulation. However, unlike that of AaMYB2, the expression of AaMYB3 exhibited a negative correlation with anthocyanin accumulation in the spathes of the different anthurium cultivars with various color phenotypes.
It should be noted that AaMYB3 is more strongly expressed in the green and white spathes, in which PAs accumulate but no anthocyanin was detected. This indicates that the expression pattern of AaMYB3 in anthurium may be associated with PA accumulation. This was confirmed in the developing "Vitara" spathes, in which the expression pattern of AaMYB3 closely matched the accumulation of PAs and correlated negatively with anthocyanin accumulation. Similar results have been described for the PA regulators in grapevine (VvMYBPA1 13) and coleus (SsMYB3 21). Because AaMYB3 is homologous to AtTT2, a complementation test of the Arabidopsis tt2 mutant was used to functionally characterize AaMYB3. In the Arabidopsis tt2 and tt8 mutants, the AtTT2-AtTT8-AtTTG1 ternary transcription complex fails to form, and the expression of the PA biosynthesis gene AtANR is not induced 8. The lack of PAs in the two mutants causes a yellow seed phenotype, and the overexpression of a homologous gene can restore the seed color. Although AaMYB3 clustered in PA-clade 2, the overexpression of AaMYB3 caused only a slight recovery of the tt2 mutant seed coat phenotype. This was not as marked as the effects of other PA-regulating MYBs, such as AtTT2 10 and VvMYBPA1 13. To analyze the function of AaMYB3 in the tt2 mutant, the expression of the genes in the PA pathway was examined in the leaves of the T3 transgenic AaMYB3-tt2 plants. The expression of the PA-specific AtANR was significantly upregulated compared with that in the tt2 mutant control. This implies that AaMYB3 activates the expression of AtANR to some degree. However, this degree of activation is insufficient to cause enough PA accumulation in the seed coat and, therefore, to completely complement the mutant phenotype. In the transcriptional regulation of PA biosynthesis in Arabidopsis, TT2 is responsible for the specific recognition of the promoter of AtANR in combination with TT8 8.
In the AaMYB3-tt2 transgenic line, AaMYB3 might not have combined well with AtTT8 to efficiently activate PA biosynthesis. The function of AaMYB3 was also characterized by its ectopic expression in tobacco. The overexpression of AaMYB3 alone produced light pink corolla limbs with greatly increased PA accumulation. This is consistent with the overexpression of other PA-related MYB regulators in tobacco, such as coleus SsMYB3 21, grapevine VvMYBPA1 51, and Malus crabapple McMYB12b 22. The functions of these MYBs are thought to be conserved across a diverse range of species. The overexpression of AaMYB3 in the corolla limbs of transgenic tobacco significantly upregulated the expression of the key genes in the anthocyanin (NtDFR and NtANS) and PA biosynthetic pathways (NtLAR and NtANR). This result is similar to the results obtained when SsMYB3 21 and VvMYBPA1 51 were overexpressed in transgenic tobacco flowers and shows that AaMYB3 is functionally homologous to the MYBs described above, which are involved in the regulation of PA biosynthesis. The expression pattern of AabHLH1 was not associated with either PA or anthocyanin accumulation in the various anthurium tissues, in the spathes of the different cultivars, or at the different developmental stages of the spathe. This is consistent with the expression of the anthocyanin-modulating PabHLH3 in sweet cherry 52 but differs from the bHLH-TF-controlled anthocyanin biosynthesis in other horticultural plants, such as the LcbHLHs in litchi 53 and DhbHLH1 in dendrobium 44. It can be hypothesized that the expression of AabHLH1 in anthurium is not directly involved in the accumulation of the PA or anthocyanin pigments. Our phylogenetic analysis indicates that AabHLH1 is homologous to AtTT8. Therefore, a complementation test of the Arabidopsis tt8 mutant was performed to functionally characterize AabHLH1. As expected, the overexpression of AabHLH1 in the tt8 mutant complemented the seed coat phenotype.
In the AabHLH1-tt8 transgenic line, AabHLH1 functioned as AtTT8 does, combining with AtTT2 and successfully activating the PA pathway. This suggests that AabHLH1 plays a role in PA biosynthesis. When AabHLH1 was overexpressed in tobacco, it caused no significant change in the phenotype or in the accumulation of anthocyanins or PAs in the corolla limbs. This outcome is similar to the phenomenon observed when tobacco was transformed with MrbHLH1 of Chinese bayberry: the transgenic tobacco line overexpressing MrbHLH1 showed a phenotype similar to that of the control [54], whereas the line overexpressing both MrbHLH1 and MrMYB1 accumulated significant amounts of anthocyanin throughout the whole plant [54]. Therefore, we hypothesize that AabHLH1 requires an appropriate MYB partner to participate in PA biosynthesis. A transgenic tobacco line overexpressing both AabHLH1 and AaMYB3 was obtained by crossing transgenic plants that separately overexpressed AaMYB3 and AabHLH1, as described by Xie et al. [55]. The offspring of the cross, which coexpressed AaMYB3 and AabHLH1, had much paler pink corolla limbs than the AaMYB3-ox line and strongly accumulated PAs, as well as total flavonoids and polyphenols. As in the AaMYB3-ox line, the tobacco anthocyanin and PA biosynthetic genes NtDFR, NtANS, NtLAR, and NtANR were strongly upregulated in the AaMYB3 + AabHLH1-ox line. Thus, in transgenic tobacco, AabHLH1 recognized AaMYB3 as its partner and enhanced the expression of AaMYB3 for efficient PA biosynthesis. The formation of the AaMYB3-AabHLH1 and AaMYB2-AabHLH1 protein complexes was confirmed using the Y2H and BiFC assays, providing further evidence of the involvement of AabHLH1 in PA biosynthesis via its physical interaction with AaMYB3. There was also evidence that AabHLH1 interacts with AaMYB2, which was identified as an anthocyanin regulator in anthurium in our previous study [29]. Therefore, AabHLH1 may also play a role in anthocyanin biosynthesis.
The PA regulatory functions of AaMYB3 and AabHLH1 were characterized by their exogenous expression in Arabidopsis and tobacco; however, the regulatory mechanism controlling PA production in anthurium itself remains unclear. Because AaMYB3 is primarily expressed in the spathe, our gene expression analysis focused on spathe tissues. In the red spathes of the cultivars "Vitara" and "Tropical", the expression pattern of AaMYB3 was similar to that of AaCHS (r = 0.845**, p < 0.01), AaDFR (r = 0.924**, p < 0.01), AaF3H (r = 0.842**, p < 0.01), AaANS (r = 0.623**, p < 0.01), AaLAR (r = 0.904**, p < 0.01), and AaANR (r = 0.660**, p < 0.01) and correlated well with PA accumulation (r = 0.754**, p < 0.01). This implies that AaMYB3 regulates the early biosynthetic genes in the flavonoid pathway (AaCHS and AaF3H), the late biosynthetic genes AaDFR and AaANS, and the PA-specific genes AaLAR and AaANR. Similarly, the persimmon PA regulator DkMyb2 upregulated the expression of DkCHS, DkDFR, DkANS, DkLAR, and DkANR in transgenic persimmon calluses [56]. In contrast, the expression pattern of AabHLH1 was similar to that of AaF3H (r = 0.645**, p < 0.01) in the spathes of the two cultivars. A previous study of the expression of the flavonoid pathway genes suggested that AaF3H and AaANS are coregulated, while AaDFR and AaF3H are separately regulated [23,26]. Our previous study suggested that AaMYB2 regulates the expression of AaF3H and AaANS, and possibly AaCHS, in the anthocyanin biosynthetic pathway of anthurium, whereas the expression pattern of AaMYB2 differed dramatically from that of AaDFR and AaF3H [29]. In this study, the gene expression analysis suggested that AaMYB3 specifically coregulates AaDFR, AaLAR, and AaANR expression in the PA pathway, while AabHLH1 is probably involved in the regulation of AaF3H. These results extend our understanding of the regulation of flavonoid biosynthesis in anthurium.
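For readers reproducing the co-expression analysis above, Pearson's r between an expression series and a pigment-accumulation series can be computed in a few lines; this is a minimal sketch in which the numeric values are hypothetical placeholders, not data from this study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical relative-expression values across five spathe stages
aamyb3 = [1.0, 2.1, 3.9, 6.0, 8.2]
aadfr = [0.8, 1.9, 4.2, 5.8, 8.5]
print(pearson_r(aamyb3, aadfr))  # close to 1 for strongly co-varying series
```

In practice one would also report a significance level (the p-values above), e.g. via scipy.stats.pearsonr, which returns both the coefficient and the two-sided p-value.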
These results suggest that AaMYB3 functions as a negative regulator of anthocyanin accumulation and a positive regulator of PA accumulation in the anthurium spathe. This may occur through the upregulation of the genes encoding the enzymes specific to PA biosynthesis, such as AaLAR and AaANR, which accelerates the reduction of leucocyanidin and anthocyanidin, thereby promoting the biosynthesis of PAs and repressing the accumulation of anthocyanins. Based on the analyses described above, we propose a model of how anthocyanin and PA accumulation are affected by the anthocyanin regulator AaMYB2 and the PA regulator AaMYB3 in the anthurium spathe (Fig. 6c). In the red, pink, and purple spathes, in which anthocyanins and PAs accumulate simultaneously, both AaMYB2 and AaMYB3 are expressed, and the two TFs coregulate the expression of AaCHS, AaF3H, and AaANS, while AaMYB3 specifically activates AaDFR, AaLAR, and AaANR expression. Therefore, anthocyanins and PAs accumulate together in the spathe. In the white and green spathes, in which only PAs accumulate, the expression of AaMYB2 is negligible, and AaMYB3 activates the PA biosynthetic pathway, causing the accumulation of PAs. AabHLH1 interacts with both AaMYB3 and AaMYB2 in this transcriptional regulation process.
A Note on "The Ordering of Portfolios in Terms of Mean and Variance"

...for some real-valued function u(x)? Chipman devotes the larger part of his paper to answering this question for the cases of (i) the family 𝒩 of normal distributions with mean μ and variance σ² and (ii) the family 𝒯θ of two-point distributions attaching probability θ to a and 1 − θ to b, where θ is fixed at some value in the open interval (0, 1). The members of 𝒩 can be characterized in terms of their means μ and standard deviations σ while, with θ fixed, the members of 𝒯θ can be characterized in terms of their means μ and "signed standard deviations" s = sgn(b − a)·σ. Hence, 𝒩 is a two-parameter family and, with θ fixed, the corresponding 𝒯θ is a two-parameter family. For the family 𝒩 and the special case θ = 1/2 of two-point even-chance distributions, the equation specializes to
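As background for the two-parameter characterization of the normal family, the standard identity (a textbook result, not part of the note itself) is: if X is normally distributed with mean μ and variance σ², then

```latex
\mathbb{E}\,u(X) \;=\; \int_{-\infty}^{\infty} u(\mu + \sigma z)\,\frac{e^{-z^{2}/2}}{\sqrt{2\pi}}\,dz \;=\; U(\mu,\sigma),
```

so the expected utility of any member of the family depends on the distribution only through the pair (μ, σ), which is what makes an ordering of portfolios in terms of mean and variance possible at all.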
# exchanger/views.py (from saczuac/smart-currencyer)
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.contrib.auth.models import User
from rest_framework import viewsets
from serializers import UserSerializer, CurrencySerializer
from serializers import TransactionSerializer, WalletSerializer
from rest_framework import permissions
from models import Currency, Transaction, Wallet
from django_filters import rest_framework
from rest_framework.decorators import detail_route, list_route
from rest_framework.response import Response
from django.views import generic
from django.shortcuts import render


class UserViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    permission_classes = (permissions.IsAuthenticated,)


class CurrencyViewSet(viewsets.ModelViewSet):
    queryset = Currency.objects.all()
    serializer_class = CurrencySerializer
    permission_classes = (permissions.IsAuthenticated,)

    @list_route()
    def havent_user(self, request):
        # Currencies for which the requesting user has no wallet yet
        user_currencies = Wallet.objects.filter(
            user=request.user).values_list('currency', flat=True)
        currencies = Currency.objects.exclude(pk__in=user_currencies)
        if currencies:
            serializer = self.get_serializer(currencies, many=True)
            return Response(serializer.data)
        return Response([])


class TransactionViewSet(viewsets.ModelViewSet):
    queryset = Transaction.objects.all()
    serializer_class = TransactionSerializer
    permission_classes = (permissions.IsAuthenticated,)


class WalletViewSet(viewsets.ModelViewSet):
    queryset = Wallet.objects.all()
    serializer_class = WalletSerializer
    permission_classes = (permissions.IsAuthenticated,)
    filter_backends = (rest_framework.DjangoFilterBackend,)
    filter_fields = ('user__username',)

    def get_queryset(self):
        user = self.request.user
        return Wallet.objects.filter(user=user)

    @detail_route(methods=['post'])
    def get_wallet_of_user(self, request, pk=None):
        user = User.objects.get(pk=pk)
        wallets_of = Wallet.objects.filter(
            user=user).filter(currency=request.data['id'])
        if wallets_of:
            serializer = self.get_serializer(wallets_of, many=True)
            return Response(serializer.data)
        return Response(
            {'detail': 'The user has no wallet of this currency'},
            status=400)


class AppView(generic.View):
    template_name = "exchanger/index.html"

    def get(self, request, *args, **kwargs):
        ctx = {}
        return render(request, self.template_name, ctx)
/**
 * A {@link SocketChannel} which is using Old-Blocking-IO
 */
public class OioSocketChannel extends OioByteStreamChannel implements SocketChannel {

    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(OioSocketChannel.class);

    private final Socket socket;
    private final OioSocketChannelConfig config;

    /**
     * Create a new instance with a new {@link Socket}
     */
    public OioSocketChannel() {
        this(new Socket());
    }

    /**
     * Create a new instance from the given {@link Socket}
     *
     * @param socket the {@link Socket} which is used by this instance
     */
    public OioSocketChannel(Socket socket) {
        this(null, socket);
    }

    /**
     * Create a new instance from the given {@link Socket}
     *
     * @param parent the parent {@link Channel} which was used to create this instance.
     *               This can be null if the {@link} has no parent as it was created by your self.
     * @param socket the {@link Socket} which is used by this instance
     */
    public OioSocketChannel(Channel parent, Socket socket) {
        super(parent);
        this.socket = socket;
        config = new DefaultOioSocketChannelConfig(this, socket);

        boolean success = false;
        try {
            if (socket.isConnected()) {
                activate(socket.getInputStream(), socket.getOutputStream());
            }
            socket.setSoTimeout(SO_TIMEOUT);
            success = true;
        } catch (Exception e) {
            throw new ChannelException("failed to initialize a socket", e);
        } finally {
            if (!success) {
                try {
                    socket.close();
                } catch (IOException e) {
                    logger.warn("Failed to close a socket.", e);
                }
            }
        }
    }

    @Override
    public ServerSocketChannel parent() {
        return (ServerSocketChannel) super.parent();
    }

    @Override
    public OioSocketChannelConfig config() {
        return config;
    }

    @Override
    public boolean isOpen() {
        return !socket.isClosed();
    }

    @Override
    public boolean isActive() {
        return !socket.isClosed() && socket.isConnected();
    }

    @Override
    public boolean isInputShutdown() {
        return super.isInputShutdown();
    }

    @Override
    public boolean isOutputShutdown() {
        return socket.isOutputShutdown() || !isActive();
    }

    @UnstableApi
    @Override
    protected final void doShutdownOutput() throws Exception {
        shutdownOutput0();
    }

    @Override
    public ChannelFuture shutdownOutput() {
        return shutdownOutput(newPromise());
    }

    @Override
    protected int doReadBytes(ByteBuf buf) throws Exception {
        if (socket.isClosed()) {
            return -1;
        }
        try {
            return super.doReadBytes(buf);
        } catch (SocketTimeoutException ignored) {
            return 0;
        }
    }

    @Override
    public ChannelFuture shutdownOutput(final ChannelPromise promise) {
        EventLoop loop = eventLoop();
        if (loop.inEventLoop()) {
            shutdownOutput0(promise);
        } else {
            loop.execute(new Runnable() {
                @Override
                public void run() {
                    shutdownOutput0(promise);
                }
            });
        }
        return promise;
    }

    private void shutdownOutput0(ChannelPromise promise) {
        try {
            shutdownOutput0();
            promise.setSuccess();
        } catch (Throwable t) {
            promise.setFailure(t);
        }
    }

    private void shutdownOutput0() throws IOException {
        socket.shutdownOutput();
    }

    @Override
    public InetSocketAddress localAddress() {
        return (InetSocketAddress) super.localAddress();
    }

    @Override
    public InetSocketAddress remoteAddress() {
        return (InetSocketAddress) super.remoteAddress();
    }

    @Override
    protected SocketAddress localAddress0() {
        return socket.getLocalSocketAddress();
    }

    @Override
    protected SocketAddress remoteAddress0() {
        return socket.getRemoteSocketAddress();
    }

    @Override
    protected void doBind(SocketAddress localAddress) throws Exception {
        SocketUtils.bind(socket, localAddress);
    }

    @Override
    protected void doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception {
        if (localAddress != null) {
            SocketUtils.bind(socket, localAddress);
        }

        boolean success = false;
        try {
            SocketUtils.connect(socket, remoteAddress, config().getConnectTimeoutMillis());
            activate(socket.getInputStream(), socket.getOutputStream());
            success = true;
        } catch (SocketTimeoutException e) {
            ConnectTimeoutException cause = new ConnectTimeoutException("connection timed out: " + remoteAddress);
            cause.setStackTrace(e.getStackTrace());
            throw cause;
        } finally {
            if (!success) {
                doClose();
            }
        }
    }

    @Override
    protected void doDisconnect() throws Exception {
        doClose();
    }

    @Override
    protected void doClose() throws Exception {
        socket.close();
    }

    @Override
    protected boolean checkInputShutdown() {
        if (isInputShutdown()) {
            try {
                Thread.sleep(config().getSoTimeout());
            } catch (Throwable e) {
                // ignore
            }
            return true;
        }
        return false;
    }

    @Override
    protected void setReadPending(boolean readPending) {
        super.setReadPending(readPending);
    }
}
Origin of the chemical ligation concept for the total synthesis of enzymes (proteins)

As an undergraduate, I was fortunate enough to study biochemistry in the Chemistry Department at Victoria University of Wellington in my native New Zealand. The high quality of that department and the many excellent teachers on the faculty gave me an integrated perspective on biochemistry and chemistry. From such a context, it was natural for me to have always considered proteins to be part of the science of chemistry, a notion reinforced by the then-recent pioneering studies of Sanger that elucidated the covalent molecular structure of insulin, Anfinsen's work showing that the folded structure of a protein is determined by the amino acid sequence of its polypeptide chain, and the first X-ray structures of enzyme molecules by Phillips (lysozyme) and Harker (ribonuclease). It was also at a time when the first true chemistry of enzymes was coming to fruition, with the work of Moore and Stein on ribonuclease A that elucidated the role of the protein molecule in the chemical mechanism of catalysis by that enzyme. Consequently, I became determined to follow a scientific career focused on understanding the chemistry of enzyme catalysis. With that goal in mind, as an undergraduate I set out to systematically equip myself with all the techniques that I would need in order to design and build enzymes using chemistry. As a summer research project, I set up and brought into operation the first automated amino acid analyzer in the country. In my M.Sc. graduate thesis research, at Massey University in New Zealand, I used mass spectrometry of permethylated peptides to determine the amino acid sequences of a mixture of unknown peptides isolated from bovine casein. At U.C. Berkeley, I studied for a Ph.D.
in Organic Chemistry, and in my thesis research used NMR to measure the rotational correlation time of the lysine side chain in the active site of horse liver alcohol dehydrogenase (LADH), 13C-labeled by site-specific chemical modification. By then my list of techniques ranged from organic chemistry through traditional protein chemistry, such as chemical modification and amino acid analysis, to peptide mass spectrometry and protein NMR. It was time to learn how to make proteins using chemistry. In the Fall of 1974, I joined the Merrifield Laboratory at the Rockefeller University as a research associate, for post-doctoral studies on chemical protein synthesis. My goal was to use solid phase peptide synthesis to site-specifically label individual atoms in the catalytic side chains of RNase A with 13C NMR probe nuclei, and to systematically explore the electronic properties of those moieties in the act of catalysis. Merrifield had not long before reported a total synthesis of RNase A by stepwise solid phase synthesis, and had also carried out a series of semisynthetic studies on the chemistry of enzyme catalysis based on the RNase S system. At the Rockefeller, I became deeply involved in fundamental studies of solid phase peptide synthesis. These studies consisted of elucidating the mechanisms of a series of chronic chemical side reactions in SPPS, including Nα-trifluoroacetylation, acylation-resistant deletion peptide formation, and the formation of terminated peptides with free α-amino groups. In collaboration with others in the Merrifield lab, notably Alex Mitchell, I was able to minimize these and other side reactions and render them no longer significant.
I also optimized the synthesis of aminomethyl-resin and carried out studies of the fundamental physico-chemical properties of peptide-resins, showing that the peptide-resin undergoes super-elastic swelling: the longer the peptide that is synthesized, the greater the swelling of the peptide-resin beads, meaning that essentially unlimited amounts of even long peptides could be made by solid phase synthesis. These studies have been summarized, and culminated in the understanding that solid phase peptide synthesis owes its almost universal applicability to the solvation-enhancing properties of the organic solvent-swollen interpenetrating polymer network: counter-intuitively, attachment to the solid phase resin beads enhances the solubility of maximally protected peptide chains! After leaving the Rockefeller University, I joined the research faculty at the California Institute of Technology, where I became expert in determining the covalent structure of proteins through automated Edman degradation combined
What is the risk of tickborne diseases to UK pets?

At this time of year many animal owners will be finding ticks on their pets and themselves, highlighting the One Health risk that tickborne diseases pose.1 The study by Wright and others,1 summarised on p 514 of this week's issue of Vet Record, which uses data collected by Public Health England's (PHE) Tick Surveillance Scheme (TSS) (www.gov.uk/guidance/tick-surveillance-scheme), stands out as a fantastic example of what can be achieved when a One Health surveillance approach is adopted.1 The TSS is well supported by the veterinary profession, who have submitted 46.8 per cent of all ticks received by the scheme.2 Their work provides a solid evidence base that enables the veterinary profession to offer sound advice to clients and the general public. It is important that such coordinated surveillance schemes continue to be supported and funded. However, there is a large hole in our knowledge of tickborne diseases in companion animals and the actual incidence of clinical disease. Wright and others1 describe where clinical examinations for ticks should be focused, the range of tick species found and the seasonality of exposure.1 By combining their findings with other work describing the geographic distribution of ticks, the study findings contribute to our understanding of expanding tick distribution in the UK.2-4 Together these studies describe the native tick species that are most prevalent in the UK (predominantly Ixodes ricinus and Ixodes hexagonus). With such information available on geographical and seasonal spread, vets should no longer say: 'This is the wrong time of year for ticks' or 'This isn't a tick area.' Pet owners are one and a half times more likely to be bitten by a tick than non-pet owners.5 This risk extends beyond rural areas, as it is now well established that exposure can
This week, the people of Berks County found something odd in their mailboxes — lies about their neighbor, Manan Trivedi. Congressman Jim Gerlach went to Washington and stopped listening to the people of Berks, voting again and again against them. The only way serial prevaricator Jim Gerlach can keep his cushy seat in Washington is to cover up that failed record with lies again and again and again. 1) In Gerlach’s campaign’s out-of-touch world, Trivedi filed his candidacy for Congress from a PO Box because he wasn’t living in the district at the time. In reality, Trivedi was happily renting a home in Reading after returning home from the service, before he filed his candidacy. Trivedi filed his campaign papers from a mailing address just as many other candidates do, including Jim Gerlach [http://query.nictusa.com/pdf/975/28930738975/28930738975.pdf#navpanes=0]. 2) In Gerlach’s campaign’s out-of-touch world, Manan falsely says he was born and raised in the district. In reality, Trivedi was born in Reading and raised in Fleetwood, PA, both of which were in the 6th District before Jim Gerlach and his buddies in Harrisburg nonsensically gerrymandered the 6th District, rendering it completely unrecognizable. 3) In Gerlach’s campaign’s out-of-touch world, Manan is not employed at Reading Hospital. In reality, while Manan is currently unable to put in the hours his patients deserve and took a leave of absence for the sake of the hospital, he remains employed as an internist at Reading Hospital.
# -*- coding: UTF-8 -*-
#######################################################################
# ----------------------------------------------------------------------------
# "THE BEER-WARE LICENSE" (Revision 42):
# @Daddy_Blamo wrote this file. As long as you retain this notice you
# can do whatever you want with this stuff. If we meet some day, and you think
# this stuff is worth it, you can buy me a beer in return. - Muad'Dib
# ----------------------------------------------------------------------------
#######################################################################

# Addon Name: Placenta
# Addon id: plugin.video.placenta
# Addon Provider: Mr.Blamo

import re, traceback, urllib, urlparse, base64

import requests

from resources.lib.modules import client
from resources.lib.modules import cleantitle
from resources.lib.modules import log_utils


class source:
    def __init__(self):
        self.priority = 1
        self.language = ['en']
        self.domains = ['bobmovies.net', 'bobmovies.online']
        # Previous mirrors: https://bobmovies.mrunlock.trade, https://mrunlock.bid/,
        # https://mrunlock.stream/, http://gomov.download/, MRUNLOCK.INFO
        self.base_link = 'https://bobmovies.online/'
        self.goog = 'https://www.google.com/search?q=bobmovies.online+'

    def movie(self, imdb, title, localtitle, aliases, year):
        try:
            scrape = cleantitle.get_simple(title)
            google = '%s%s' % (self.goog, scrape.replace(' ', '+'))
            get_page = requests.get(google).content
            log_utils.log('Scraper bobmovies - Movie - title: ' + str(title))
            log_utils.log('Scraper bobmovies - Movie - search_id: ' + str(scrape))
            match = re.compile('<a href="(.+?)"', re.DOTALL).findall(get_page)
            for url1 in match:
                if '/url?q=' in url1:
                    if self.base_link in url1 and 'google' not in url1:
                        url2 = url1.split('/url?q=')[1]
                        url2 = url2.split('&amp')[0]
                        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}
                        html = requests.get(url2, headers=headers, timeout=5).content
                        results = re.compile('<div class="page_film_top full_film_top">.+?<h1>(.+?)</h1>.+?<td class="name">Quality:</td><td><a href=.+?">(.+?)</a>.+?<td class="name">Year:</td><td><a href=.+?">(.+?)</a>', re.DOTALL).findall(html)
                        for item_title, qual, date in results:
                            if not scrape == cleantitle.get_simple(item_title):
                                continue
                            if not year in date:
                                continue
                            log_utils.log('Scraper bobmovies - Movie - url2: ' + str(url2))
                            return url2
            return
        except:
            failure = traceback.format_exc()
            log_utils.log('BobMovies - Exception: \n' + str(failure))
            return

    def sources(self, url, hostDict, hostprDict):
        try:
            if url is None:
                return
            sources = []
            headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}
            html = client.request(url, headers=headers)
            vidpage = re.compile('id="tab-movie".+?data-file="(.+?)"', re.DOTALL).findall(html)
            for link in vidpage:
                if 'trailer' not in link.lower():
                    link = urlparse.urljoin(self.base_link, link)
                    sources.append({'source': 'DirectLink', 'quality': 'SD', 'language': 'en', 'url': link, 'info': [], 'direct': True, 'debridonly': False})
            other_links = re.findall('data-url="(.+?)"', html)
            for link in other_links:
                # Both branches of the original if/else appended the same dict,
                # so only the protocol-relative fix-up is conditional.
                if link.startswith('//'):
                    link = 'http:' + link
                sources.append({'source': 'DirectLink', 'quality': 'SD', 'language': 'en', 'url': link, 'info': [], 'direct': False, 'debridonly': False})
            return sources
        except:
            failure = traceback.format_exc()
            log_utils.log('BobMovies - Exception: \n' + str(failure))
            return

    def resolve(self, url):
        return url
package mundosk_libraries.java_websocket.server;

import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.List;

import mundosk_libraries.java_websocket.WebSocketAdapter;
import mundosk_libraries.java_websocket.WebSocketImpl;
import mundosk_libraries.java_websocket.drafts.Draft;
import mundosk_libraries.java_websocket.server.WebSocketServer.WebSocketServerFactory;

public class DefaultWebSocketServerFactory implements WebSocketServerFactory {

    @Override
    public WebSocketImpl createWebSocket(WebSocketAdapter a, Draft d) {
        return new WebSocketImpl(a, d);
    }

    @Override
    public WebSocketImpl createWebSocket(WebSocketAdapter a, List<Draft> d) {
        return new WebSocketImpl(a, d);
    }

    @Override
    public SocketChannel wrapChannel(SocketChannel channel, SelectionKey key) {
        return channel;
    }

    @Override
    public void close() {
    }
}
# urls.py (from vasundhara7/College-EWallet)
from django.conf.urls import url
from django.urls import path, include, re_path

from . import views

urlpatterns = [
    url('main/', views.main, name="student.main"),
    path('paysub/<str:amount>', views.paysub, name="student.paysub"),
    # url(r'result/', views.librarydues, name="result"),
]
Low temperature interconnect technology using surface activated bonding (SAB) method

SAB is a process for bonding surfaces that have been cleaned and activated by ion beam bombardment or plasma irradiation. The concept is based on the reactivity of atomically clean solid surfaces and the formation of chemical bonds on contact between such clean, activated surfaces. The bonding procedure consists of cleaning followed by contact in ultra-high vacuum or in a certain ambient atmosphere. The highly activated surfaces allow bonding at a lower temperature than conventional bonding processes. This paper reviews the development and current status of the SAB process. A high-density bumpless interconnect of 100,000 Cu electrodes (3 µm in diameter, 10 µm pitch) formed at room temperature, and its application to the assembly of a flash memory card, are demonstrated. Two new processes, using a nano adhesion layer and a sequential activation process, are proposed for bonding ionic materials such as SiO2, glass and LiNbO3.
// Bind values based on the position of the element
public void bind(Tweet tweet) {
    tvBody.setText(tweet.body);
    tvScreenName.setText(tweet.user.screeName);
    tvStamp.setText(tweet.getFormattedTimestamp());
    Glide.with(context).load(tweet.user.profileImageUrl).into(ivProfileImage);
}
What kind of restoration have you done on the KRESS building over the last few years, Robert? Beginning in 1896, Samuel H. Kress built hundreds of architecturally distinguished buildings around the country for his chain of 5-10-25¢ stores. The chain ceased operating in 1980. The Fresno store dates from the 1920s. In the 1950s the upper floors were closed off and the exterior was sheathed, hiding the Kress decor. The ground floor continued as a retail store. In 2010 I had that covering removed and restored the elaborate exterior. What are your plans for the building now, Robert? I intend to fill the upper floors of the Kress Building with new businesses. Each floor is 11,000 square feet. Possibilities include offices, retail, entertainment, education, fitness and restaurants. The building will have enhanced access and street entrances from Fulton Mall and the alley to the underground city parking. CONTACT ME with your ideas for its next life. With the building’s original façade brought back to the surface, how does that old style of architecture fit in with the Fulton Mall today? It’s a beautiful reminder of Fresno’s history made functional and economic for today’s user. It radiates a friendly, inviting warmth to all who walk by. On the ground floor the retailer Bluebird sells women’s fashions for value-conscious shoppers. Besides KRESS, what other Fresno-area buildings have you worked on or own, Robert? I own the adjacent building at 1130 Fulton, containing All 4-U, a women’s clothing retailer, and the corner building at 1108 Fulton, with Payless Shoes. On the Mariposa Mall side, small restaurant spaces form a food court. What are the main things that go into modernizing or retrofitting old buildings, Robert? Location, economics, vision, and community cooperation. I have an imaginative architect and contractor. Follow the building code requirements. What kinds of obstacles might you run into in that kind of work, Robert? Delays for permits, financing and materials.
Resolving the unknowns that reveal themselves only after beginning the work causes further delays. What kind of Fulton Mall do you expect to see if city planners are successful in opening the walkway back up to traffic, Robert? Over the next decade, a revitalized downtown will become the place to be, because of the thousands more daily employees, shoppers, tourists, travelers, vacationers, researchers, students, business people and residents coming there 24/7. Investors from all over the country will start new businesses. I envision new construction for modernization and replacement of obsolete buildings. Think about the transition of Fresno from the time when the Eckbo-designed pedestrian mall was built 50 years ago, to what it has become today. In the 1960s, Fresno had a population of 150,000. Fulton Street boasted architecturally significant pre-WW2 buildings. Now the Fresno population approaches 600,000 and comprises many nationalities, particularly Asian and Latin American, who weren’t here then. It is still the heart of California’s major agricultural region but now with many more ties to the state and overseas. As these grow over the next fifty years, Downtown Fresno can become the San Joaquin Valley’s business hub, and a cultural center for this multi-national population. How do you see downtown Fresno changing with the build-out of high-speed rail, Robert? High-speed rail is a game-changer for Fresno. Travel time to Fresno from the Bay Area or Southern California will be cut in half. Fresnans will be able to make easy round trips to these cities in a day. Angelenos like me will no longer be forced to drive four hours each way. What do you teach as an instructor at UCLA, Robert? I am a retired researcher and lecturer from the Schools of Engineering and Public Health. My currently active connection to UCLA is as a sea-kayaking instructor for the recreation program. What was your first job and what did you learn from it?
My first job was at the RAND Corporation in Santa Monica, working on computer models for defense logistics and regional hospital planning. Subsequently, at the Jet Propulsion Laboratory and Caltech, I investigated applications of high temperature solar energy and synthetic fuels. At the Information Sciences Institute, affiliated with USC, I developed computerized commerce networks. I learned that by giving bright people freedom to create we could achieve exceptional results. What are your roots in the San Joaquin Valley, Robert? I came to Fresno in the ’80s on the recommendation of a friend who was bringing overseas investors to Fresno. I purchased the buildings on the Fulton Mall at that time, and have been coming to Fresno regularly. As an outsider who has seen revitalization first hand in Santa Monica, Los Angeles and Ventura County, I know that with persistence, it will work in Fresno. I listen to local people. The people of Fresno have been good to me.
package com.vaadin.tutorial.issues.webapp.security.password;

public interface PasswordEncoder {

    String encode(String password);
}
Type 99 155 mm self-propelled howitzer

History

The development of the Type 99 self-propelled howitzer began in 1985 in order to replace the old Type 75 self-propelled howitzer. The new self-propelled artillery would use an L52 155 mm gun instead of the old L30 155 mm gun and would also mount the latest fire-control system. Mitsubishi Heavy Industries was tasked with designing the chassis, and the gun would be manufactured by Japan Steel Works. The design stage cost 5 billion yen and was completed in 1992. After various technical and practical tests, the first vehicle was delivered to the training division of the Japan Ground Self-Defense Force in 1999.

Overview

Research and development started in 1985 on a successor to the older Type 75 self-propelled 155 mm howitzer. Japan Steel Works was the primary contractor and developed the main gun and the turret. The Type 99 uses a modified chassis from the Mitsubishi Type 89 IFV, lengthened with an additional road wheel. It has a 52-caliber barrel, compared with the 39-caliber barrel of the Type 75 self-propelled howitzer. Secondary armament consists of a roof-mounted 12.7 mm machine gun fitted with a shield. The armor provides protection against small arms fire and artillery shell splinters. The vehicle is powered by a diesel engine developing 600 horsepower. A traveling lock is provided at the front of the hull; it folds back onto the glacis plate when not in use. The Type 99 self-propelled howitzer is resupplied from the Type 99 ammunition resupply vehicle.
The interface between medical oncology and supportive and palliative cancer care. Traditionally, medical oncology has focused on the active period of diagnosis, treatment and follow-up of cancer patients, and palliative medicine on the pre-terminal and end-of-life phases. Palliative medicine physicians have particular expertise in communication and symptom control, for example in pain management. Medical oncologists also need excellent communication skills and knowledge of supportive care issues, such as the management of emesis, bone marrow suppression, mucositis, neuropathy, and symptoms created by treatment. This article examines the interface between medical oncology and supportive and palliative care to emphasize how each can benefit from the other.
(Washington, DC) – The World Bank’s internal watchdog should investigate whether bank projects are contributing to forced labor in Uzbekistan, the Cotton Campaign said today. The Cotton Campaign, a coalition of human rights, labor, investor, and business organizations dedicated to ending forced labor in the cotton sector of Uzbekistan, echoed calls that independent Uzbek groups made in a November 2014 letter to the Inspection Panel. The World Bank Inspection Panel will decide by December 19 whether to investigate whether existing bank projects benefit the forced labor system under which Uzbek authorities forcibly mobilize more than a million citizens each year to pick cotton. The panel had delayed that decision by a year to give the World Bank time to establish labor standards monitoring and address the policies underlying forced labor and child labor. Over the last year, though, the World Bank has made little progress in addressing labor abuses in Uzbekistan. It has not worked with the Uzbek government to address the root causes of forced labor. The bank has relied on project-level mitigation measures despite protests from independent Uzbek groups that such measures would not prevent bank financing from being linked to the government’s centralized system of forced labor. The World Bank has expanded its agriculture portfolio, including an investment in an irrigation project that will benefit the cotton industry. The Uzbek government’s forced labor system for cotton production is a gross violation of international law. As activists in Uzbekistan have documented, the Uzbek government’s system of coercing farmers to cultivate cotton and forcing adults and children to harvest the crop continued for the 2014 season. Authorities also suppressed any attempts by citizens to report on these abuses.
Following international pressure, the government reduced the number of young children sent to harvest cotton in 2014, as it had done in 2013, but increased the use of older children and adults. The forced labor of adults disrupts the delivery of essential services nationwide, as authorities mobilize public sector workers – including doctors, nurses, and teachers – to fill quotas. World Bank President Jim Kim has emphasized the importance of the World Bank learning from its mistakes. Now that the Inspection Panel has identified a plausible link between bank loans and forced labor in Uzbekistan, it is imperative that the bank fully investigate what went wrong to prevent similar problems in the future, the Cotton Campaign said.
package com.sihenzhang.crockpot.block;

import com.google.common.collect.ImmutableList;
import com.sihenzhang.crockpot.CrockPotConfig;
import com.sihenzhang.crockpot.CrockPotRegistry;
import mcp.MethodsReturnNonnullByDefault;
import net.minecraft.block.Block;
import net.minecraft.block.BlockState;
import net.minecraft.block.Blocks;
import net.minecraft.block.CropsBlock;
import net.minecraft.item.BlockItem;
import net.minecraft.item.Item;
import net.minecraft.state.IntegerProperty;
import net.minecraft.state.StateContainer;
import net.minecraft.state.properties.BlockStateProperties;
import net.minecraft.util.IItemProvider;
import net.minecraft.util.ResourceLocation;
import net.minecraft.util.math.BlockPos;
import net.minecraft.util.math.shapes.ISelectionContext;
import net.minecraft.util.math.shapes.VoxelShape;
import net.minecraft.world.IBlockReader;
import net.minecraft.world.World;
import net.minecraft.world.server.ServerWorld;
import net.minecraftforge.common.ForgeHooks;
import net.minecraftforge.common.IPlantable;
import net.minecraftforge.registries.ForgeRegistries;

import javax.annotation.ParametersAreNonnullByDefault;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

@ParametersAreNonnullByDefault
@MethodsReturnNonnullByDefault
public class CrockPotUnknownCropsBlock extends CrockPotCropsBlock {
    public static final IntegerProperty AGE = BlockStateProperties.AGE_1;
    private static final VoxelShape[] SHAPE_BY_AGE = {
            Block.box(0.0D, 0.0D, 0.0D, 16.0D, 2.0D, 16.0D),
            Block.box(0.0D, 0.0D, 0.0D, 16.0D, 2.0D, 16.0D)
    };
    // Lazily built, immutable candidate list; double-checked locking guards the build.
    private static final Lock LOCK = new ReentrantLock();
    private static volatile List<Block> CROPS_BLOCKS = null;

    @Override
    public IntegerProperty getAgeProperty() {
        return AGE;
    }

    @Override
    public int getMaxAge() {
        return 1;
    }

    @Override
    protected void createBlockStateDefinition(StateContainer.Builder<Block, BlockState> builder) {
        builder.add(AGE);
    }

    @Override
    public VoxelShape getShape(BlockState state, IBlockReader worldIn, BlockPos pos, ISelectionContext context) {
        return SHAPE_BY_AGE[state.getValue(this.getAgeProperty())];
    }

    protected static List<Block> getCropsBlocks() {
        if (CROPS_BLOCKS == null) {
            LOCK.lock();
            try {
                if (CROPS_BLOCKS == null) {
                    List<Block> tmp = new ArrayList<>();
                    for (String key : CrockPotConfig.UNKNOWN_SEEDS_CROPS_LIST.get()) {
                        Item item = ForgeRegistries.ITEMS.getValue(new ResourceLocation(key));
                        BlockItem blockItem = item instanceof BlockItem ? (BlockItem) item : null;
                        if (blockItem != null && blockItem.getBlock() instanceof IPlantable) {
                            tmp.add(blockItem.getBlock());
                        } else {
                            Block block = ForgeRegistries.BLOCKS.getValue(new ResourceLocation(key));
                            if (block != null && block != Blocks.AIR) {
                                tmp.add(block);
                            }
                        }
                    }
                    CROPS_BLOCKS = ImmutableList.copyOf(tmp);
                }
            } finally {
                LOCK.unlock();
            }
        }
        return CROPS_BLOCKS;
    }

    @Override
    public void randomTick(BlockState state, ServerWorld worldIn, BlockPos pos, Random random) {
        if (!worldIn.isAreaLoaded(pos, 1)) {
            return;
        }
        if (worldIn.getRawBrightness(pos, 0) >= 9) {
            float growthChance = getGrowthSpeed(this, worldIn, pos);
            if (ForgeHooks.onCropsGrowPre(worldIn, pos, state, random.nextInt((int) (25.0F / growthChance) + 1) == 0)) {
                worldIn.setBlock(pos, getCropsBlocks().get(random.nextInt(getCropsBlocks().size())).defaultBlockState(), 2);
                ForgeHooks.onCropsGrowPost(worldIn, pos, state);
            }
        }
    }

    @Override
    public void growCrops(World worldIn, BlockPos pos, BlockState state) {
        Block block = getCropsBlocks().get(worldIn.random.nextInt(getCropsBlocks().size()));
        int age = this.getBonemealAgeIncrease(worldIn) - 1;
        if (block instanceof CrockPotDoubleCropsBlock) {
            CrockPotDoubleCropsBlock cropsBlock = (CrockPotDoubleCropsBlock) block;
            int maxAge = cropsBlock.getMaxGrowthAge(cropsBlock.defaultBlockState());
            if (age > maxAge) {
                worldIn.setBlock(pos, cropsBlock.getStateForAge(maxAge), 2);
                if (worldIn.isEmptyBlock(pos.above())) {
                    worldIn.setBlock(pos.above(), cropsBlock.getStateForAge(age), 2);
                }
            } else {
                worldIn.setBlock(pos, cropsBlock.getStateForAge(age), 2);
            }
        } else if (block instanceof CropsBlock) {
            CropsBlock cropsBlock = (CropsBlock) block;
            worldIn.setBlock(pos, cropsBlock.getStateForAge(Math.min(age, cropsBlock.getMaxAge())), 2);
        } else {
            // Non-crop blocks have no age property, so place their default state.
            // This must be an else branch: an unconditional setBlock here would
            // overwrite the aged state set in the branches above.
            worldIn.setBlock(pos, block.defaultBlockState(), 2);
        }
    }

    @Override
    protected IItemProvider getBaseSeedId() {
        return CrockPotRegistry.unknownSeeds;
    }
}
import unittest

try:
    import prison
except ImportError:
    from .context import prison


class TestDecoder(unittest.TestCase):
    def test_dict(self):
        self.assertEqual(prison.loads('()'), {})
        self.assertEqual(prison.loads('(a:0,b:1)'), {'a': 0, 'b': 1})
        self.assertEqual(prison.loads("(a:0,b:foo,c:'23skidoo')"), {'a': 0, 'c': '2<PASSWORD>', 'b': 'foo'})
        self.assertEqual(prison.loads('(id:!n,type:/common/document)'), {'type': '/common/document', 'id': None})
        self.assertEqual(prison.loads("(a:0)"), {'a': 0})

    def test_bool(self):
        self.assertEqual(prison.loads('!t'), True)
        self.assertEqual(prison.loads('!f'), False)

    def test_invalid(self):
        with self.assertRaises(prison.decoder.ParserException):
            prison.loads('(')

    def test_none(self):
        self.assertEqual(prison.loads('!n'), None)

    def test_list(self):
        self.assertEqual(prison.loads('!(1,2,3)'), [1, 2, 3])
        self.assertEqual(prison.loads('!()'), [])
        self.assertEqual(prison.loads("!(!t,!f,!n,'')"), [True, False, None, ''])

    def test_number(self):
        self.assertEqual(prison.loads('0'), 0)
        self.assertEqual(prison.loads('1.5'), 1.5)
        self.assertEqual(prison.loads('-3'), -3)
        self.assertEqual(prison.loads('1e30'), 1e+30)
        self.assertEqual(prison.loads('1e-30'), 1.0000000000000001e-30)

    def test_string(self):
        self.assertEqual(prison.loads("''"), '')
        self.assertEqual(prison.loads('G.'), 'G.')
        self.assertEqual(prison.loads('a'), 'a')
        self.assertEqual(prison.loads("'0a'"), '0a')
        self.assertEqual(prison.loads("'abc def'"), 'abc def')
        self.assertEqual(prison.loads("'-h'"), '-h')
        self.assertEqual(prison.loads('a-z'), 'a-z')
        self.assertEqual(prison.loads("'wow!!'"), 'wow!')
        self.assertEqual(prison.loads('domain.com'), 'domain.com')
        self.assertEqual(prison.loads("'<EMAIL>'"), '<EMAIL>')
        self.assertEqual(prison.loads("'US $10'"), 'US $10')
        self.assertEqual(prison.loads("'can!'t'"), "can't")
Determining the Role of Single Currency for Countries Economic Development: A Case Study of the East African Community

Since the reintroduction of the East African Community (EAC), there has been a clamor for the establishment of a single currency, much as some countries within the European Union have done. This is due to the effect of multiple currencies on cross-border transactions, which has caused the cost of goods and services to rise. While a single currency may be ideal, a critical feasibility study is required to determine its role in driving EAC economic development. A single currency would serve as a means of payment for cross-border trade, familiarize the public with the benefits of monetary integration, and encourage EAC monetary policy coordination. The study was centered on a single research question: what role does a single currency play in EAC economic development? The study was based in Arusha, the city that hosts the EAC headquarters. According to the findings of the study, the majority of respondents are well versed in the factors that encourage EAC member countries to pursue the adoption of a single currency for economic development. These factors include, among others: enhancing currency stability, reducing financial risk, reducing transaction costs, reducing exchange rate fluctuation, enhancing price transparency, and reducing inflation, all of which affect trade within the region. The study concludes that, for effective integration to occur, it is necessary to investigate various monetary union models, design the integration process, and put in place a mechanism for gradually implementing monetary integration.
(a) Field of the Invention The invention relates to a liquid crystal display. (b) Description of the Related Art Liquid crystal displays (“LCDs”) are one of the most widely used flat panel displays, and an LCD includes a pair of panels provided with field-generating electrodes and a liquid crystal (“LC”) layer interposed between the two panels. The LCD displays images by applying voltages to the field-generating electrodes to generate an electric field in the LC layer that determines the orientation of LC molecules therein to adjust polarization of incident light. The liquid crystal display includes a pixel including a switching element realized by a thin film transistor (“TFT”) as a three terminal element, and a display panel including display signal lines such as a gate line and a data line. The TFT functions as a switching element transmitting or blocking a data signal transmitted through the data line to a pixel, according to a gate signal transmitted through the gate line. The display panel of the liquid crystal display includes a display area formed with the pixel for displaying image signals, and a non-display area excluding the display area. The non-display area is a region required for driving the liquid crystal display. Here, as the size of the liquid crystal display is increased, it is preferable that the display area is maximized and the non-display area is minimized. Also, a tiled display realized by liquid crystal displays that are arranged in a matrix such as 3×3 or 4×4 matrix has been spotlighted. The tiled display of a large size may be realized by using small liquid crystal displays, and the tiled display device may be applied to various fields. However, when the width of a bezel as the non-display area disposed between liquid crystal displays is wide, natural connection of the display is difficult. Accordingly, the bezel of the tiled display device must be minimized, and the non-display area of the liquid crystal display must be minimized. 
For this, a sealant combining the two display panels includes conductive balls, such that the width of the light blocking member can be decreased to minimize the non-display area. However, if the conductive balls included in the sealant are positioned in a region other than the intended short point when the sealant is diffused into a driving circuit region during formation, the upper and lower panels may be short-circuited. The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention, and therefore it may contain information that does not form the prior art that is already known in this country to a person of ordinary skill in the art.
More people claim to have caught a glimpse of the legendary Loch Ness monster in 2017 than in any other year this century. Up to eight official sightings of the fabled creature were recorded in Scotland’s most famous waterway this year alone. The most recent sighting came in early November when genetic scientist Dr. Jo Knight, of Lancaster University, spotted something “very dark” while monster hunting with her son in Urquhart Bay, a favourite haunt of Nessie. But, after taking hundreds of pictures, she concluded it might not actually have been the legendary leviathan after all. Gary Campbell, the recorder and keeper of the Official Loch Ness Monster Sightings Register, told the Daily Express that Knight and her son’s sighting had been considered official, making it the eighth this year – hitting a new record for the 21st century. “This is the most we have had this century,” he told the Express newspaper this week. “In recent years the most sightings in a year we have had is 17 – and that was in 1996. There were three official sightings in June, and one in May, April, August and October.
Lack of Efficacy of Streptozocin and Doxorubicin in Patients With Advanced Pancreatic Endocrine Tumors Background: The combination of streptozocin and doxorubicin has been considered the standard palliative chemotherapy regimen in patients with advanced pancreatic endocrine tumors (PETs). However, a recent review failed to confirm high antitumor activity in patients with advanced PETs. Methods: We retrospectively reviewed the records of 16 consecutive patients who received streptozocin and doxorubicin for advanced PETs at Dana Farber/Partners Cancer Care institutions. Baseline patient characteristics, radiographic response to therapy, treatment-related toxicity, and progression-free and overall survival were analyzed. Results: One patient demonstrated an objective partial response to therapy (objective response rate [ORR], 6%; 95% confidence interval [CI], 0–18%). Six patients achieved stable disease (38%; 95% CI, 14–62%) and 9 patients demonstrated disease progression on initial restaging (56%; 95% CI, 33–77%). The median progression-free survival and overall survival were 3.9 months (95% CI, 2.8–8.8) and 20.2 months (95% CI, 9.7–37.4), respectively. Conclusions: In this retrospective cohort, the combination of streptozocin and doxorubicin failed to demonstrate substantial antitumor activity in patients with advanced PETs. Our findings underscore the need for new therapeutic options in this patient population.
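As a quick arithmetic check (illustrative only, not from the paper), the reported ORR interval of 6% (0–18%) is consistent with a normal-approximation (Wald) binomial confidence interval for 1 response in 16 patients, clipped at zero:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% Wald (normal-approximation) CI for a proportion, clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 1 objective response among 16 patients: 6% (95% CI, 0-18%)
low, high = wald_ci(1, 16)
print(round(low * 100), round(high * 100))  # 0 18
```

The stable-disease rate (6/16 = 38%) gives roughly 14–61% by the same formula, close to the reported 14–62%, which may have been computed with a slightly different method (e.g., an exact interval).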
package models

import (
    "fmt"
    "math"
)

type Address struct {
    StreetLines         []string `xml:"q0:StreetLines"`
    City                string   `xml:"q0:City"`
    StateOrProvinceCode string   `xml:"q0:StateOrProvinceCode"`
    PostalCode          string   `xml:"q0:PostalCode"`
    CountryCode         string   `xml:"q0:CountryCode"`
    // CountryName string `xml:"q0:CountryName"`
    Residential Bool `xml:"q0:Residential"`
}

func (a Address) ShipsOutWithInternationalEconomy() bool {
    switch a.CountryCode {
    case "CA", "US", "":
        return false
    default:
        return true
    }
}

type AddressReply struct {
    // When we use Address in a request, we want to xml marshall the fields to
    // "q0:fieldName" but on unmarshalling an Address in a response, we want to
    // unmarshal the fields as just "fieldName"
    // TODO instead of two structs, we could merge this with Address, add
    // omitemptys to each of the below fields, so that whenever we marshall, we
    // marshall the q0 fields, but on unmarshalling a response, we check the
    // below fields. i'm not sure what's best
    StreetLines         []string `xml:"StreetLines"`
    City                string   `xml:"City"`
    StateOrProvinceCode string   `xml:"StateOrProvinceCode"`
    PostalCode          string   `xml:"PostalCode"`
    CountryCode         string   `xml:"CountryCode"`
    CountryName         string   `xml:"CountryName"`
}

type AdvanceNotificationDetail struct {
    EstimatedTimeOfArrival Timestamp
    Reason                 string
    Status                 string
    StatusDescription      string
    StatusTime             Timestamp
}

type AncillaryDetail struct {
    Reason            string
    ReasonDescription string
}

type Broker struct {
    Type   string  `xml:"q0:Type"`
    Broker Shipper `xml:"q0:Broker"`
}

type Charge struct {
    Currency string
    Amount   float64
}

type Commodity struct {
    Name           string `xml:"q0:Name"`
    NumberOfPieces int    `xml:"q0:NumberOfPieces"`
    Description    string `xml:"q0:Description"`
    // Purpose *string
    CountryOfManufacture string  `xml:"q0:CountryOfManufacture"`
    HarmonizedCode       *string `xml:"q0:HarmonizedCode"`
    Weight               Weight  `xml:"q0:Weight"`
    Quantity             int     `xml:"q0:Quantity"`
    QuantityUnits        string  `xml:"q0:QuantityUnits"`
    // AdditionalMeasure *int
    UnitPrice                   *Money   `xml:"q0:UnitPrice"`
    CustomsValue                *Money   `xml:"q0:CustomsValue"`
    ExportLicenseExpirationDate *string  `xml:"q0:ExportLicenseExpirationDate"`
    CIMarksAndNumbers           []string `xml:"q0:CIMarksAndNumbers"`
}

type Commodities []Commodity

func (c Commodities) Weight() Weight {
    if len(c) == 0 {
        return Weight{Units: "LB", Value: 0.0}
    }
    // Assume all the units are the same
    weight := Weight{
        Units: c[0].Weight.Units,
        Value: 0.0,
    }
    for _, commodity := range c {
        weight.Value += commodity.Weight.Value
    }
    return weight
}

func (c Commodities) CustomsValue() (Money, error) {
    total := Money{Currency: "USD"}
    if len(c) == 0 {
        return total, nil
    }
    // Set the currency
    for _, commodity := range c {
        if commodity.CustomsValue != nil {
            total.Currency = commodity.CustomsValue.Currency
            break
        }
    }
    for _, commodity := range c {
        if commodity.CustomsValue == nil {
            continue
        }
        if commodity.CustomsValue.Currency != total.Currency {
            return total, fmt.Errorf("mismatching customs currencies: %s %s", commodity.CustomsValue.Currency, total.Currency)
        }
        total.Amount += commodity.CustomsValue.Amount
    }
    total.Amount = math.Ceil(total.Amount*100.0) / 100.0
    return total, nil
}

type CompletedPackageDetails struct {
    SequenceNumber    string
    TrackingIds       []TrackingID
    Label             Label
    OperationalDetail PackageOperationalDetail
}

type PackageOperationalDetail struct {
    Barcodes Barcodes
}

type Barcodes struct {
    BinaryBarcodes BinaryBarcodes
    StringBarcodes StringBarcode
}

type BinaryBarcodes struct {
    Type  string
    Value []byte
}

type StringBarcodes struct {
    Type  string
    Value string
}

type CompletedShipmentDetail struct {
    UsDomestic              string
    CarrierCode             string
    MasterTrackingId        TrackingID
    ServiceTypeDescription  string
    ServiceDescription      ServiceDescription
    PackagingDescription    string
    OperationalDetail       OperationalDetail
    ShipmentRating          Rating
    ShipmentDocuments       []ShipmentDocument
    CompletedPackageDetails CompletedPackageDetails
}

type CompletedTrackDetail struct {
    HighestSeverity  string
    Notifications    []Notification
    DuplicateWaybill bool
    MoreData         bool
    TrackDetails     []TrackDetail
}

type CommercialInvoice struct {
    Purpose        string `xml:"q0:Purpose"`
    OriginatorName string `xml:"q0:OriginatorName"`
}

type CommercialInvoiceDetail struct {
    Format              Format               `xml:"q0:Format"`
    CustomerImageUsages []CustomerImageUsage `xml:"q0:CustomerImageUsages"`
}

type Contact struct {
    PersonName   string `xml:"q0:PersonName"`
    CompanyName  string `xml:"q0:CompanyName"`
    PhoneNumber  string `xml:"q0:PhoneNumber"`
    EmailAddress string `xml:"q0:EMailAddress"`
}

type ContactAndAddress struct {
    Contact Contact `xml:"q0:Contact"`
    Address Address `xml:"q0:Address"`
}

type ContentRecord struct {
    PartNumber       string
    ItemNumber       string
    ReceivedQuantity int
    Description      string
}

type CustomerImageUsage struct {
    Type string `xml:"q0:Type"`
    ID   string `xml:"q0:Id"`
}

type CustomerReference struct {
    CustomerReferenceType string `xml:"q0:CustomerReferenceType"`
    Value                 string `xml:"q0:Value"`
}

type CustomsClearanceDetail struct {
    Brokers                        []Broker           `xml:"q0:Brokers"`
    ImporterOfRecord               Shipper            `xml:"q0:ImporterOfRecord"`
    DutiesPayment                  Payment            `xml:"q0:DutiesPayment"`
    DocumentContent                *string            `xml:"q0:DocumentContent"`
    CustomsValue                   *Money             `xml:"q0:CustomsValue"`
    PartiesToTransactionAreRelated bool               `xml:"q0:PartiesToTransactionAreRelated"`
    CommercialInvoice              *CommercialInvoice `xml:"q0:CommercialInvoice"`
    Commodities                    Commodities        `xml:"q0:Commodities"`
}

type DateOrTimestamp struct {
    Type            string
    DateOrTimestamp Timestamp
}

type Destination struct {
    StreetLines         []string
    City                string
    StateOrProvinceCode string
    PostalCode          string
    CountryCode         string
    CountryName         string
    Residential         bool
}

type Dimensions struct {
    Length int    `xml:"q0:Length"`
    Width  int    `xml:"q0:Width"`
    Height int    `xml:"q0:Height"`
    Units  string `xml:"q0:Units"`
}

func (d Dimensions) IsValid() bool {
    valuesAreValid := d.Length > 0 && d.Width > 0 && d.Height > 0
    unitsIsValid := d.Units == DimensionsUnitsIn || d.Units == DimensionsUnitsCm
    return valuesAreValid && unitsIsValid
}

type EmailDetail struct {
    EmailAddress string `xml:"q0:EmailAddress"`
    Name         string `xml:"q0:Name"`
}

type EtdDetail struct {
    RequestedDocumentCopies string `xml:"q0:RequestedDocumentCopies"`
}

type EdtCommodityTax struct {
    HarmonizedCode string
    Taxes          []EdtTaxDetail
}

type EdtTaxDetail struct {
    TaxType      string
    Name         string
    TaxableValue Charge
    Description  string
    Formula      string
    Amount       Charge
}

type Event struct {
    Timestamp                  Timestamp
    EventType                  string
    EventDescription           string
    StatusExceptionCode        string
    StatusExceptionDescription string
    Address                    Address
    ArrivalLocation            string
}

type EventNotification struct {
    Role                string              `xml:"q0:Role"`
    Events              []string            `xml:"q0:Events"`
    NotificationDetail  NotificationDetail  `xml:"q0:NotificationDetail"`
    FormatSpecification FormatSpecification `xml:"q0:FormatSpecification"`
}

type EventNotificationDetail struct {
    AggregationType    string              `xml:"q0:AggregationType"`
    PersonalMessage    string              `xml:"q0:PersonalMessage"`
    EventNotifications []EventNotification `xml:"q0:EventNotifications"`
}

type Format struct {
    ImageType string `xml:"q0:ImageType"`
    StockType string `xml:"q0:StockType"`
}

type FormatSpecification struct {
    Type string `xml:"q0:Type"`
}

type FreightPickupDetail struct {
    ApprovedBy  Contact                 `xml:"q0:ApprovedBy"`
    Payment     string                  `xml:"q0:Payment"`
    Role        string                  `xml:"q0:Role"`
    SubmittedBy Contact                 `xml:"q0:SubmittedBy"`
    LineItems   []FreightPickupLineItem `xml:"q0:LineItems"`
}

type FreightPickupLineItem struct {
    Service            string  `xml:"q0:Service"`
    SequenceNumber     int     `xml:"q0:SequenceNumber"`
    Destination        Address `xml:"q0:Destination"`
    Packaging          string  `xml:"q0:Packaging"`
    Pieces             int     `xml:"q0:Pieces"`
    Weight             Weight  `xml:"q0:Weight"`
    TotalHandlingUnits int     `xml:"q0:TotalHandlingUnits"`
    JustOneMore        bool    `xml:"q0:JustOneMore"`
    Description        string  `xml:"q0:Description"`
}

type FromAndTo struct {
    FromAddress Address
    ToAddress   Address
    FromContact Contact
    ToContact   Contact
}

func (ft FromAndTo) IsInternational() bool {
    fromCountryCode := ft.FromAddress.CountryCode
    if fromCountryCode == "" {
        fromCountryCode = "US"
    }
    toCountryCode := ft.ToAddress.CountryCode
    if toCountryCode == "" {
        toCountryCode = "US"
    }
    return fromCountryCode != toCountryCode
}

type Identifier struct {
    Type  string
    Value string
}

type Image struct {
    ID    string `xml:"q0:Id"`
    Image string `xml:"q0:Image"`
}

type InformationNoteDetail struct {
    Code        string
    Description string
}

type Label struct {
    Type                        string
    ShippingDocumentDisposition string
    ImageType                   string
    Resolution                  string
    CopiesToPrint               string
    Parts                       []Part
}

type LabelSpecification struct {
    LabelFormatType string  `xml:"q0:LabelFormatType"`
    ImageType       string  `xml:"q0:ImageType"`
    LabelStockType  *string `xml:"q0:LabelStockType"`
}

type Localization struct {
    LanguageCode string `xml:"q0:LanguageCode"`
}

type Money struct {
    Currency string  `xml:"q0:Currency"`
    Amount   float64 `xml:"q0:Amount"`
}

type Name struct {
    Type     string
    Encoding string
    Value    string
}

type NotificationDetail struct {
    NotificationType string       `xml:"q0:NotificationType"`
    EmailDetail      EmailDetail  `xml:"q0:EmailDetail"`
    Localization     Localization `xml:"q0:Localization"`
}

type OperationalDetail struct {
    OriginLocationNumber            string
    DestinationLocationNumber       string
    TransitTime                     string
    IneligibleForMoneyBackGuarantee string
    DeliveryEligibilities           string
    ServiceCode                     string
    PackagingCode                   string
}

type OriginDetail struct {
    UseAccountAddress       Bool           `xml:"q0:UseAccountAddress"`
    PickupLocation          PickupLocation `xml:"q0:PickupLocation"`
    PackageLocation         string         `xml:"q0:PackageLocation"`
    BuildingPart            string         `xml:"q0:BuildingPart"`
    BuildingPartDescription string         `xml:"q0:BuildingPartDescription"`
    ReadyTimestamp          Timestamp      `xml:"q0:ReadyTimestamp"`
    CompanyCloseTime        string         `xml:"q0:CompanyCloseTime"`
}

type PackageIdentifier struct {
    Type  string `xml:"q0:Type"`
    Value string `xml:"q0:Value"`
}

type Part struct {
    DocumentPartSequenceNumber string
    Image                      []byte
}

type Payment struct {
    PaymentType string `xml:"q0:PaymentType"`
    Payor       Payor  `xml:"q0:Payor"`
}

type Payor struct {
    ResponsibleParty Shipper `xml:"q0:ResponsibleParty"`
}

type RateDetail struct {
    RateType                         string
    RateZone                         string
    RatedWeightMethod                string
    DimDivisor                       string
    FuelSurchargePercent             string
    TotalBillingWeight               Weight
    TotalBaseCharge                  Charge
    TotalFreightDiscounts            Charge
    TotalNetFreight                  Charge
    TotalSurcharges                  Charge
    TotalNetFedExCharge              Charge
    TotalTaxes                       Charge
    TotalNetCharge                   Charge
    NetCharge                        Charge
    TotalRebates                     Charge
    TotalDutiesAndTaxes              Charge
    TotalAncillaryFeesAndTaxes       Charge
    TotalDutiesTaxesAndFees          Charge
    TotalNetChargeWithDutiesAndTaxes Charge
    Surcharges                       []Surcharge
    DutiesAndTaxes                   []EdtCommodityTax
}

type RateReplyDetail struct {
    ServiceType                     string
    ServiceDescription              ServiceDescription
    PackagingType                   string
    DestinationAirportID            string `xml:"DestinationAirportId"`
    IneligibleForMoneyBackGuarantee bool
    SignatureOption                 string
    ActualRateType                  string
    RatedShipmentDetails            []Rating
}

type RatedPackage struct {
    GroupNumber          string
    EffectiveNetDiscount Charge
    PackageRateDetail    RateDetail
}

type RatedShipmentDetail struct {
    EffectiveNetDiscount Charge
    ShipmentRateDetail   RateDetail
    RatedPackages        []RatedPackage
}

type Rating struct {
    ActualRateType       string
    GroupNumber          string
    EffectiveNetDiscount Charge
    // For the shipping service, the rate details is an array, but for the rate service, it is not
    ShipmentRateDetails []RateDetail
    ShipmentRateDetail  RateDetail
    RatedPackages       []RatedPackage
}

type RecipientDetail struct {
    NotificationEventsAvailable []string
}

type Reconciliation struct {
    Status      string
    Description string
}

type RequestedPackageLineItem struct {
    SequenceNumber     int                 `xml:"q0:SequenceNumber"`
    GroupPackageCount  int                 `xml:"q0:GroupPackageCount,omitempty"`
    Weight             Weight              `xml:"q0:Weight"`
    Dimensions         Dimensions          `xml:"q0:Dimensions"`
    PhysicalPackaging  string              `xml:"q0:PhysicalPackaging"`
    ItemDescription    string              `xml:"q0:ItemDescription"`
    CustomerReferences []CustomerReference `xml:"q0:CustomerReferences"`
}

type RequestedShipment struct {
    ShipTimestamp     Timestamp `xml:"q0:ShipTimestamp"`
    DropoffType       string    `xml:"q0:DropoffType"`
    ServiceType       string    `xml:"q0:ServiceType,omitempty"`
    PackagingType     string    `xml:"q0:PackagingType"`
    PreferredCurrency string    `xml:"q0:PreferredCurrency,omitempty"`
    // We don't use these, but may do so later
    // ShipmentManifestDetail *ShipmentManifestDetail `xml:"q0:ShipmentManifestDetail,omitempty"`
    // TotalWeight *Weight `xml:"q0:TotalWeight,omitempty"`
    // TotalInsuredValue *Money `xml:"q0:TotalInsuredValue,omitempty"`
    // ShipmentAuthorizationDetail *ShipmentAuthorizationDetail `xml:"q0:ShipmentAuthorizationDetail,omitempty"`
    Shipper                       Shipper                        `xml:"q0:Shipper"`
    Recipient                     Shipper                        `xml:"q0:Recipient"`
    ShippingChargesPayment        *Payment                       `xml:"q0:ShippingChargesPayment"`
    SpecialServicesRequested      *SpecialServicesRequested      `xml:"q0:SpecialServicesRequested,omitempty"`
    SmartPostDetail               *SmartPostDetail               `xml:"q0:SmartPostDetail,omitempty"`
    CustomsClearanceDetail        *CustomsClearanceDetail        `xml:"q0:CustomsClearanceDetail,omitempty"`
    LabelSpecification            *LabelSpecification            `xml:"q0:LabelSpecification"`
    ShippingDocumentSpecification *ShippingDocumentSpecification `xml:"q0:ShippingDocumentSpecification"`
    RateRequestTypes              *string                        `xml:"q0:RateRequestTypes"`
    EdtRequestType                *string                        `xml:"q0:EdtRequestType"`
    PackageCount                  *int                           `xml:"q0:PackageCount"`
    RequestedPackageLineItems     []RequestedPackageLineItem     `xml:"q0:RequestedPackageLineItems"`
}

type ReturnShipmentDetail struct {
    ReturnType string `xml:"q0:ReturnType"`
}

type Package struct {
    TrackingNumber                  string
    TrackingNumberUniqueIdentifiers []string
    CarrierCode                     string
    ShipDate                        string
    Destination                     Destination
    RecipientDetails                []RecipientDetail
}

type PickupLocation struct {
    Contact Contact `xml:"q0:Contact"`
    Address Address `xml:"q0:Address"`
}

type SelectionDetails struct {
    CarrierCode       string            `xml:"q0:CarrierCode"`
    PackageIdentifier PackageIdentifier `xml:"q0:PackageIdentifier"`
    // Destination Destination
    // ShipmentAccountNumber string
}

type Service struct {
    Type             string
    Description      string
    ShortDescription string
}

type ServiceDescription struct {
    ServiceType      string
    Code             string
    Names            []Name
    Description      string
    AstraDescription string
}

type ShipmentDocument struct {
    Type                        string
    ShippingDocumentDisposition string
    ImageType                   string
    Resolution                  string
    CopiesToPrint               string
    Parts                       []Part
}

type ShipmentManifestDetail struct {
    ManifestReferenceType string `xml:"q0:ManifestReferenceType,omitempty"`
}

type Shipper struct {
    AccountNumber string  `xml:"q0:AccountNumber"`
    Contact       Contact `xml:"q0:Contact"`
    Address       Address `xml:"q0:Address"`
}

type ShippingDocumentSpecification struct {
    ShippingDocumentTypes []string `xml:"q0:ShippingDocumentTypes"`
    // CertificateOfOrigin []CertificateOfOrigin
    CommercialInvoiceDetail []CommercialInvoiceDetail `xml:"q0:CommercialInvoiceDetail"`
    // CustomPackageDocumentDetail []CustomPackageDocumentDetail
    // CustomShipmentDocumentDetail []CustomShipmentDocumentDetail
    // ExportDeclarationDetail []ExportDeclarationDetail
    // GeneralAgencyAgreementDetail []GeneralAgencyAgreementDetail
    // NaftaCertificateOfOriginDetail []NaftaCertificateOfOriginDetail
    // Op900Detail []Op900Detail
    // DangerousGoodsShippersDeclarationDetail []DangerousGoodsShippersDeclarationDetail
    // FreightAddressLabelDetail []FreightAddressLabelDetail
    // FreightBillOfLadingDetail []FreightBillOfLadingDetail
    // ReturnInstructionsDetail []ReturnInstructionsDetail
}

type SmartPostDetail struct {
    Indicia              string `xml:"q0:Indicia"`
    AncillaryEndorsement string `xml:"q0:AncillaryEndorsement"`
    HubID                string `xml:"q0:HubId"`
}

type SpecialHandling struct {
    Type        string
    Description string
    PaymentType string
}

type SpecialServicesRequested struct {
    SpecialServiceTypes     []string                 `xml:"q0:SpecialServiceTypes,omitempty"`
    EventNotificationDetail *EventNotificationDetail `xml:"q0:EventNotificationDetail,omitempty"`
    ReturnShipmentDetail    *ReturnShipmentDetail    `xml:"q0:ReturnShipmentDetail,omitempty"`
    EtdDetail               *EtdDetail               `xml:"q0:EtdDetail,omitempty"`
}

type StatusDetail struct {
    CreationTime     Timestamp
    Code             string
    Description      string
    Location         Address
    AncillaryDetails []AncillaryDetail
}

type StringBarcode struct {
    Type  string
    Value string
}

type Surcharge struct {
    SurchargeType string
    Level         string
    Description   string
    Amount        Charge
}

type TrackDetail struct {
    Notification                   Notification
    TrackingNumber                 string
    Barcode                        StringBarcode
    TrackingNumberUniqueIdentifier string
    StatusDetail                   StatusDetail
    InformationNotes               []InformationNoteDetail
    // Not gonna bother with all of these fields until we need them
    // Most of the fields in this block are not important
    CustomerExceptionRequests            []InformationNoteDetail
    Reconciliations                      []Reconciliation
    ServiceCommitMessage                 string
    DestinationServiceArea               string
    DestinationServiceAreaDescription    string
    CarrierCode                          string
    OperatingCompanyType                 string
    OperatingCompanyOrCarrierDescription string
    CartageAgentCompanyName              string
    ProductionLocationContactAndAddress  ContactAndAddress
    ContentRecord                        ContentRecord
    // ... more
    Service                        Service
    PackageWeight                  Weight
    ShipmentWeight                 Weight
    Packaging                      string
    PackagingType                  string
    PhysicalPackagingType          string
    PackageSequenceNumber          int
    PackageCount                   int
    Charges                        Charge
    NickName                       string
    Notes                          string
    Attributes                     []string
    ShipmentContents               []ContentRecord
    PackageContents                string
    TrackAdvanceNotificationDetail AdvanceNotificationDetail
    Shipper                        Contact
    ShipperAddress                 AddressReply
    OriginLocationAddress          AddressReply
    // DatesOrTimes contains estimated arrivals, departures, etc.
    DatesOrTimes                           []DateOrTimestamp
    Recipient                              Contact
    DestinationAddress                     AddressReply
    ActualDeliveryAddress                  AddressReply
    SpecialHandlings                       []SpecialHandling
    DeliveryLocationType                   string
    DeliveryLocationDescription            string
    DeliveryAttempts                       int
    DeliverySignatureName                  string
    TotalUniqueAddressCountInConsolidation int
    NotificationEventsAvailable            string
    Events                                 []Event
}

type TrackingID struct {
    TrackingIdType string
    TrackingNumber string
}

type TransactionDetail struct {
    CustomerTransactionID string `xml:"q0:CustomerTransactionId,omitempty"`
}

type Weight struct {
    Units string  `xml:"q0:Units"`
    Value float64 `xml:"q0:Value"`
}

func (w Weight) IsZero() bool {
    return w.Value == 0.0
}
import pygame
import pygame_gui
import json
import random
from view import View
from controller.controller_player import ControllerPlayer
from controller.controller_ai import ControllerAI


class Interface():
    def __init__(self):
        pygame.init()
        self.config = dict(json.load(open('src/config.json')))
        if 'seed' in self.config:
            random.seed(self.config['seed'])
        else:
            seed = random.randint(0, int(2**64-1))
            print("seed =", seed)
            random.seed(seed)
        pygame.display.set_caption('F1 with GA')
        self.view = View(self.config)
        self.background = pygame.Surface((1200, 600))
        self.background.fill(pygame.Color("#ffffff"))
        self.manager = pygame_gui.UIManager((1200, 600), 'src/theme.json')
        self.menu_btns(50, 200)
        self.all_configs = json.load(open("src/all-configs.json"))
        self.all_fields = []
        self.menu_config(300, 100, self.all_configs)

    def menu_btns(self, x : int, y : int):
        """Create menu buttons. Receive top left position of principal menu buttons."""
        x_delta = 0
        y_delta = 50
        elem_size = (200, 50)
        self.btn_ga = pygame_gui.elements.UIButton(
            relative_rect=pygame.Rect((x, y), elem_size),
            text='Algoritmo Genético',
            manager=self.manager
        )
        x += x_delta
        y += y_delta
        self.btn_player = pygame_gui.elements.UIButton(
            relative_rect=pygame.Rect((x, y), elem_size),
            text='Direção Manual',
            manager=self.manager
        )
        x += x_delta
        y += y_delta
        self.btn_ai_load = pygame_gui.elements.UIButton(
            relative_rect=pygame.Rect((x, y), elem_size),
            text='Assistir melhores AGs',
            manager=self.manager
        )
        x += x_delta
        y += y_delta
        self.btn_exit = pygame_gui.elements.UIButton(
            relative_rect=pygame.Rect((x, y), elem_size),
            text='Sair',
            manager=self.manager
        )

    def menu_config(self, x : int, y : int, all_configs : dict):
        """Interface for personalization of config."""
        elem_size = (200, 50)
        all_x = []
        for v in range(x, self.config['width']-elem_size[0]*2 + 1, elem_size[0]*2):
            all_x.append(v)
        all_y = []
        for v in range(y, self.config['height']-elem_size[1] + 1, elem_size[1]):
            all_y.append(v)

        # Alert to press Enter
        pygame_gui.elements.ui_label.UILabel(
            relative_rect=pygame.Rect((all_x[-1], all_y[-1]), (elem_size[0]*2, elem_size[1])),
            manager=self.manager,
            text="Press ENTER to set values!"
        )

        float_chars = ['0','1','2','3','4','5','6','7','8','9','.']
        categories = ["geral", "ai"]
        for i in range(len(categories)):
            category = categories[i]
            # Title of column
            pygame_gui.elements.ui_label.UILabel(
                relative_rect=pygame.Rect((all_x[i], all_y[0]), (elem_size[0]*2, elem_size[1])),
                manager=self.manager,
                text=category.upper()
            )
            items = list(all_configs[category].keys())
            fields = sorted([(x, all_configs[category][x]) for x in items])[::-1]
            for j in range(1, len(fields)+1):
                field = fields[j-1]
                # Label
                pygame_gui.elements.ui_label.UILabel(
                    relative_rect=pygame.Rect((all_x[i], all_y[j]), elem_size),
                    manager=self.manager,
                    text=field[0] + ":",
                )
                if isinstance(field[1], str):
                    # User input
                    entry_text = pygame_gui.elements.UITextEntryLine(
                        relative_rect=pygame.Rect((all_x[i]+elem_size[0], all_y[j]+10), elem_size),
                        manager=self.manager,
                        object_id=field[0]
                    )
                    self.all_fields.append(entry_text)
                    if "FLOAT" in field[1]:
                        entry_text.set_allowed_characters(allowed_characters=float_chars)
                    else:
                        entry_text.set_allowed_characters('numbers')
                    if category == 'ai':
                        if field[0] == "max_frames":
                            pass
                        else:
                            entry_text.set_text(str(self.config['ai'][field[0]]))
                    elif field[0] == 'car_visions':
                        entry_text.set_text(str(self.config['car']['number_of_visions']))
                    elif field[0] == 'car_vision_len':
                        entry_text.set_text(str(self.config['car']['vision_length']))
                    else:
                        entry_text.set_text(str(self.config[field[0]]))
                else:
                    # Drop down menu
                    pygame_gui.elements.UIDropDownMenu(
                        options_list=list([str(x) for x in field[1]]),
                        relative_rect=pygame.Rect((all_x[i]+elem_size[0], all_y[j]), elem_size),
                        manager=self.manager,
                        starting_option=str(list(field[1])[0]),
                        object_id=field[0]
                    )

    def set_config(self, field, text):
        """Receive field and text to be set on the config dict."""
        if field in self.all_configs['ai']:
            if self.all_configs['ai'][field].endswith('FLOAT'):
                tp = float
            else:
                tp = int
        elif not field.startswith("car"):
            if self.all_configs['geral'][field].endswith('FLOAT'):
                tp = float
            else:
                tp = int
        if field in self.config['ai'] or field == 'max_frames':
            try:
                self.config['ai'][field] = tp(text)
            except:
                print(text, "is an invalid input in", field)
        elif field == 'car_visions':
            try:
                self.config['car']['number_of_visions'] = int(text)
            except:
                print(text, "is an invalid input in", field)
        elif field == 'car_vision_len':
            try:
                self.config['car']['vision_length'] = int(text)
            except:
                print(text, "is an invalid input in", field)
        else:
            try:
                self.config[field] = tp(text)
            except:
                print(text, "is an invalid input in", field)

    def set_best_ai_info(self):
        """Read best_ai_info from file; if that fails, try to load locally."""
        try:
            self.best_ai_info = json.load(open("src/best_ai_info.json", 'r'))
        except:
            self.best_ai_info = best_ai_info

    def set_interface_config(self):
        """Read all fields in the interface and set them in the config dict."""
        for ui_elem in self.all_fields:
            if ui_elem.text != '':
                self.set_config(ui_elem.object_ids[0], ui_elem.text)

    def run(self):
        clock = pygame.time.Clock()
        running = True
        while running:
            time_delta = clock.tick(60)/1000.0
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.USEREVENT:
                    if event.user_type == pygame_gui.UI_BUTTON_PRESSED:
                        self.set_interface_config()
                        if event.ui_element == self.btn_player:
                            ControllerPlayer(self.config).run()
                        elif event.ui_element == self.btn_ga:
                            ControllerAI(self.config).run()
                        elif event.ui_element == self.btn_ai_load:
                            self.set_best_ai_info()
                            self.config['ai']['fps'] = 60
                            self.config['ai']['fps_info'] = 60
                            self.config['ai']['population_size'] = 1
                            self.config['ai']['save'] = False
                            self.config['ai']['train'] = False
                            ControllerAI(self.config, self.best_ai_info[self.config['track']]).run()
                        elif event.ui_element == self.btn_exit:
                            running = False
                    elif event.user_type == pygame_gui.UI_TEXT_ENTRY_FINISHED:
                        self.set_config(event.ui_object_id, event.text)
                    elif event.user_type == pygame_gui.UI_DROP_DOWN_MENU_CHANGED:
                        if event.ui_object_id == "verbose":
                            self.config[event.ui_object_id] = int(event.text)
                        else:
                            self.config[event.ui_object_id] = event.text
                self.manager.process_events(event)
            self.manager.update(time_delta)
            self.view.blit(self.background, (0, 0))
            self.view.draw_text(220, 20, "Carro autônomo em pista 2D",
                                pygame.font.SysFont('arial', 60, bold=False),
                                pygame.Color("#000000"))
            self.manager.draw_ui(self.view.screen)
            pygame.display.update()
from django.apps import AppConfig


class DjangoSimpleForumConfig(AppConfig):
    name = 'django_simple_forum'
During a call process, when a first electronic device plays a speech signal sent by a second electronic device, a microphone in the first electronic device collects the speech signal, and sends the speech signal to the second electronic device. Consequently, a user of the second electronic device hears an echo of the speech signal from the user, affecting call quality. Therefore, it is necessary to cancel the echo during the call process. In related technologies, an echo cancellation method is provided. First, a first electronic device inputs a far-end signal and a near-end signal to a double-end detection module of the first electronic device. The far-end signal is a speech signal sent by a second electronic device, and the near-end signal is a mixed signal including an echo signal generated when the first electronic device plays the far-end signal and a speech signal from a user of the first electronic device. The double-end detection module detects whether a speech signal exists in the near-end signal, inputs a detection result and the far-end signal to a normalized least mean square adaptive filtering (NLMS) module, obtains an estimated signal of the far-end signal according to the far-end signal and outputs the estimated signal, and then subtracts the estimated signal from the near-end signal to obtain a residual signal. The residual signal is inputted to a non-linear processing (NLP) module to reduce the echo signal when the detection result received by the NLMS module is that the speech signal exists in the near-end signal, to improve an effect of echo cancellation. The residual signal is inputted to the NLMS module when the detection result received by the NLMS module is that no speech signal exists in the near-end signal, and the residual signal is used for estimation of the far-end signal for the next time. 
Because there is a certain amount of error when the NLMS module estimates the far-end signal, part of the echo signal remains in the residual signal obtained by subtracting the estimated signal from the near-end signal, which degrades call quality.
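The adaptive-filter step described above (estimate the echo from the far-end signal, subtract the estimate from the near-end signal, and feed the residual back into the filter update) can be sketched as follows. This is a minimal single-channel NumPy sketch of a standard NLMS echo canceller, not the implementation referenced in the related technologies; the filter length, step size `mu`, and regularizer `eps` are illustrative choices, and the double-end detection and non-linear processing modules are omitted.

```python
import numpy as np

def nlms_echo_cancel(far, near, num_taps=64, mu=0.5, eps=1e-8):
    """Estimate the echo of `far` contained in `near` and return the residual.

    far  : far-end (loudspeaker) signal, 1-D array
    near : near-end (microphone) signal = echo + local speech
    """
    w = np.zeros(num_taps)          # adaptive filter taps (echo-path estimate)
    x_buf = np.zeros(num_taps)      # most recent far-end samples
    residual = np.zeros_like(near)
    for n in range(len(near)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far[n]
        y = w @ x_buf               # estimated echo signal
        e = near[n] - y             # residual after echo subtraction
        # normalized update: step size divided by instantaneous input power
        w += mu * e * x_buf / (x_buf @ x_buf + eps)
        residual[n] = e
    return residual
```

On a synthetic echo path, the residual energy drops by orders of magnitude once the taps converge; in a full canceller the update would be frozen while the double-end detector reports that near-end speech is present, as described above.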
Exploring the various Carmelo Anthony trade scenarios

With the news that the Nuggets’ latest meeting with Carmelo and his camp didn’t go that well, it’s time to start looking at Anthony’s short list of teams to see what they can offer in the way of trade. Per Adrian Wojnarowski, the Nuggets are looking for “young players and draft picks,” so we’ll keep that in mind as we discuss the various trade scenarios. That Wojnarowski piece also listed five teams as potential landing spots for Anthony: New Jersey, L.A. (Clippers), Houston, Golden State and Charlotte (due to Anthony’s shoe deal with Brand Jordan). Let’s fire up the Trade Machine and go team-by-team to see what they can offer. Keep in mind that it’s virtually impossible to get equal value for a disgruntled star, so most of these trades are going to look better from the point of view of the team receiving Anthony. That’s just the way it is.

New Jersey (soon to be Brooklyn) Nets

New Jersey has four young(ish) players that might appeal to the Nuggets: Brook Lopez, Derrick Favors, Terrence Williams and Devin Harris. I don’t see the Nuggets getting Lopez out of this deal, but one idea is a simple swap of Troy Murphy and Derrick Favors for Anthony, with one or two first round draft picks to sweeten the deal if necessary. This would leave the Nets very thin at power forward, but they’d get a Top 15 player to build around while retaining Harris, Williams and Lopez. The Nuggets would get a power forward with a ton of potential to form a nice one-two punch with their best young piece, Ty Lawson. If the Nuggets aren’t sold on Lawson for some reason, they could ask for Harris, Williams and Kris Humphries (to even out the salaries). Harris, Favors and Humphries is another possibility. So is Harris, Favors and Williams, which looks like the best of the bunch. The Nuggets could hold onto Favors and Williams, and if they’re set with Lawson at point guard, move Harris in another trade.
Would the Nets give up Harris, Favors and Williams? They should. It’s not often that a player of Carmelo’s stature comes on the market while in his prime. Teams should do whatever they can to get him, and worry about fixing the roster later.

Los Angeles Clippers

Hmm. Maybe the Clips will get their star after all. If they do, they have Anthony’s wife, LaLa Vasquez, and her burgeoning ‘entertainment career’ to thank. To make the numbers work, it appears that Chris Kaman would need to be involved in any trade for Anthony, unless the Nuggets were willing to take on Baron Davis (but he doesn’t exactly fit the ‘young player’ criteria). So how about Kaman, Eric Gordon and Al-Farouq Aminu? I’d be shocked if the Nuggets were able to wrest Blake Griffin away from the Clippers, so this may be the best they can do. L.A. could throw in a first round draft pick or two to get the Nuggets to bite. Denver could even throw in Chris Andersen if it wanted to dump more salary and give the Clips a center back in the deal.

Houston Rockets

The Rockets can offer a plethora of different proposals, and they are particularly attractive because they can include the Knicks’ first round draft picks over the next two seasons. If Denver is in the market for a good young shooting guard, how about Kevin Martin, Jordan Hill and Chase Budinger (to clear out the logjam at small forward in Houston)? Throw in a first round pick or two and this is a pretty attractive offer. If the Nuggets prefer Patrick Patterson to Hill, he could be included along with Mike Harris to make a similar deal work. Before Nuggets fans scoff at this offer, it should be noted that Martin is just 27 and has scored 20+ points in each of the last four seasons. Don’t want Martin? How about Budinger, Hill, Patterson, Jared Jeffries’ expiring deal and Aaron Brooks? There are four young players in that deal and the Rockets could add a draft pick or two to aid Denver in its rebuilding effort.
Golden State Warriors

I’m not sure what the Nuggets would want from the Warriors, but they’re not going to get Stephen Curry, so Denver fans should put that pipe dream to rest. Talent-wise, the best deal I can come up with is Monta Ellis and Andris Biedrins for Anthony and Chris Andersen. Ellis and Biedrins each still have four years on their deals, but they are 25 and 24 respectively, so they fit the Nuggets’ criteria. Salary-wise, this isn’t going to help Denver’s cap situation at all. Other than maybe Ekpe Udoh and Brandan Wright, the only other trade assets the Warriors have are the expiring contracts of Dan Gadzuric and Vladimir Radmanovic. Either of these guys could be packaged with Ellis or Biedrins if the Nuggets wanted to go that route.

Charlotte Bobcats

It’s strange to see Charlotte on this list, but apparently Carmelo’s shoe deal with Brand Jordan makes it possible. If I’m the Nuggets, I would require that Gerald Wallace be a part of any deal, so maybe an offer of Wallace, Gerald Henderson and D.J. Augustin might get it done. Again, a draft pick or two could be included. Under this scenario, the Nuggets would get an All-Star (and a first team All-Defensive player) in return, though at 28, he might be a bit long in the tooth for Denver’s rebuilding project. Still, of all the trades described in this post, he is arguably the best player of the bunch. Tyrus Thomas isn’t eligible to be traded until mid-December and it doesn’t look like the Nuggets are going to wait that long, so there isn’t much else on the Bobcats’ offer that would be of interest.

It’s not clear which of these deals are best for the Nuggets, as we’re not privy to their priorities. Is it important that they trade Carmelo out of the conference? When they say ‘young players’ are they referring to established up-and-comers (Aaron Brooks) or 19- or 20-year old draftees (Derrick Favors)? Are they sold on Lawson as their point guard of the future? And what about the Knicks?
I still think a trade that would send Danilo Gallinari, Anthony Randolph and Eddy Curry’s expiring contract to Denver for Carmelo would be a pretty good deal for both sides.
#-*- coding:utf-8 -*-
#####
# Created on 19-9-20 10:05 AM
#
# @Author: <NAME>(laygin)
#####
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np


def normalize_adj(A, type="AD"):
    if type == "DAD":
        # d is the degree of nodes, A = A + I
        # L = D^-1/2 A D^-1/2
        A = A + np.eye(A.shape[0])  # A = A + I
        d = np.sum(A, axis=0)
        d_inv = np.power(d, -0.5).flatten()
        d_inv[np.isinf(d_inv)] = 0.0
        d_inv = np.diag(d_inv)
        G = A.dot(d_inv).transpose().dot(d_inv)
        G = torch.from_numpy(G)
    elif type == "AD":
        A = A + np.eye(A.shape[0])  # A = A + I
        A = torch.from_numpy(A)
        D = A.sum(1, keepdim=True)
        G = A.div(D)
    else:
        # Laplacian L = D - A
        A = A + np.eye(A.shape[0])  # A = A + I
        A = torch.from_numpy(A)
        D = torch.diag(A.sum(1))
        G = D - A
    return G


def np_to_variable(x, is_cuda=True, dtype=torch.FloatTensor):
    v = Variable(torch.from_numpy(x).type(dtype))
    if is_cuda:
        v = v.cuda()
    return v


def set_trainable(model, requires_grad):
    for param in model.parameters():
        param.requires_grad = requires_grad


def weights_normal_init(model, dev=0.01):
    if isinstance(model, list):
        for m in model:
            weights_normal_init(m, dev)
    else:
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                m.weight.data.normal_(0.0, dev)
            elif isinstance(m, nn.Linear):
                m.weight.data.normal_(0.0, dev)


def clip_gradient(model, clip_norm):
    """Computes a gradient clipping coefficient based on gradient norm."""
    totalnorm = 0
    for p in model.parameters():
        if p.requires_grad:
            modulenorm = p.grad.data.norm()
            totalnorm += modulenorm ** 2
    totalnorm = np.sqrt(totalnorm)
    norm = clip_norm / max(totalnorm, clip_norm)
    for p in model.parameters():
        if p.requires_grad:
            p.grad.mul_(norm)


def EuclideanDistances(A, B):
    BT = B.transpose()
    vecProd = np.dot(A, BT)
    SqA = A**2
    sumSqA = np.matrix(np.sum(SqA, axis=1))
    sumSqAEx = np.tile(sumSqA.transpose(), (1, vecProd.shape[1]))
    SqB = B**2
    sumSqB = np.sum(SqB, axis=1)
    sumSqBEx = np.tile(sumSqB, (vecProd.shape[0], 1))
    SqED = sumSqBEx + sumSqAEx - 2*vecProd
    SqED[SqED < 0] = 0.0
    ED = np.sqrt(SqED)
    return ED


def bbox_transfor_inv(radius_map, sin_map, cos_map, score_map, wclip=(4, 12), expend=1.0):
    xy_text = np.argwhere(score_map > 0)
    # sort the text boxes via the y axis
    xy_text = xy_text[np.argsort(xy_text[:, 0])]
    origin = xy_text
    radius = radius_map[xy_text[:, 0], xy_text[:, 1], :]
    sin = sin_map[xy_text[:, 0], xy_text[:, 1]]
    cos = cos_map[xy_text[:, 0], xy_text[:, 1]]

    dtx = radius[:, 0] * cos * expend
    dty = radius[:, 0] * sin * expend
    ddx = radius[:, 1] * cos * expend
    ddy = radius[:, 1] * sin * expend

    topp = origin + np.stack([dty, dtx], axis=-1)
    botp = origin - np.stack([ddy, ddx], axis=-1)
    width = (radius[:, 0] + radius[:, 1]) // 3
    width = np.clip(width, wclip[0], wclip[1])

    top1 = topp - np.stack([width * cos, -width * sin], axis=-1)
    top2 = topp + np.stack([width * cos, -width * sin], axis=-1)
    bot1 = botp - np.stack([width * cos, -width * sin], axis=-1)
    bot2 = botp + np.stack([width * cos, -width * sin], axis=-1)

    bbox = np.stack([top1, top2, bot2, bot1], axis=1)[:, :, ::-1]
    bboxs = np.zeros((bbox.shape[0], 9), dtype=np.float32)
    bboxs[:, :8] = bbox.reshape((-1, 8))
    bboxs[:, 8] = score_map[xy_text[:, 0], xy_text[:, 1]]
    return bboxs


def clip_box(bbox, im_shape):
    # x >= 0 and x <= w-1
    bbox[:, 0:8:2] = np.clip(bbox[:, 0:8:2], 1, im_shape[1] - 2)
    # y >= 0 and y <= h-1
    bbox[:, 1:8:2] = np.clip(bbox[:, 1:8:2], 1, im_shape[0] - 2)
    return bbox


def filter_bbox(bbox, minsize=16):
    ws = np.sqrt((bbox[:, 0] - bbox[:, 2])**2 + (bbox[:, 1] - bbox[:, 3])**2)
    hs = np.sqrt((bbox[:, 0] - bbox[:, 6])**2 + (bbox[:, 1] - bbox[:, 7])**2)
    keep = np.where(ws*hs >= minsize)[0]
    bbox = bbox[keep]
    return bbox


def random_bbox(origin, limit=(8, 32)):
    rad0 = np.random.randint(limit[0], limit[1])
    rad1 = np.random.randint(limit[0], limit[1])
    cos = 2*np.random.random() - 1
    sin = 2*np.random.random() - 1
    dtx = rad0 * cos
    dty = rad0 * sin
    ddx = rad1 * cos
    ddy = rad1 * sin
    topp = origin + np.stack([dty, dtx], axis=-1)
    botp = origin - np.stack([ddy, ddx], axis=-1)
    width = (rad0 + rad1) // 3
    width = np.clip(width, 4, 12)
    top1 = topp - np.stack([width * cos, -width * sin], axis=-1)
    top2 = topp + np.stack([width * cos, -width * sin], axis=-1)
    bot1 = botp - np.stack([width * cos, -width * sin], axis=-1)
    bot2 = botp + np.stack([width * cos, -width * sin], axis=-1)
    bbox = np.stack([[top1], [top2], [bot2], [bot1]], axis=1)[:, :, ::-1]
    bboxs = np.zeros((bbox.shape[0], 9), dtype=np.float32)
    bboxs[:, :8] = bbox.reshape((-1, 8))
    bboxs[:, 8] = 0
    ctrp = np.mean(bboxs[0, :8].reshape((4, 2)), axis=0).astype(np.int32)
    return np.array([ctrp[0], ctrp[1], rad0+rad1, 2*width, cos, sin, 0])


def jitter_gt_boxes(gt_boxes, jitter=0.25):
    """
    jitter the gt boxes, before adding them into rois, to be more robust for cls and rgs
    gt_boxes: (G, 5) [x1, y1, x2, y2, class] int
    """
    jit_boxs = gt_boxes.copy()
    ws = np.sqrt((jit_boxs[:, 0] - jit_boxs[:, 2])**2 + (jit_boxs[:, 1] - jit_boxs[:, 3])**2) + 1.0
    hs = np.sqrt((jit_boxs[:, 0] - jit_boxs[:, 6])**2 + (jit_boxs[:, 1] - jit_boxs[:, 7])**2) + 1.0
    ws = np.clip(ws, 10, 30)
    hs = np.clip(hs, 10, 120)
    width_offset = (np.random.rand(jit_boxs.shape[0]) - 0.5) * jitter * ws
    height_offset = (np.random.rand(jit_boxs.shape[0]) - 0.5) * jitter * hs
    width_offset = np.repeat(np.expand_dims(width_offset, axis=1), 4, axis=1)
    height_offset = np.repeat(np.expand_dims(height_offset, axis=1), 4, axis=1)
    jit_boxs[:, 0:8:2] += width_offset
    jit_boxs[:, 1:8:2] += height_offset
    return jit_boxs


def jitter_gt_map(gt_map, jitter=0.25):
    """
    jitter the gt boxes, before adding them into rois, to be more robust for cls and rgs
    gt_map: (G, 7) [xc, yc, h, w, cos, sin, class] int
    """
    hs = gt_map[:, 2]
    ws = gt_map[:, 3]
    dim = gt_map.shape[0]
    x_offset = (np.random.rand(dim) - 0.5) * 2
    y_offset = (np.random.rand(dim) - 0.5) * 2
    h_offset = (np.random.rand(dim) - 0.5) * jitter * hs
    w_offset = (np.random.rand(dim) - 0.5) * jitter * ws
    cos_offset = (np.random.rand(dim) - 0.5) * 0.2
    sin_offset = (np.random.rand(dim) - 0.5) * 0.2
    gt_map[:, 0] += x_offset
    gt_map[:, 1] += y_offset
    gt_map[:, 2] += h_offset
    gt_map[:, 3] += w_offset
    cos = gt_map[:, 4] + cos_offset
    sin = gt_map[:, 5] + sin_offset
    # re-normalize so that sin^2 + cos^2 == 1
    scale = np.sqrt(1.0 / (sin ** 2 + cos ** 2))
    gt_map[:, 4] = scale * cos
    gt_map[:, 5] = scale * sin
    return gt_map
Karnataka Chief Minister H D Kumaraswamy Wednesday hit out at Prime Minister Narendra Modi questioning his contribution to the country over the last five years. "Today when Modi is touring across the country, giving speeches and seeking votes, he asks people to vote for the security of the nation. Instead of development, he seeks vote for the safety and security of the nation," the chief minister said. "You have to think what Narendra Modi has given to the nation. Did he provide any relief to the farmers? Did women get any security? Did he provide jobs to the unemployed youth? Which problem could he sort out?" the chief minister asked the gathering. The first phase of polling, covering 14 other seats in the State, will take place Thursday. Karnataka has 28 Lok Sabha constituencies. Madhu Bangarappa is the son of former Karnataka chief minister S Bangarappa.
/*
 * Reporter cannot work without a configured RunInfo.
 */
@Test
public void noRunInfoTest() throws ReportingException {
   final ResponseTimeStatsReporter r = new ResponseTimeStatsReporter();
   Exception e = null;
   try {
      r.report(null);
   } catch (final ReportingException ee) {
      e = ee;
   }
   Assert.assertNotNull(e, "An exception was supposed to be thrown as RunInfo was not set.");
}
// Run tests for PeX.
func RunPex(env *runtime.RunEnv, initCtx *run.InitContext) error {
	var (
		tick       = time.Millisecond * time.Duration(env.IntParam("tick"))
		tickAmount = env.IntParam("tickAmount")
		c          = env.IntParam("c")
		s          = env.IntParam("s")
		p          = env.IntParam("p")
		d          = env.FloatParam("d")
	)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	h, err := libp2p.New(context.Background())
	if err != nil {
		return err
	}
	defer h.Close()

	sub, err := h.EventBus().Subscribe(new(pex.EvtViewUpdated))
	if err != nil {
		return err
	}

	gossip := pex.Gossip{c, s, p, d}
	go recordLocalRecordUpdates(ctx, env, h, sub, &gossip)

	disc := &boot.RedisDiscovery{
		ClusterSize: env.TestInstanceCount,
		C:           initCtx.SyncClient,
		Local:       host.InfoFromHost(h),
	}

	px, err := pex.New(ctx, h,
		pex.WithGossip(func(ns string) pex.Gossip { return gossip }),
		pex.WithDiscovery(disc),
		pex.WithTick(func(ns string) time.Duration { return tick }),
		pex.WithLogger(zaputil.Wrap(env.SLogger())))
	if err != nil {
		return err
	}

	initCtx.SyncClient.MustSignalAndWait(ctx, tsync.State("initialized"), env.TestInstanceCount)

	for i := 0; i < tickAmount; i++ {
		if initCtx.GlobalSeq == 1 {
			fmt.Printf("Tick %v/%v\n", i+1, tickAmount)
		}

		ttl, err := px.Advertise(ctx, ns)
		if err != nil &&
			!strings.Contains(err.Error(), "stream reset") &&
			!strings.Contains(err.Error(), "failed to dial") &&
			!strings.Contains(err.Error(), "i/o deadline reached") {
			return err
		}
		if err != nil {
			env.RecordMessage(err.Error())
		}

		env.SLogger().
			With(zap.Duration("ttl", ttl)).
			Debug("call to advertise succeeded")

		initCtx.SyncClient.MustSignalAndWait(ctx, tsync.State(fmt.Sprintf("advertised %v", i)), env.TestInstanceCount)
		atomic.AddInt64(&metricTick, 1)
		initCtx.SyncClient.MustSignalAndWait(ctx, tsync.State(fmt.Sprintf("ticked %v", i)), env.TestInstanceCount)
	}

	env.RecordSuccess()
	initCtx.SyncClient.MustSignalAndWait(ctx, tsync.State("finished"), env.TestInstanceCount)
	return nil
}
import ctypes


class XScreenSaverInfo(ctypes.Structure):
    # Field layout of the XScreenSaverInfo struct from libXss
    _fields_ = [('window', ctypes.c_ulong),
                ('state', ctypes.c_int),
                ('kind', ctypes.c_int),
                ('til_or_since', ctypes.c_ulong),
                ('idle', ctypes.c_ulong),
                ('event_mask', ctypes.c_ulong)]


def idle_time(x):
    """Return the X11 idle time in milliseconds (the `x` argument is unused)."""
    libx11 = ctypes.cdll.LoadLibrary('libX11.so.6')
    display = libx11.XOpenDisplay(None)
    root = libx11.XDefaultRootWindow(display)
    libxss = ctypes.cdll.LoadLibrary('libXss.so.1')
    libxss.XScreenSaverAllocInfo.restype = ctypes.POINTER(XScreenSaverInfo)
    libxss_info = libxss.XScreenSaverAllocInfo()
    libxss.XScreenSaverQueryInfo(display, root, libxss_info)
    return libxss_info.contents.idle
""" Created on Dec 28, 2017 @author: khoi.ngo Verify that user cannot encrypt message with invalid wallet handle. """ import pytest from indy import crypto from utilities import common, utils from test_scripts.functional_tests.crypto.crypto_test_base \ import CryptoTestBase class TestCryptoBoxWithInvalidHandle(CryptoTestBase): @pytest.mark.asyncio async def test(self): # 1. Create wallet. # 2. Open wallet. self.wallet_handle = await common.create_and_open_wallet_for_steps( self.steps, self.wallet_name, self.pool_name, credentials=self.wallet_credentials) # 3. Create the first verkey with empty json. self.steps.add_step("Create the first key") first_key = await utils.perform(self.steps, crypto.create_key, self.wallet_handle, "{}") # 4. Create the second verkey with empty json. self.steps.add_step("Create the second key") second_key = await utils.perform(self.steps, crypto.create_key, self.wallet_handle, "{}") # 5. Create a crypto box with Expected error = 200(WalletInvalidHanlde) self.steps.add_step("Create a crypto box with Expected error = 200") msg = "Test crypto".encode("UTF-8") await utils.perform_with_expected_code( self.steps, crypto.auth_crypt, self.wallet_handle + 9999, first_key, second_key, msg, expected_code=200)
import { Component, OnInit } from '@angular/core';
import { trigger, state, style, transition, animate } from '@angular/animations';

@Component({
  selector: 'mt-about',
  templateUrl: './about.component.html',
  animations: [
    trigger('aboutAppeared', [
      state('ready', style({opacity: 1})),
      transition('void => ready', [
        style({opacity: 0, transform: 'translateY(-30px)'}),
        animate('300ms 0s ease-in')
      ])
    ])
  ]
})
export class AboutComponent implements OnInit {

  aboutState = 'ready';

  constructor() { }

  ngOnInit() {
  }
}
Abused Refugee Women: Trauma and Testimony

Inger Agger

This paper discusses a number of issues generated by my experience as a therapist and a researcher in the area of human rights violations against women. I am especially concerned with issues of ritual space, gender, testimony, and victimization. As coordinator of psychosocial projects for war-traumatized women and children in the former Yugoslavia, I have witnessed the extensive use of sexuality as a repressive tool against women, namely the very high incidence of rape that is used as a weapon of war directed at both the women themselves and at their families and their people. With respect to the issue of ritual space, I will discuss how I attempted to create such a space during interviews with 40 traumatized refugee women and in my own subsequent writing. Next, I will consider the gender-specific abuse of women, which falls within the realm of human rights violations of women. I will also outline some of the ways I have used testimony therapeutically within a ritual space. Lastly, I will look at some issues connected with sexual victimization, focusing in particular on the problem of complicity and the power of shame.

I use the concept of human rights based upon the United Nations Universal Declaration of Human Rights as my point of departure. This perspective focuses on universal cross-cultural rights and, in the case of refugee women, it directs attention towards universal similarities instead of cultural differences. Moreover, the ethical dimension of universal human rights has implications in my therapy practice: it places the blame outside of the victim, and also forces the therapist to take a moral stand. I find that the concept of human rights can serve as a sort of epiphany for both patient and therapist. For many refugees, it comes as a surprise that they have any rights at all. They are surprised to discover that the international community has accepted their right to a dignified life. Such revelations may be the first step in a consciousness-raising process, or a process of post-traumatic therapy. Growing awareness of the ethical dimensions of the human rights concept may also be the beginning of a consciousness-raising process for the therapist.
There is increasing discussion in the field of post-traumatic therapy of how we, as therapists and researchers, react to horrifying and traumatic stories about deliberate human rights violations. How do we, who are supposed to be the victims' helpers and counsellors, manage our own pain? Human rights violations are a worldwide problem. Violations of women's rights are widespread and growing in the industrialized societies of the West, in the developing countries and, of course, in the former Yugoslavia where I worked. Therapists working with victims of trauma encounter women and girls who have experienced gender-specific human rights violations, be it refugee women who have been sexually abused in camps, in their homes or during flight, or Western victims of rape, wife-battering, incest, or prostitution. Therapists and researchers are, therefore, confronted with the problem of how to create a space which becomes healing for both the victims and the therapist, a space that will allow the victim to become a survivor, and the therapist or researcher to convert his or her own pain into a struggle for human rights.

I had worked for five years as a therapist for torture victims when I decided that I needed to translate some of my experiences into theory and additional research. The research project I planned was governed by an attempt to understand some of the strange and terrifying experiences which my female patients had told me about, in particular, the use of sexuality as a means of torture in the political prisons. But it was also governed by an attempt to rid myself of some of the evil I, as a therapist, had to confront in the women's stories. It was an attempt both to purify myself by giving meaning to the unbelievable stories I had heard as well as to offer my own testimony of what it feels like to meet deliberate human evil.

The research project involved interviews with 40 refugee women (20 from the Middle East and 20 from Latin America) and, inevitably, I was again experiencing the therapist's problem of containing the trauma story as all the women told me their life-stories. Although I was now acting as a researcher, it did not make much difference. For one thing, I could not just forget my clinical background. Secondly, my clinical background was very valuable for the interview process, because it gave me the courage to explore parts of the trauma story that were important to document from a theoretical and human rights point of view. So I tried to create a ritual space, which I called "the blue room." In fact, the walls of the room in which I carried out the interviews were painted blue. But I also use "the blue room" as a metaphor for the space in which the interviews took place. In this room the woman and I together created a space which I, inspired by the anthropologist Kirsten Hastrup, have called "the third culture." While we might come from different cultures, in the blue room we created something new and different, a third culture, to which we both contributed. It was in this context that the story was told by her and heard by me.

Refuge, Vol. 14, No. 7 (December 1994)

At the beginning of each meeting in the blue room I would explain the purpose of my research project, namely to learn more about the special conditions of refugee women and to publish this information, so that people in asylum countries might know more about the human rights violations perpetrated against women. I would also disclose information about my own background: that I had worked for several years as a therapist for torture victims, and that I was not "neutral" as a researcher. I expressed my opinion that it was important that the world know what happens to women. In this way I tried to establish an atmosphere of compassion and solidarity.

An important component of the ritual space of the blue room was the tape recorder. I began each interview by testing the tape recorder. For this test, I had the woman say her name, age, and nationality. I would then play the tape back. Together we listened and confirmed that the woman's voice was recorded, and that it could be heard in the blue room. Then we started recording the interview. The woman being interviewed then knew that her voice and her name could be heard. We were ready to start her testimony.

In the meetings with the refugee women in the blue room, I used testimony as a research method. Thus I attempted to unite the use of testimony in the consciousness-raising groups of the women's movement with experiences from my therapeutic training and my work with testimony as a transcultural therapeutic method. The use of this method implies that research and therapeutic processes can overlap. For victims of human rights violations, testimony has a special significance because it serves as a documented accusation and a piece of evidence against the repressive system. "Testimony" as a concept has a special, double connotation: it contains objective, judicial, public, and political elements, as well as subjective, spiritual, cathartic, and private ones. Testimony thus has the capacity to unite the private and the public spheres (Agger and Jensen 1990).

In Chile during the dictatorship, testimonies were used for registering and denouncing extreme examples of torture. The victims' stories were taped, transcribed, and sent to international organizations as evidence against the dictatorship. Gradually, the therapeutic value of the method also became evident and a series of reflections started on the possibilities and limitations of such a method for the healing of emotional wounds resulting from torture. Psychologists Eugenia Weinstein and Elizabeth Lira observed that giving testimony alleviated symptoms and transformed a painful experience into a document that could be useful to other people. It was not only cathartic, but also acted as a political and legal weapon against the aggressors. In this way, some of the aggression that the abuse had created in the victim could be redirected in a socially constructive manner, thereby breaking a self-destructive spiral. "It is a paradox," Weinstein and Lira noted, "that the testimony in some ways is a complete confession, that which they tried to extract by means of torture and which the subject protected at the cost of his or her pain. But now it is an act which is inscribed in the original existential project. The information will not be used against the comrades, but, rather, against the torturers."

During the interview process, I did not meet any women who refused to give testimony. On the contrary, they seemed to find it important to provide this "confession" against their oppressors. All the women had suffered human rights violations. Some had been in prison and had been tortured, others had close relatives who had been tortured or murdered, still others had been forced to leave small children with relatives in their homelands in order to leave quickly and save their own lives. Some had lived clandestinely for long periods in their own countries before they could get out. After arriving in Denmark, many had lived for months in refugee camps anxiously waiting for asylum to be granted. Nearly all the refugee women in my research project had made themselves "visible" through their political activity. They were well-educated and anxious to talk about the conditions in their homelands. In this way, they could describe in vivid detail the circumstances of their own lives and of those of their silent sisters in poorer conditions.

Through their testimonies, I tried to understand how sexual abuse of women who belong to a certain "dangerous" group is connected with the surrounding sexual and political power structure and with the historically transmitted definitions of "the shameful" and "the unclean." This could, I thought, add to the understanding of the paradoxical feeling of complicity that can arise in a person who is subjected to sexual as well as other forms of abuse. It was my hypothesis that an understanding of the social and psychological dynamics of this problem of complicity would provide greater insight into one's own counselling practices with women who have suffered human rights violations. It would also allow for a better understanding of the dynamics of other forms of sexual trauma, e.g. rape, incest, and wife-battering.

Once the interviewing process was completed, I transcribed those parts of the women's testimonies which I found relevant to the questions I had proposed in my research project. This left me with pages of testimonies and I was alone in the blue room with my collection of trauma stories. It was now up to me to write my testimony of what I had seen, heard, thought, and felt.
I chose then to expand the ritual space of the blue room into the written text by using two key metaphors: rooms in a house, and boundaries between spaces. The blue room was the space in which the interviews took place. There is also the "mother's room," the "father's room," the "cell," the "children's room," the "living room," and the "veranda." Each room represents a different kind of experience and relationship in the women's lives. The metaphor of boundaries, especially crossing boundaries, symbolizes the problematic danger in the lives of the interviewed women. I wrote my narrative in a certain style in which I used metaphorical language as a way of expressing my own counter-transference reactions. I thus attempted to heal the trauma in the ritual spaces of each room. In the last chapter, "The Veranda," I created a metaphorical healing circle in which I let the voices of the women speak to each other, in this way suggesting that one important tool for healing is for the victims to get together in such ritual spaces as a way of giving testimony in an atmosphere of compassion and understanding.

The testimony of the refugee women expressed a virtually universal pattern of female oppression. This universal pattern expresses itself in a whole range of human rights violations against women, such as rape, wife battering, incest, and prostitution. The common characteristic of all these forms of gender-specific abuse is the violation of the woman's right to her own body, to her physical and psychological integrity, and ultimately to a life of dignity. The gender-specific issues in all these examples of human rights violations of women revolve around sexuality and the way in which it is used as a means to control, exploit, and punish women. In women's lives, in most parts of the world, the significant definitional spaces are those which are connected with sexuality and reproduction.
It is, therefore, also within those definitional spaces that the social and political control of women is exercised. I see these mechanisms of social control of dangerous, i.e. vocal, women as clear-cut examples of the social control that also governs the lives of their more silent and invisible sisters. If women leave the private sphere to enter the public one, speak up and become visible in, for example, sexual or political revolts, this can create a crisis in the system. It can cause disorder and contamination, and it represents a special threat if the system is already in a state of crisis (Goddard 1987). Thus visible women become dangerous women. Yet, when both the silent and the dangerous women are hit in their souls by the sexual control mechanisms of society, they very often blame themselves. Why does the victim blame herself? This problem of complicity must involve deep-seated feelings and values that are part of the unconscious of both the individual and the societal structure.

I have analyzed the question from an anthropological perspective inspired by a book entitled Purity and Danger, by the British anthropologist Mary Douglas. I have taken as my point of departure the almost universal ambivalence towards women's blood, be it menstrual blood or the blood from the first intercourse. This blood is an exterior sign of the movement from one social condition to another. The first menstruation marks the transformation from girl to virgin, while the blood after the first intercourse marks the shift from virgin to woman. Both are symbols of status, of social boundaries that are crossed, and also of social boundaries that have been violated. Through the societal attitude to her blood, the girl learns about the shameful dangers that people find threatening. One of the most serious dangers that threaten people is that of contamination or impurity. If, for example, there is no blood on the wedding night, this could be a sign that the girl has been involved in forbidden sexual acts. If there is no menstrual blood, the implication could be the same. Regardless of how the forbidden has happened, or whether it happened with or without her consent, she is nevertheless an accomplice. She could have been more careful. She learns that it is her responsibility, that she has to be careful that she does not bring shame on herself or on her family.

Although it was never her intention to understand trauma or work in the field of therapy, I think Douglas provides some insight into the problem of complicity that is also relevant for other trauma victims, who have been involved in acts which the official ideology of society does not condone: "The ideal order of society is guarded by the dangers which threaten transgressors," writes Douglas. One of the most serious dangers threatening those who are not careful is pollution or contamination. Dirt is defined by Douglas as something which is in the wrong place. The unclean and the dirty must not be present if a societal model is to be maintained. If you disturb the order of things, you expose yourself to dangerous pollution. But you do not only pollute yourself. You are also dangerous to others. One's intentions are irrelevant. The danger of pollution is a power that threatens careless human beings. As Douglas says: "A polluting person is always in the wrong. He or she has developed some wrong condition or simply crossed some line which should not have been crossed and this displacement unleashed danger for someone else." I think Douglas expresses a fundamental aspect of victimization. The victim has been involved in something which is not regarded as nice or good, neither by herself or himself, nor by society.
The feeling of being wrong or dirty, then, does not have anything to do with responsibility. We could also analyze these dynamics from the perspective of shame. Douglas' focus is on societal mechanisms of control. But if we look at the Hungarian philosopher Agnes Heller's concept of "the power of shame," we see the same mechanisms from an individual viewpoint. According to Heller, shame is the very feeling that makes us adjust to our cultural environment. Shame is universally the first and most basic moral feeling, and it is internalized by the child at a very early age. When the child learns what he or she should be ashamed of, the child simultaneously learns of the legitimization of a system of domination. The external power is internalized in the feeling of shame. This is society's silent voice, or "the eyes of the others" that we hear and sense inside of us. And the prostitute, Heller says, is the symbol of shame. She is the embodiment of lost honour.

The problem of complicity and the feelings of shame are prominent features of the trauma of torture victims, especially if they have been sexually abused. But these features are also found in the trauma of other victims. As noted by Judith Herman in her recent book, Trauma and Recovery, the psychological impact of victimization may have many common features, whether victimization occurs within the public sphere of politics or within the supposedly private, but equally political, sphere of sexual and domestic relations.

My own research process through the woman's house of exile has not ended yet. Upon the publication of my book, The Blue Room, a heated discussion ensued that involved some very personal attacks against me. One sarcastic reviewer did not trust the testimonies. Another reviewer was extremely angry and wrote that the choice of such a "perverse" topic reflected the author's own personal problems. Apparently, some readers have directed their mistrust or anger at the messenger.
When we work in the field of trauma and see the effects of extreme human evil, we as therapists and researchers experience some of the same feelings that our clients or research subjects experience. This I see as an invaluable tool for a deeper understanding of the psychological mechanisms of trauma.
/*
  (c) Copyright 2003, 2004, 2005, 2006, 2007, 2008, 2009 Hewlett-Packard Development Company, LP
  [See end of file]
  $Id: PrefixMappingImpl.java,v 1.2 2009/08/06 13:42:47 chris-dollin Exp $
*/

package com.hp.hpl.jena.shared.impl;

import com.hp.hpl.jena.rdf.model.impl.Util;
import com.hp.hpl.jena.shared.*;
import com.hp.hpl.jena.util.CollectionFactory;

import java.util.*;
import java.util.Map.Entry;

import org.apache.xerces.util.XMLChar;

/**
    An implementation of PrefixMapping. The mappings are stored in a pair
    of hash tables, one per direction. The test for a legal prefix is left
    to xerces's XMLChar.isValidNCName() predicate.

    @author kers
*/
public class PrefixMappingImpl implements PrefixMapping
{
    protected Map<String, String> prefixToURI;
    protected Map<String, String> URItoPrefix;
    protected boolean locked;

    public PrefixMappingImpl()
    {
        prefixToURI = CollectionFactory.createHashedMap();
        URItoPrefix = CollectionFactory.createHashedMap();
    }

    protected void set( String prefix, String uri )
    {
        prefixToURI.put( prefix, uri );
        URItoPrefix.put( uri, prefix );
    }

    protected String get( String prefix )
    { return prefixToURI.get( prefix ); }

    public PrefixMapping lock()
    {
        locked = true;
        return this;
    }

    public PrefixMapping setNsPrefix( String prefix, String uri )
    {
        checkUnlocked();
        checkLegal( prefix );
        if (!prefix.equals( "" )) checkProper( uri );
        if (uri == null) throw new NullPointerException( "null URIs are prohibited as arguments to setNsPrefix" );
        set( prefix, uri );
        return this;
    }

    public PrefixMapping removeNsPrefix( String prefix )
    {
        checkUnlocked();
        String uri = prefixToURI.remove( prefix );
        regenerateReverseMapping();
        return this;
    }

    protected void regenerateReverseMapping()
    {
        URItoPrefix.clear();
        for (Map.Entry<String, String> e: prefixToURI.entrySet())
            URItoPrefix.put( e.getValue(), e.getKey() );
    }

    protected void checkUnlocked()
    { if (locked) throw new JenaLockedException( this ); }

    private void checkProper( String uri )
    { // suppressed by popular demand. TODO consider optionality
      // if (!isNiceURI( uri )) throw new NamespaceEndsWithNameCharException( uri );
    }

    public static boolean isNiceURI( String uri )
    {
        if (uri.equals( "" )) return false;
        char last = uri.charAt( uri.length() - 1 );
        return Util.notNameChar( last );
    }

    /**
        Add the bindings of other to our own. We defer to the general case
        because we have to ensure the URIs are checked.

        @param other the PrefixMapping whose bindings we are to add to this.
    */
    public PrefixMapping setNsPrefixes( PrefixMapping other )
    { return setNsPrefixes( other.getNsPrefixMap() ); }

    /**
        Answer this PrefixMapping after updating it with the <code>(p, u)</code>
        mappings in <code>other</code> where neither <code>p</code> nor
        <code>u</code> appear in this mapping.
    */
    public PrefixMapping withDefaultMappings( PrefixMapping other )
    {
        checkUnlocked();
        for (Entry<String, String> e: other.getNsPrefixMap().entrySet())
        {
            String prefix = e.getKey(), uri = e.getValue();
            if (getNsPrefixURI( prefix ) == null && getNsURIPrefix( uri ) == null)
                setNsPrefix( prefix, uri );
        }
        return this;
    }

    /**
        Add the bindings in the prefixToURI to our own. This will fail with a
        ClassCastException if any key or value is not a String; we make no
        guarantees about order or completeness if this happens. It will fail
        with an IllegalPrefixException if any prefix is illegal; similar
        provisos apply.

        @param other the Map whose bindings we are to add to this.
    */
    public PrefixMapping setNsPrefixes( Map<String, String> other )
    {
        checkUnlocked();
        for (Entry<String, String> e: other.entrySet())
            setNsPrefix( e.getKey(), e.getValue() );
        return this;
    }

    /**
        Checks that a prefix is "legal" - it must be a valid XML NCName.
    */
    private void checkLegal( String prefix )
    {
        if (prefix.length() > 0 && !XMLChar.isValidNCName( prefix ))
            throw new PrefixMapping.IllegalPrefixException( prefix );
    }

    public String getNsPrefixURI( String prefix )
    { return get( prefix ); }

    public Map<String, String> getNsPrefixMap()
    { return CollectionFactory.createHashedMap( prefixToURI ); }

    public String getNsURIPrefix( String uri )
    { return URItoPrefix.get( uri ); }

    /**
        Expand a prefixed URI. There's an assumption that any URI of the form
        Head:Tail is subject to mapping if Head is in the prefix mapping. So,
        if someone takes it into their heads to define eg "http" or "ftp" we
        have problems.
    */
    public String expandPrefix( String prefixed )
    {
        int colon = prefixed.indexOf( ':' );
        if (colon < 0)
            return prefixed;
        else
        {
            String uri = get( prefixed.substring( 0, colon ) );
            return uri == null ? prefixed : uri + prefixed.substring( colon + 1 );
        }
    }

    /**
        Answer a readable (we hope) representation of this prefix mapping.
    */
    @Override public String toString()
    { return "pm:" + prefixToURI; }

    /**
        Answer the qname for <code>uri</code> which uses a prefix from this
        mapping, or null if there isn't one.
        <p>
        Relies on <code>splitNamespace</code> to carve uri into namespace and
        localname components; this ensures that the localname is legal and
        we just have to (reverse-)lookup the namespace in the prefix table.

        @see com.hp.hpl.jena.shared.PrefixMapping#qnameFor(java.lang.String)
    */
    public String qnameFor( String uri )
    {
        int split = Util.splitNamespace( uri );
        String ns = uri.substring( 0, split ), local = uri.substring( split );
        if (local.equals( "" )) return null;
        String prefix = URItoPrefix.get( ns );
        return prefix == null ? null : prefix + ":" + local;
    }

    /**
        Compress the URI using the prefix mapping. This version of the code
        looks through all the maplets and checks each candidate prefix URI
        for being a leading substring of the argument URI. There's probably
        a much more efficient algorithm available, preprocessing the prefix
        strings into some kind of search table, but for the moment we don't
        need it.
    */
    public String shortForm( String uri )
    {
        Entry<String, String> e = findMapping( uri, true );
        return e == null ? uri : e.getKey() + ":" + uri.substring( (e.getValue()).length() );
    }

    public boolean samePrefixMappingAs( PrefixMapping other )
    {
        return other instanceof PrefixMappingImpl
            ? equals( (PrefixMappingImpl) other )
            : equalsByMap( other );
    }

    protected boolean equals( PrefixMappingImpl other )
    { return other.sameAs( this ); }

    protected boolean sameAs( PrefixMappingImpl other )
    { return prefixToURI.equals( other.prefixToURI ); }

    protected final boolean equalsByMap( PrefixMapping other )
    { return getNsPrefixMap().equals( other.getNsPrefixMap() ); }

    /**
        Answer a prefixToURI entry in which the value is an initial substring
        of <code>uri</code>. If <code>partial</code> is false, then the value
        must equal <code>uri</code>. Does a linear search of the entire
        prefixToURI, so not terribly efficient for large maps.

        @param uri the value to search for
        @param partial true if the match can be any leading substring, false for exact match
        @return some entry (k, v) such that uri starts with v [equal for partial=false]
    */
    private Entry<String, String> findMapping( String uri, boolean partial )
    {
        for (Entry<String, String> e: prefixToURI.entrySet())
        {
            String ss = e.getValue();
            if (uri.startsWith( ss ) && (partial || ss.length() == uri.length()))
                return e;
        }
        return null;
    }
}

/*
    (c) Copyright 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Hewlett-Packard Development Company, LP
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:
    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.
    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.
    3. The name of the author may not be used to endorse or promote products
       derived from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
    IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
    OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
    IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
    INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
    NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
    THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
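PrefixMappingImpl keeps two hash maps, one per direction, so expandPrefix is a single forward lookup while shortForm falls back to a linear scan over the entries (findMapping). The idea can be sketched independently of Jena with plain HashMaps; MiniPrefixMap and its method names below are illustrative only and not part of the Jena API:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of a bidirectional prefix map, mirroring PrefixMappingImpl's two-table design. */
public class MiniPrefixMap {
    private final Map<String, String> prefixToURI = new HashMap<>();
    private final Map<String, String> uriToPrefix = new HashMap<>();

    public void setNsPrefix(String prefix, String uri) {
        prefixToURI.put(prefix, uri);
        uriToPrefix.put(uri, prefix); // keep the reverse index in sync, as PrefixMappingImpl.set does
    }

    /** Expand "p:local" to "<namespace>local" when p is a known prefix; otherwise return the input. */
    public String expandPrefix(String prefixed) {
        int colon = prefixed.indexOf(':');
        if (colon < 0) return prefixed;
        String uri = prefixToURI.get(prefixed.substring(0, colon));
        return uri == null ? prefixed : uri + prefixed.substring(colon + 1);
    }

    /** Compress a full URI to "prefix:local" via a linear leading-substring scan, like findMapping. */
    public String shortForm(String uri) {
        for (Map.Entry<String, String> e : uriToPrefix.entrySet()) {
            if (uri.startsWith(e.getKey())) {
                return e.getValue() + ":" + uri.substring(e.getKey().length());
            }
        }
        return uri; // no namespace matched: answer the URI unchanged
    }

    public static void main(String[] args) {
        MiniPrefixMap pm = new MiniPrefixMap();
        pm.setNsPrefix("rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#");
        System.out.println(pm.expandPrefix("rdf:type"));
        System.out.println(pm.shortForm("http://www.w3.org/1999/02/22-rdf-syntax-ns#type"));
    }
}
```

Note the trade-off the reverse map buys: two prefixes bound to the same URI leave only the most recent one in the reverse index, which is why the real implementation rebuilds that index from scratch (regenerateReverseMapping) after a removal.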
/*
 * Copyright 2014-2022 Real Logic Limited.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#if defined(__linux__)
#define _BSD_SOURCE
#define _GNU_SOURCE
#endif

#include <errno.h>
#include <string.h>

#include "util/aeron_error.h"
#include "util/aeron_symbol_table.h"
#include "aeron_termination_validator.h"

static const aeron_symbol_table_func_t aeron_termination_validator_table[] =
{
    {
        "allow",
        "aeron_driver_termination_validator_default_allow",
        (aeron_fptr_t)aeron_driver_termination_validator_default_allow
    },
    {
        "deny",
        "aeron_driver_termination_validator_default_deny",
        (aeron_fptr_t)aeron_driver_termination_validator_default_deny
    }
};

static const size_t aeron_termination_validator_table_length =
    sizeof(aeron_termination_validator_table) / sizeof(aeron_symbol_table_func_t);

bool aeron_driver_termination_validator_default_allow(void *state, uint8_t *token_buffer, int32_t token_length)
{
    return true;
}

bool aeron_driver_termination_validator_default_deny(void *state, uint8_t *token_buffer, int32_t token_length)
{
    return false;
}

aeron_driver_termination_validator_func_t aeron_driver_termination_validator_load(const char *validator_name)
{
    return (aeron_driver_termination_validator_func_t)aeron_symbol_table_func_load(
        aeron_termination_validator_table,
        aeron_termination_validator_table_length,
        validator_name,
        "terminate validator");
}
Antimalarial plants of northeast India: An overview

The need for an alternative drug for malaria initiated intensive efforts for developing new antimalarials from indigenous plants. Information from different tribal communities of northeast India, along with research papers, books, journals and documents of different universities and institutes of northeast India, was collected for information on botanical therapies and plant species used for malaria. Sixty-eight plant species belonging to 33 families are used by the people of northeast India for the treatment of malaria. Six plant species, namely, Alstonia scholaris, Coptis teeta, Crotolaria occulta, Ocimum sanctum, Polygala persicariaefolia and Vitex peduncularis, have been reported by more than one worker from different parts of northeast India. The species reported to be used for the treatment of malaria were found either in the vicinity of human habitation or in the forest areas of northeast India. The most frequently used plant parts were leaves (33%), roots (31%), and bark and whole plant (12% each). The present study has compiled and enlisted the antimalarial plants of northeast India, which would help future workers to identify suitable antimalarial plants through thorough study.

INTRODUCTION

Malaria is caused by single-celled protozoan parasites called Plasmodium and transmitted to man through the Anopheles mosquito. It is one of the major fatal diseases in the world, especially in the tropics, and is endemic in some 102 countries, with more than half of the world population at risk and fatality rates being extremely high among young children below 5 years of age. The World Health Organization estimates that there are between 300 and 500 million new cases of malaria worldwide every year, mostly in Africa, Asia, the South Pacific Islands and South America, which cause at least 1 million deaths annually.
In spite of control programs in many countries, there has been very little improvement in the control of malaria, and infections can reduce the effectiveness of labor and can lead to both economic and human losses. Control of malaria is complex because of the appearance of drug-resistant strains of Plasmodium and the discovery that man becomes infested with species of simian (monkey) malaria. At the same time, the Anopheles mosquitoes have developed resistance to many insecticides. The spread of multidrug-resistant strains of Plasmodium and the adverse side effects of the existing antimalarial drugs have necessitated the search for novel, well-tolerated and more efficient antimalarial drugs that kill either the vector or the parasite.

The use of plant-derived drugs for the treatment of malaria has a long and successful tradition. The first antimalarial drug was quinine, isolated from the bark of Cinchona species (Rubiaceae) in 1820. It is one of the oldest and most important antimalarial drugs, and is still used today. In 1940, another antimalarial drug, chloroquine, was synthesized and is being used for the treatment of malaria. Unfortunately, after an early success, the malarial parasite, especially Plasmodium falciparum, also became resistant to chloroquine. Treatment of chloroquine-resistant malaria was done with alternative drugs or drug combinations, which were rather expensive and sometimes toxic. The extract of the bark and leaves of Azadirachta indica has also been used in Thailand and Nigeria as an antimalarial for a long time. Charaka in 300 BC and Susruta in 200 BC reported the antimalarial and antipyretic activity of this species. Hence, it is clear that the main drugs developed for malaria and used until now (quina alkaloid-derived drugs and artemisinin) were discovered based on traditional use and ethnomedical data. In India, plants such as Tinospora cordifolia (Willd.) Miers ex Hook f. and Thoms. are used in the treatment of malaria.
The list of antimalarial plants of India has not yet been completely searched out, and there is an urgent need to compile these data. The aim of this study was to compile the antimalarial plants reported by different workers from northeast India. The present database of antimalarial plants would help future phytochemists to evaluate the best antimalarial plants, and it would make it possible to formulate the most effective medicine from this region of the world. It might, therefore, be useful to test the antibacterial, antiviral and anti-inflammatory activities of these groups of plants. The present survey has provided information about the range of plant species used in the treatment of malaria in northeast India. Accordingly, researchers should consider the ethnomedical information of all species before deciding which kind of screening should be used in the search for an antimalarial.

MATERIALS AND METHODS

All primary ethnobotanical studies from books and journals, and research papers of different universities and institutes of northeast India, were collected for information about botanical therapies and plant species used for malaria. Local traditional healers were consulted for confirmation and validation as far as possible. Any data or references to plants used for malaria were carefully inserted into a template, and botanical names and classifications were re-examined and confirmed against the flora of northeast India and the flora of India. To validate the compiled data, the authors interacted with traditional healers through interviews in the prominent communities of Adi, Apatani, Khampti, Mishmi, Nyishi, Monpa, Nocte and Sherdukpen in Arunachal Pradesh, Mizo in Mizoram, Ao in Nagaland, and Nepali and Lepcha in Sikkim. As per the available literature, various folklores treating malaria in different communities have been reported.
RESULTS AND DISCUSSION After a thorough literature survey using the above method, it can be confirmed that 68 species of plants belonging to 33 families are used by the people of northeast India for the treatment of malaria. Of the 33 families studied, Verbenaceae, Acanthaceae, Asteraceae, Rubiaceae, Rutaceae, Lamiaceae, and Euphorbiaceae are predominant in terms of the number of species used to treat malaria. Six plant species, namely, Alstonia scholaris, Coptis teeta, Crotalaria occulta, Ocimum sanctum, Polygala persicariaefolia, and Vitex peduncularis, have been reported by more than one worker from different parts of northeast India. The species reported to be used for the treatment of malaria were found either in the vicinity of habitations or in the forest areas of northeast India. More than 20 authors have reported antimalarial plants, and the author himself has reported 10 antimalarial plants from different parts of Arunachal Pradesh. Similarly, 9, 12, 4, and 2 species have been reported from Assam, Manipur, Mizoram, and Sikkim, respectively. Most of the plants were reported from Assam, Arunachal Pradesh, Manipur, and Mizoram, whereas Nagaland and Tripura have been less explored. The plants recorded in this survey were used traditionally for the treatment of malaria and its symptoms. The majority of the plants were used as decoctions, and some plants were used both internally and externally. Herbs and shrubs were found to be predominantly used as antimalarial drugs, and the most frequently used plant parts were leaves (33%), roots (31%), and bark and whole plant (12%). The enormous frequency of the usage of leaves in traditional preparations is related to their abundant availability and easy collection. Information from traditional healers of the Assam Ayurveda Regional Research Institute, Itanagar, revealed that they had used pills of Kalmegh (Andrographis paniculata), stem bark of Latakaranja (Caesalpinia crista), and Guduchi (T. cordifolia).
Some species like Holoptelea integrifolia Planch., T. cordifolia (Willd.), Calotropis procera (Ait.) R. Br., Nerium indicum Mill., Ajuga bracteosa Wall., Leucas cephalotes Spreng., Enicostemma hyssopifolium (Willd.) Verdoorn, Vernonia cinerea Less., Justicia adhatoda Linn., Orthosiphon pallidus Royle ex Benth., Pongamia pinnata (L.) Merr., Nyctanthes arbor-tristis L., Calotropis gigantea (L.) R. Br., Capsicum annuum L., Phyllanthus fraternus L., Plectranthus sp., Elephantopus scaber Linn., Combretum decandrum Roxb., Holarrhena antidysenterica Wall., Cleome viscosa L., Vernonia roxburghii Less., and Achyranthes aspera L. are also available in northeast India, but no report on their use in any part of northeast India has yet been published. The knowledge of plants used in the treatment of malaria in northeast India, combined with the high level of correlation found with the uses of these plants (or related species) in diverse parts of India, indicates the inheritance of our ancestral knowledge throughout the country. It sometimes represents the only available alternative malaria treatment in remote communities of northeast India. Table fragment (species, family, local name, part used, preparation):
- [species name lost]: leaf; the leaves are boiled and the water is used for bathing, and the leaf paste is applied on the whole body as an effective cure for chronic fever/malaria
- Alstonia scholaris R.Br.: rhizome and root; the extract is a bitter tonic
- Vandellia sessiliflora Benth. (Scrophulariaceae): whole plant; decoction of the whole plant is used
- Vitex peduncularis Wall. (Verbenaceae), Thing-khawi-lu (Mi): bark, leaf, stem; the bark is crushed and boiled, and the steam vapor is inhaled by a patient suffering from malarial fever; infusion of leaves or of root bark or young stem bark is useful in malaria and blackwater fever
- Picrorhiza kurrooa Benth. (Scrophulariaceae): root; pounded in water and given
- Xanthium strumarium L. (Asteraceae), Agara (Ass): leaf
- Zanthoxylum hamiltonianum Wall. (Rutaceae): root and bark
Abbreviations: (Ass) Assamese, (H) Hindi, (Beng) Bengali, (M) Manipuri, (Mi) Mizo, (S) Sanskrit.
Four species from Assam, Arunachal Pradesh, Meghalaya, and Mizoram were also found to be used as mosquito repellents. Though species like Homalomena aromatica, Ocimum gratissimum, Elsholtzia blanda, and Eucalyptus globulus are reported as repellents, whether these plants are repellents or insecticides or both has not yet been sufficiently proved. Local people of this region used these plants as a substitute for DDT and other insecticides, as it is well known that DDT and other insecticides have adverse effects on the environment and human health. Several classes of secondary plant metabolites are responsible for antimalarial activity, but the most important and diverse biopotency has been observed in alkaloids, quassinoids, and sesquiterpene lactones. The active compounds isolated from antimalarial plants have been compiled from the review work of Saxena and others. Plants which produce different antimalarial compounds, namely, alkaloids, quassinoids, sesquiterpenes, triterpenoids, flavonoids, etc., can be very important sources of antimalarial drugs. These compounds have low, moderate, or high in vitro antiplasmodial activity, whereas some of them are inactive. They also gave a critical account of crude extracts, essential oils, and active constituents with diverse chemical structures from higher plants possessing significant antimicrobial activity. The information obtained also included many further details. CONCLUSION The present survey has provided information about the range of plant species used in the treatment of malaria in northeast India. Accordingly, researchers should consider the ethnomedical information on all species before deciding which kind of screening should be used in the search for antimalarial drugs.
This creates good scope for the pharmaceutical sciences to develop a new drug for malaria by combining drugs acting against Plasmodium with anti-inflammatory drugs and hepatic protectors, using this traditional information and supplementing it with chemical analysis, pharmacological studies, and in vitro assays. Traditional healers working in very remote parts of the region pay much attention to treating various kinds of ailments. While using herbs for treatment, it has been observed that some are truly devoted to the methodology of treatment, whereas others concentrate only on their use. There is a need to generate reliable scientific data to determine whether the plants currently used to treat malaria are actually effective. In the long term, this should help to prevent deaths due to ignorance and the misuse of plants for self-medication in the absence of advice from a qualified medical professional. Individual plants are rarely used alone. In most cases, they are used as mixtures. It will never be easy to determine which plants are likely to be the most useful and should be examined to isolate pure active compounds. Some antimalarial plants are used for preparing baths or for inhalations (aromatic plants).
steps = [(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)]

main = lambda x: x

def main_debug(inp):
    rep = []
    inp = inp.split('\n')
    for R, D in steps:
        nb = x = y = 0
        maxx = len(inp[0])
        while y < len(inp):
            if inp[y][x] == '#':
                nb += 1
            x = (R + x) % maxx
            y += D
        rep.append(nb)
    rep_ = 1
    for i in rep:
        rep_ *= i
    return rep_

if __name__ == "__main__":
    # FIXME 2983070376
    print((lambda i,s,n,x,r:[r:=[n:=n+1 for y in range(1,len(i))if(x:=(R+x)%len(i[0]))+1 and i[y][x]=='#'][-1]*r for R,D in s][-1])(open("i").readlines(),[(1,1),(3,1),(5,1),(7,1),(1,2)],0,0,1))
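The traversal logic above can be exercised on a tiny, made-up grid (the grid, slopes, and expected counts below are illustrative, not the puzzle input):

```python
def count_trees(grid, right, down):
    # Walk the grid from the top-left, moving (right, down) each step;
    # the pattern repeats horizontally, so x wraps with modulo.
    hits = 0
    x = 0
    width = len(grid[0])
    for y in range(0, len(grid), down):
        if grid[y][x] == '#':
            hits += 1
        x = (x + right) % width
    return hits

grid = [
    "#..",
    ".#.",
    "..#",
    "#..",
]

# multiply the per-slope counts, as the puzzle's second part requires
product = 1
for right, down in [(1, 1), (2, 1), (1, 2)]:
    product *= count_trees(grid, right, down)
print(product)  # 4 * 2 * 1 = 8
```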
Impact of fetal drug exposures on the adolescent brain. Drug use during pregnancy, which in the United States is estimated at 1 million individuals each year,1 puts the newborn at increased risk for multiple adverse health outcomes. While the deleterious effects of alcohol, and to a lesser extent nicotine, on the fetus are well recognized, those of illicit drugs are much less understood. Drugs can harm the fetus both via effects on the placenta that interfere with nutrient delivery and through direct effects on the fetus. Moreover, the high lipophilicity of drugs ensures that significant concentrations reach the fetal brain and other organs.2 Thus, the consequences of fetal drug exposure on brain development and function, and ultimately on behavior, constitute an important area of research. The article by Liu et al3 in the current issue is an example of such a study that takes advantage of brain imaging technologies and cognitive assessment in a prospective cohort of 40 adolescents (13-15 years of age) born to mothers who used tobacco (nicotine) or cocaine or both during pregnancy. This study reported distinct brain morphologic effects for these 2 drugs, documenting decreased cortical thickness in the right dorsolateral prefrontal cortex of adolescents exposed to cocaine and decreased volumes of the globus pallidus in those exposed to nicotine. It also reported that larger volumes of the thalamus were associated with greater impulsivity both in adolescents exposed to cocaine and in those exposed to nicotine. Since an impaired prefrontal cortex is associated with low self-control, and impulsivity is a risk factor for substance use disorders, these findings would be consistent with an increased risk for externalizing disorders in adolescents exposed to cocaine and nicotine during pregnancy.
Indeed, smoking during pregnancy may account for up to 25% of externalizing behaviors.4 However, from such studies, it is not possible to disentangle the factors that are related to fetal drug exposures from those that pertain to genetic factors that modulate brain development and function, as well as the interactions between specific gene(s) and fetal drug exposures.5 For example, there is evidence from brain imaging studies comparing monozygotic and dizygotic twins that the volume of the prefrontal lobe (as well as that of other brain regions)6 and cortical thickness7 are genetically determined. Similarly, there is increased recognition from preclinical and clinical studies (mostly in adults) that the effects of drugs on the brain are modulated by genetics.8-10 Additionally, genetic factors in the mother are likely to determine who continues to take drugs during pregnancy and who does not, highlighting the importance of controlling for parental characteristics and for relevant environmental exposures (ie, social stress). Drug use during and after pregnancy is likely to influence the interaction of the mother with the infant and the family environment in which the child grows, which in turn can influence brain development. Mothers who abuse drugs during pregnancy are more likely to display impaired parenting skills,11 and in turn poor parental interactions are associated with impaired brain development and function.12 For example, a recent study showed that cumulative life stress was associated with smaller volumes in the prefrontal cortex,13 which in turn mediated the relationship between early childhood stress and impaired working memory.14 Based on the results of preclinical studies, it is also apparent that doses, drug combinations, and the trimester of exposure influence the outcomes for the fetus.15 In this respect, drug combinations (particularly those that include alcohol, which is highly teratogenic) are particularly deleterious.
The findings from Liu et al3 highlight the importance of research on fetal drug exposures on brain development and identify the need for developing standards of healthy brain development as a function of sex and age (including pubertal status). Standards for normal brain development and function will require the creation of large brain imaging databases of healthy children that would serve as a normative baseline for comparison with, for example, the brain images of children with fetal drug exposures and/or brain diseases. The feasibility of creating such large databases from brain images of children and adolescents is already being realized by several projects, including the 1000 Functional Connectomes Project (initiated with brain images from adults and then expanded to children),16 which integrated images from independent investigators with little standardization, as well as projects such as the IMAGEN Study17 that standardized procedures from the inception. Such databases will also allow researchers to study how genes associated with brain developmental disorders (ie, addiction and schizophrenia) influence brain structure and function. For example, a recent study (n = 79) reported that the neuregulin-1 (NRG-1) gene, which is a leading candidate gene for schizophrenia, was associated with decreased gray matter volume in the frontal cortex in individuals carrying the risk allele.18 Replication in larger data sets will allow the assessment of whether NRG-1's modulation of the risk for schizophrenia is mediated by its effects on brain structure. In parallel, such databases will benefit from detailed phenotypic characterization, ideally along a dimensional continuum, so that particular brain characteristics can be linked to particular outcomes.
This is illustrated by a recent study of adolescents (n=1896) in which distinct impulsivity phenotypes were associated with specific neuronal networks, which in turn predicted functional outcomes such as substance use and attention-deficit/hyperactivity disorder symptoms (ie, inattention and impulsivity).19 Finally, the findings from Liu et al3 demonstrate in humans that both legal and illegal drug exposures during fetal development influence brain structure during adolescence, and they highlight the urgency of instituting prevention programs against drug use during pregnancy and treatment interventions for pregnant women who abuse drugs.
#!/usr/bin/env python
# -*- coding: utf-8; py-indent-offset:4 -*-
###############################################################################
#
# Copyright (C) 2015, 2016, 2017 <NAME>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
###############################################################################
from __future__ import (absolute_import, division, print_function,
                        unicode_literals)

from backtrader.utils.py3 import filter, string_types, integer_types

from backtrader import date2num
import backtrader.feed as feed


class PandasDirectData(feed.DataBase):
    '''
    Uses a Pandas DataFrame as the feed source, iterating directly over the
    tuples returned by "itertuples".

    This means that all parameters related to lines must have numeric values
    as indices into the tuples

    Note:

      - The ``dataname`` parameter is a Pandas DataFrame

      - A negative value in any of the parameters for the Data lines
        indicates it's not present in the DataFrame it is
    '''

    params = (
        ('datetime', 0),
        ('open', 1),
        ('high', 2),
        ('low', 3),
        ('close', 4),
        ('volume', 5),
        ('openinterest', 6),
    )

    datafields = [
        'datetime', 'open', 'high', 'low', 'close', 'volume', 'openinterest'
    ]

    def start(self):
        super(PandasDirectData, self).start()

        # reset the iterator on each start
        self._rows = self.p.dataname.itertuples()

    def _load(self):
        try:
            row = next(self._rows)
        except StopIteration:
            return False

        # Set the standard datafields - except for datetime
        for datafield in self.getlinealiases():
            if datafield == 'datetime':
                continue

            # get the column index
            colidx = getattr(self.params, datafield)

            if colidx < 0:
                # column not present -- skip
                continue

            # get the line to be set
            line = getattr(self.lines, datafield)

            # indexing for pandas: 1st is colum, then row
            line[0] = row[colidx]

        # datetime
        colidx = getattr(self.params, 'datetime')
        tstamp = row[colidx]

        # convert to float via datetime and store it
        dt = tstamp.to_pydatetime()
        dtnum = date2num(dt)

        # get the line to be set
        line = getattr(self.lines, 'datetime')
        line[0] = dtnum

        # Done ... return
        return True


class PandasData(feed.DataBase):
    '''
    Uses a Pandas DataFrame as the feed source, using indices into column
    names (which can be "numeric")

    This means that all parameters related to lines must have numeric values
    as indices into the tuples

    Params:

      - ``nocase`` (default *True*) case insensitive match of column names

    Note:

      - The ``dataname`` parameter is a Pandas DataFrame

      - Values possible for datetime

        - None: the index contains the datetime
        - -1: no index, autodetect column
        - >= 0 or string: specific colum identifier

      - For other lines parameters

        - None: column not present
        - -1: autodetect
        - >= 0 or string: specific colum identifier
    '''

    params = (
        ('nocase', True),

        # Possible values for datetime (must always be present)
        #  None : datetime is the "index" in the Pandas Dataframe
        #  -1 : autodetect position or case-wise equal name
        #  >= 0 : numeric index to the colum in the pandas dataframe
        #  string : column name (as index) in the pandas dataframe
        ('datetime', None),

        # Possible values below:
        #  None : column not present
        #  -1 : autodetect position or case-wise equal name
        #  >= 0 : numeric index to the colum in the pandas dataframe
        #  string : column name (as index) in the pandas dataframe
        ('open', -1),
        ('high', -1),
        ('low', -1),
        ('close', -1),
        ('volume', -1),
        ('openinterest', -1),
    )

    datafields = [
        'datetime', 'open', 'high', 'low', 'close', 'volume', 'openinterest'
    ]

    def __init__(self):
        super(PandasData, self).__init__()

        # these "colnames" can be strings or numeric types
        colnames = list(self.p.dataname.columns.values)
        if self.p.datetime is None:
            # datetime is expected as index col and hence not returned
            pass

        # try to autodetect if all columns are numeric
        cstrings = filter(lambda x: isinstance(x, string_types), colnames)
        colsnumeric = not len(list(cstrings))

        # Where each datafield find its value
        self._colmapping = dict()

        # Build the column mappings to internal fields in advance
        for datafield in self.getlinealiases():
            defmapping = getattr(self.params, datafield)

            if isinstance(defmapping, integer_types) and defmapping < 0:
                # autodetection requested
                for colname in colnames:
                    if isinstance(colname, string_types):
                        if self.p.nocase:
                            found = datafield.lower() == colname.lower()
                        else:
                            found = datafield == colname

                        if found:
                            self._colmapping[datafield] = colname
                            break

                if datafield not in self._colmapping:
                    # autodetection requested and not found
                    self._colmapping[datafield] = None
                    continue
            else:
                # all other cases -- used given index
                self._colmapping[datafield] = defmapping

    def start(self):
        super(PandasData, self).start()

        # reset the length with each start
        self._idx = -1

        # Transform names (valid for .ix) into indices (good for .iloc)
        if self.p.nocase:
            colnames = [x.lower() for x in self.p.dataname.columns.values]
        else:
            colnames = [x for x in self.p.dataname.columns.values]

        for k, v in self._colmapping.items():
            if v is None:
                continue  # special marker for datetime

            if isinstance(v, string_types):
                try:
                    if self.p.nocase:
                        v = colnames.index(v.lower())
                    else:
                        v = colnames.index(v)
                except ValueError as e:
                    defmap = getattr(self.params, k)
                    if isinstance(defmap, integer_types) and defmap < 0:
                        v = None
                    else:
                        raise e  # let user now something failed

            self._colmapping[k] = v

    def _load(self):
        self._idx += 1

        if self._idx >= len(self.p.dataname):
            # exhausted all rows
            return False

        # Set the standard datafields
        for datafield in self.getlinealiases():
            if datafield == 'datetime':
                continue

            colindex = self._colmapping[datafield]
            if colindex is None:
                # datafield signaled as missing in the stream: skip it
                continue

            # get the line to be set
            line = getattr(self.lines, datafield)

            # indexing for pandas: 1st is colum, then row
            line[0] = self.p.dataname.iloc[self._idx, colindex]

        # datetime conversion
        coldtime = self._colmapping['datetime']

        if coldtime is None:
            # standard index in the datetime
            tstamp = self.p.dataname.index[self._idx]
        else:
            # it's in a different column ... use standard column index
            tstamp = self.p.dataname.iloc[self._idx, coldtime]

        # convert to float via datetime and store it
        dt = tstamp.to_pydatetime()
        dtnum = date2num(dt)
        self.lines.datetime[0] = dtnum

        # Done ... return
        return True
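The case-insensitive column autodetection performed in `PandasData.__init__` can be sketched standalone (plain Python, no backtrader dependency; the function name `map_columns` is illustrative, not part of the library API):

```python
def map_columns(colnames, datafields, nocase=True):
    # For each data field, find the first column whose name matches it,
    # case-insensitively when nocase is True; None marks an absent column.
    mapping = {}
    for field in datafields:
        mapping[field] = None
        for col in colnames:
            if not isinstance(col, str):
                continue
            found = (col.lower() == field.lower()) if nocase else (col == field)
            if found:
                mapping[field] = col
                break
    return mapping

# typical OHLCV frame with capitalized column names
cols = ['Open', 'High', 'Low', 'Close', 'Volume']
fields = ['open', 'high', 'low', 'close', 'volume', 'openinterest']
mapping = map_columns(cols, fields)
print(mapping['open'])          # 'Open'
print(mapping['openinterest'])  # None
```

With `nocase=False`, none of the capitalized columns would match the lowercase field names, which is why the parameter defaults to case-insensitive matching.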
RELATIONSHIP BETWEEN VESSEL PARAMETERS AND CLEAVAGE ASSOCIATED WITH CHECKING IN Eucalyptus grandis WOOD In this work, the relationship between vessel parameters and the wood cleavage strength was studied to clarify the process of formation of this type of check, which is very common in Eucalyptus wood. The objective was to identify the relationship between the wood cleavage strength, the average area of the vessel, and the percentage area of vessels on the wood transverse surface. For this, two 22-year-old Eucalyptus grandis trees were felled and specimens for the cleavage test were produced to determine the wood cleavage strength. From these specimens, samples were taken to determine the average area of the vessel and the percentage area of vessels, aiming at adjusting mathematical models that explain the variation in the cleavage strength. The results showed that the higher the average area of the vessel and the percentage of area occupied by vessels in the wood, the lower its cleavage strength. The multiple linear regression model can estimate the cleavage strength as a function of the average area of the vessel and the percentage area of the vessels. INTRODUCTION As a sustainable alternative to the use of wood from Brazilian native species in the sawn timber industry, species of the genus Eucalyptus have been used. However, this wood is prone to defects caused by the release of growth stresses and the stresses generated during the drying process. The presence of checks is a negative factor for the sawn timber production sector, resulting in loss of sawn yield, fewer possibilities for use, and consequently, loss of commercial value. The classic mechanics equations indicate that if the frequency of orifices in a porous solid is higher, the stress concentration factor also increases (Mattheck and Kubler 1997). This is because the pores are discontinuities, considered as flaws, that do not offer mechanical resistance to stresses applied to the solid.
Thus, the higher the porosity, the higher the concentration of stresses on the solid structure of the material and the higher the probability of fractures occurring (Ugural and Fenster 1995, Craig 2011, Sanford 2003). In wood, the pores are the vessels seen in the transverse plane; that is, the higher the concentration of vessels, the higher the chances of fracture occurring in the material. According to the mechanics of solids (Ugural and Fenster 1995, Craig 2011, Sanford 2003), the maximum tangential stress generates the wedge effect on the structure of solid wood, starting the fracture from the geometric center of the vessel. Sanford indicates that the resulting stress, perpendicular to the direction of the original stress at the orifice edge, is about three times higher than the maximum tangential stress applied to the porous solid. Gacita et al. analyzed samples taken from wood discs, both with and without checks, of Eucalyptus nitens at 16 years of age. They observed that the average area of the vessel was higher in the samples with visible checks than in the samples without visible checks. Valenzuela et al. (2012a) analyzed the checks and characteristics of the vessels in 12-year-old E. nitens wood samples and found a higher vessel diameter and higher average area of vessels in the wood with a higher percentage of checks. Soares et al., studying the juvenile wood and the mature wood of E. cloeziana at the age of 37 years, stated that the lower vessel frequency and higher vessel diameter found in mature wood were associated with a lower occurrence of end checks. It is possible to observe in logs or pieces of sawn wood that the checks that occur due to the release of growth stresses, and/or as a consequence of the drying process, most often follow the notations I-TR and I-TL (Bodig and Jayne 1982).
This means that the stresses occur mainly in the cleavage mode, with the stress that causes the check opening following the tangential direction, and the propagation front following the radial (I-TR) or longitudinal (I-TL) direction. Thus, it is believed that the wood may be more prone to checking when the material is less resistant in the cleavage test. In this context, the objective of this work was to contribute to clarifying the relationship between the wood mechanical cleavage strength, the average area of the vessel, and the percentage area of vessels. Material origin and collection Two 22-year-old Eucalyptus grandis trees were selected among those present in an experimental planting on the campus of the Federal University of Lavras, located in the municipality of Lavras - MG, latitude 2°14'4'' south and longitude 44°59'5'' west. The material biometric characterization was performed by measuring the diameter at breast height (DBH), total height, and commercial height of the logs to be sawn, considering up to a minimum diameter of 20 cm. This diameter was set to allow the specimens for mechanical tests to be prepared from the central planks that were obtained after sawing the logs. Cleavage mechanical test The base logs were chosen (0 m to 3 m height) and cross-sectioned to obtain small logs 1,5 m in length. The option for the base logs was based on the hypothesis that this region better represents the wood properties of the whole tree, considering old trees. From the logs, central planks were obtained using a vertical band saw by the alternate method, parallel to the log center. The central planks of each tree were sawn to make 25 specimens per tree. The production of defect-free specimens followed the suggestions of ASTM D143-14, without distinction of heartwood and sapwood, or juvenile wood and mature wood. The cleavage mechanical test was performed using an EMIC universal testing machine, model DL 30000.
The specimens were tested in the green condition. It was decided to produce the specimens in such a way that the cleavage occurred in the direction of the rays, aiming at the provocation of checks of the I-TL type. This method was chosen taking into account that the checks occur more frequently in the direction of the rays (I-TR and I-TL types; Bodig and Jayne 1982). Vessel parameters analysis Among the 50 specimens tested, 30 were selected (the 10 most resistant, 10 of intermediate resistance, and the 10 least resistant) for the production of cubes of approximately 1 cm, used in the analysis of the vessels seen in the transverse plane. The cubes were removed in the region close to the check, as shown in Figure 1. Using a light stereomicroscope with a coupled camera, images of the wood transverse plane were obtained, called areas of interest. The images were binarized with the aid of the ImageJ software so that the vessels were black and the other areas white. Analyzing the binary images, the measurement of the areas occupied by vessels in each area of interest was also performed using the ImageJ software. In this study, the "average area of the vessel" (AAV) per sample was measured and calculated using Equation 1. The percentage of area occupied by vessels on the wood transverse surface was also measured, and this variable is here called the "percentage area of vessels" (PAV). The PAV was calculated using Equation 2. Where: AAV = average area of the vessel of one sample (µm²); AV = area of one entire vessel present on an area of interest measuring 1,12 mm² (µm²); n = number of entire vessels present on the area of interest that measures 1,12 mm². Where: PAV = percentage area of vessels of one sample (%); AV = area that one vessel occupies on the area of interest that measures 1,12 mm² (mm²); 1,12 = area of interest (mm²). Three different areas of interest, of approximately 1,12 mm² each, were analyzed on the transverse surface of each wood sample.
Thus, AAV and PAV per sample were obtained by averaging these three areas (repetitions), as shown in Equation 1 and Equation 2. Data statistical analysis Initially, the data of wood cleavage strength, the average area of the vessel (AAV), and percentage area of vessels (PAV) were subjected to descriptive statistical analysis to assess the averages and data variation. Simple regression analyses were performed to adjust models in order to explain the behavior of cleavage strength as a function of the AAV and the PAV for the evaluated samples. Multiple regression analyses were also carried out to adjust models capable of predicting the cleavage strength using the vessel parameters mentioned as a basis. RESULTS AND DISCUSSION Table 1 presents the results of the descriptive statistical analysis performed for the data of cleavage strength, the average area of the vessel (AAV), and the percentage area of vessels (PAV). The average cleavage strength of 0,64 MPa is higher than the average found by Araújo, of 0,57 MPa, calculated from the values for 115 Brazilian tropical species. The cleavage strength values seen in Table 1 are within the range found by Dias and Lahr, who found cleavage strength averages ranging between 0,40 MPa and 2,00 MPa for the 40 hardwood species analyzed in their study. About the AAV, the average of 12493 µm² is higher than that found by Barotto et al., of 9673 µm², in wood of four E. grandis clones analyzed at 18 years of age. This may be related to the age difference between the materials evaluated, in addition to the influence of the possible difference in water availability at different sites, which affects vessel dimensions (Carlquist 2001). With regard to PAV, the average of 18,7 % is higher than the average of 14,4 % found by Barotto et al., indicating that the average area of the vessel tends to directly influence the percentage area of the vessels.
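Equation 1 and Equation 2 reduce to simple averages over the measured vessel areas; a sketch with made-up numbers (vessel areas in µm², with the 1,12 mm² area of interest expressed as 1.12e6 µm²):

```python
AREA_OF_INTEREST_UM2 = 1.12e6  # 1,12 mm^2 expressed in um^2

def aav(vessel_areas_um2):
    # Equation 1: average area of the vessel within one area of interest
    return sum(vessel_areas_um2) / len(vessel_areas_um2)

def pav(vessel_areas_um2):
    # Equation 2: percentage of the area of interest occupied by vessels
    return 100.0 * sum(vessel_areas_um2) / AREA_OF_INTEREST_UM2

# per-sample values average the three areas of interest (repetitions)
repetitions = [[12000.0, 13000.0], [11000.0, 14000.0, 12500.0], [13500.0]]
sample_aav = sum(aav(r) for r in repetitions) / len(repetitions)
sample_pav = sum(pav(r) for r in repetitions) / len(repetitions)
```

The vessel-area lists here stand in for the per-vessel measurements that ImageJ produces from the binarized images.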
Equation 3 and Equation 4 show the models adjusted for cleavage strength as a function of AAV and PAV, respectively. This behavior is also illustrated in Figure 2 and Figure 3. The coefficients of determination (R²) of the regressions for cleavage strength as a function of AAV, and for cleavage strength as a function of PAV, were 0,546 and 0,538, respectively. The adjusted models were tested and presented significance at a 1 % probability of error, as shown in Table 2 and Table 3. The behaviors seen in Figure 2 and Figure 3 indicate that there is an inverse relationship between the parameters of the vessels evaluated in the present study and the wood cleavage strength. Based on this, it is possible to infer that the higher the average area of the vessel and the higher the percentage area occupied by the vessels on the wood transverse surface, the higher the occurrence of I-TL type checks. These results corroborate those found by Gacita et al., Valenzuela et al. (2012a), and Valenzuela et al. (2012b), who observed a higher intensity of checks in the woods that presented higher AAV. These statements also support what was said by Ugural and Fenster, Craig, and Sanford regarding the concentration of stresses at the edges of orifices contained in solids. Based on the results observed for the cleavage strength as a function of AAV and PAV, a multiple linear regression model was adjusted for these parameters. Figure 4 illustrates the graphic behavior obtained from this analysis. Figure 4 clearly shows that the wood mechanical cleavage strength decreases when the average area of the vessels and the percentage of the transverse surface area occupied by vessels increase. In this way, the wood will tend to be less prone to checks when the area occupied by vessels in the wood is smaller. Table 4 shows the significance test of the parameters used to adjust the model.
The variable that most contributed to the adjustment of the multiple regression model was AAV, while PAV contributed less. This inference is based on the Student's t-statistic shown in Table 4, which indicates a lower contribution of a parameter to the model when the calculated t-value is closer to zero. Table 5 shows that the regression is significant at a 1 % probability of error. The coefficient of determination (R²) of the model is equal to 0,675 and the adjusted coefficient of determination (R² adj.) is equal to 0,645. The R² found in the multiple linear regression analysis is higher than the R² of the simple linear regressions adjusted for each parameter separately. It is interesting that the multiple regression model had a better fit than the simple regression models, taking into account that the multiple regression model is considered more complex because it involves two independent variables. It can be said that the multiple regression model was able to estimate the cleavage strength more accurately based on the average area of the vessel and the percentage area of vessels, because it is more complete than the simple regression models. CONCLUSIONS Eucalyptus grandis wood, analyzed at 22 years of age, has an average cleavage strength of 0,64 MPa. The average area of the vessel in this wood is 12493 µm², while the average percentage area occupied by vessels on the transverse surface is 18,7 %. Vessel parameters are inversely related to cleavage strength, pointing out that the higher the average area of the vessel and the percentage area occupied by vessels on the transverse surface, the more prone the wood tends to be to the incidence of checks of the I-TL type.
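Assuming the usual definition of the adjusted coefficient of determination, R²adj = 1 - (1 - R²)(n - 1)/(n - p - 1), the reported values can be sanity-checked; the sample size n = 30 below follows the 30 specimens selected for vessel analysis, though the exact n used in the paper's fit is an assumption:

```python
def adjusted_r2(r2, n, p):
    # r2: coefficient of determination, n: observations, p: predictors
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# the multiple model with two predictors (AAV and PAV)
print(round(adjusted_r2(0.675, 30, 2), 3))  # 0.651
```

The result, 0,651, is close to the reported 0,645; any small gap would trace to the exact n and rounding used in the original fit.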
The multiple linear regression model is able to estimate the cleavage strength as a function of the average area of the vessel, and the percentage area of vessels, with higher precision than the simple linear regression models adjusted from these independent variables separately.
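The comparison described above — simple regressions of cleavage strength on AAV and PAV separately, versus one multiple regression on both — can be sketched numerically. The data below are synthetic and purely illustrative (not the study's measurements); the point is only that the multiple model's R² is never below either simple model's R².

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the paper's variables:
# aav = average vessel area (um^2), pav = percentage area of vessels (%),
# y = cleavage strength (MPa). True coefficients are negative, mirroring
# the inverse relationship reported in the text.
n = 30
aav = rng.uniform(8000, 16000, n)
pav = rng.uniform(12, 25, n)
y = 1.5 - 4e-5 * aav - 0.02 * pav + rng.normal(0, 0.05, n)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Simple linear regressions, one predictor at a time
def simple_fit(x, y):
    b, a = np.polyfit(x, y, 1)  # slope, intercept
    return r_squared(y, a + b * x)

r2_aav = simple_fit(aav, y)
r2_pav = simple_fit(pav, y)

# Multiple linear regression: y = b0 + b1*AAV + b2*PAV
X = np.column_stack([np.ones(n), aav, pav])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2_multi = r_squared(y, X @ beta)

print(r2_aav, r2_pav, r2_multi)
```

Because the multiple model nests each simple model (it can always set one coefficient to zero), its R² is mathematically guaranteed to be at least as high — consistent with the 0,675 versus 0,546/0,538 comparison in the text.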
The Race to the Bottom, from the Bottom The dominant perspective in discussions of labour and environmental standards and globalization is that of North-South competition and its impact on Northern standards. This paper presents an alternative perspective, that of South-South competition to export to the North and its impact on Southern standards. It develops a simple model of Southern competition, and demonstrates that whether a Southern race to the bottom is possible depends intricately on the Northern demand curve, the size of large exporters relative to each other and the relative size of the competitive fringe of small exporters. The possibility that Northern trade protectionism may undermine Southern standards is also examined. Copyright (c) The London School of Economics and Political Science 2006.
/*
 * Copyright (c) 2013-2019 Cinchapi Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.cinchapi.concourse.security;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Iterator;
import java.util.Map;

import com.cinchapi.common.base.CheckedExceptions;
import com.cinchapi.concourse.annotate.Restricted;
import com.cinchapi.concourse.server.io.ByteSink;
import com.cinchapi.concourse.server.io.Byteable;
import com.cinchapi.concourse.server.io.ByteableCollections;
import com.cinchapi.concourse.server.io.FileSystem;
import com.cinchapi.concourse.util.ByteBuffers;
import com.google.common.collect.Maps;

/**
 * The {@link LegacyAccessManager} controls access to the pre-0.5.0
 * Concourse server by keeping track of valid credentials and
 * handling authentication requests. This LegacyAccessManager is used
 * to upgrade pre-0.5.0 user credentials to work with {@link UserService}.
 *
 * @author knd
 */
public class LegacyAccessManager {

    /**
     * Create a LegacyAccessManager that stores its legacy
     * credentials in {@code backingStore}.
     *
     * @param backingStore
     * @return the LegacyAccessManager
     */
    public static LegacyAccessManager create(String backingStore) {
        return new LegacyAccessManager(backingStore);
    }

    // The legacy credentials are stored in memory
    private final Map<String, Credentials> credentials = Maps
            .newLinkedHashMap();

    /**
     * Construct a new instance.
     *
     * @param backingStore
     */
    private LegacyAccessManager(String backingStore) {
        Iterator<ByteBuffer> it = ByteableCollections
                .iterator(FileSystem.readBytes(backingStore));
        while (it.hasNext()) {
            LegacyAccessManager.Credentials creds = Credentials
                    .fromByteBuffer(it.next());
            credentials.put(creds.getUsername(), creds);
        }
    }

    /**
     * Transfer the legacy {@link #credentials} managed by
     * this LegacyAccessManager to the specified {@code AccessManager} so that
     * they are now working with and managed by {@code AccessManager}.
     *
     * @param manager
     */
    public void transferCredentials(UserService manager) {
        for (LegacyAccessManager.Credentials creds : credentials.values()) {
            manager.insertFromLegacy(
                    ByteBuffers.decodeFromHex(creds.getUsername()),
                    ByteBuffers.decodeFromHex(creds.getPassword()),
                    creds.getSalt());
        }
    }

    /**
     * Create a user with {@code username} and {@code password} as
     * a legacy {@link Credentials} managed by this LegacyAccessManager.
     * This method should only be used for testing.
     *
     * @param username
     * @param password
     */
    @Restricted
    protected void createUser(ByteBuffer username, ByteBuffer password) { // visible for testing
        ByteBuffer salt = Passwords.getSalt();
        // hash the password with the salt before storing
        password = Passwords.hash(password, salt);
        Credentials creds = LegacyAccessManager.Credentials.create(
                ByteBuffers.encodeAsHex(username),
                ByteBuffers.encodeAsHex(password),
                ByteBuffers.encodeAsHex(salt));
        credentials.put(creds.getUsername(), creds);
    }

    /**
     * Sync the memory store {@link #credentials} to disk
     * at {@code backingStore}. This method should only be used
     * for testing.
     *
     * @param backingStore
     */
    @Restricted
    protected void diskSync(String backingStore) { // visible for testing
        FileChannel channel = FileSystem.getFileChannel(backingStore);
        ByteBuffer bytes = ByteableCollections
                .toByteBuffer(credentials.values());
        try {
            channel.write(bytes);
        }
        catch (IOException e) {
            throw CheckedExceptions.wrapAsRuntimeException(e);
        }
        finally {
            FileSystem.closeFileChannel(channel);
        }
    }

    /**
     * A grouping of a username, password and salt that together identify a
     * valid authentication scheme for a user.
     *
     * @author knd
     */
    private static final class Credentials implements Byteable {

        /**
         * Create a new set of Credentials for {@code username},
         * {@code password} hashed with {@code salt}.
         *
         * @param username
         * @param password
         * @param salt
         * @return the Credentials
         */
        public static Credentials create(String username, String password,
                String salt) {
            return new Credentials(username, password, salt);
        }

        /**
         * Deserialize the Credentials that are encoded in {@code bytes}.
         *
         * @param bytes
         * @return the Credentials
         */
        public static Credentials fromByteBuffer(ByteBuffer bytes) {
            String password = ByteBuffers.encodeAsHex(
                    ByteBuffers.get(bytes, Passwords.PASSWORD_LENGTH));
            String salt = ByteBuffers
                    .encodeAsHex(ByteBuffers.get(bytes, Passwords.SALT_LENGTH));
            String username = ByteBuffers
                    .encodeAsHex(ByteBuffers.get(bytes, bytes.remaining()));
            return new Credentials(username, password, salt);
        }

        private final String password;
        private final String salt;

        // These are hex encoded values. It is okay to keep them in memory as
        // strings since the actual password can't be reconstructed from the
        // string hash.
        private final String username;

        /**
         * Construct a new instance.
         *
         * @param username
         * @param password
         * @param salt
         */
        private Credentials(String username, String password, String salt) {
            this.username = username;
            this.password = password;
            this.salt = salt;
        }

        @Override
        public ByteBuffer getBytes() {
            ByteBuffer bytes = ByteBuffer.allocate(size());
            // wrap the buffer in a ByteSink so copyTo can write into it
            copyTo(ByteSink.to(bytes));
            bytes.rewind();
            return bytes;
        }

        /**
         * Return the hex encoded password.
         *
         * @return the password hex
         */
        public String getPassword() {
            return password;
        }

        /**
         * Return the salt as a ByteBuffer.
         *
         * @return the salt bytes
         */
        public ByteBuffer getSalt() {
            return ByteBuffers.decodeFromHex(salt);
        }

        /**
         * Return the hex encoded username.
         *
         * @return the username hex
         */
        public String getUsername() {
            return username;
        }

        @Override
        public int size() {
            return Passwords.PASSWORD_LENGTH + Passwords.SALT_LENGTH
                    + ByteBuffers.decodeFromHex(username).capacity();
        }

        @Override
        public String toString() {
            StringBuilder sb = new StringBuilder();
            sb.append("username: " + username)
                    .append(System.getProperty("line.separator"));
            sb.append("password: " + password)
                    .append(System.getProperty("line.separator"));
            sb.append("salt: " + salt)
                    .append(System.getProperty("line.separator"));
            return sb.toString();
        }

        @Override
        public void copyTo(ByteSink buffer) {
            buffer.put(ByteBuffers.decodeFromHex(password));
            buffer.put(ByteBuffers.decodeFromHex(salt));
            buffer.put(ByteBuffers.decodeFromHex(username));
        }
    }
}
Sites of extranodal involvement are prognostic in patients with stage I follicular lymphoma

Objectives: Follicular lymphoma (FL) is the most common indolent B cell lymphoma in the United States, and a quarter of patients present with stage I disease. The objective of this study was to examine whether primary site of disease influences survival in early stage lymphoma.

Methods: We queried the SEER database from 1983 to 2011. We included all adult patients (>18 years) with histologically confirmed stage I FL, active follow-up, and a single primary tumor. A total of 9,865 patients met eligibility criteria, with 2,520 (25%) having an extranodal primary site. We classified the primary sites by organ or anatomic location into 11 sites.

Results: The most common extranodal primary sites were the integumentary system (8%), followed by the GI tract (6.4%) and head & neck (5.6%). We stratified patients into a pre-rituximab era and the rituximab era. In multivariable analysis, integumentary disease was associated with better overall survival (Hazard Ratio [HR], 0.77; Confidence Interval [CI], 0.66-0.9), while primary site FL of the nervous system (HR, 2.40; CI, 1.72-3.38) and the musculoskeletal system (HR, 2.14; CI, 1.44-3.18) were associated with worse overall survival when compared to primary nodal FL. Treatment in the pre-rituximab era, male gender, and older age at diagnosis were associated with worse survival.

Conclusion: Primary site of disease is a prognostic factor for patients with early stage FL and may help identify subsets of patients that could benefit from early, aggressive treatment.

INTRODUCTION

Follicular lymphoma (FL) is the most common subtype of indolent lymphomas and accounts for 35% of all Non-Hodgkin's Lymphomas in the United States. In approximately 85% of FL patients, the lymphoma cells harbor the pathognomonic translocation t(14;18)(q32;q21), leading to overexpression of the BCL2 oncogene, an essential step in the pathogenesis of FL.
FL is a heterogeneous disease with variable biologic behavior and clinical course. The Follicular Lymphoma International Prognostic Index or FLIPI score is the most commonly used prognostic tool and has been revised and validated in the rituximab era (FLIPI-2). Notwithstanding the prognostic classification, treatment of follicular lymphoma is guided primarily by the extent of disease involvement. Advanced stage FL (stage III/IV) is considered an incurable but indolent disease for which the decision to treat is based on several well-established clinical criteria. In contrast, treatment of limited stage FL (stage I and contiguous stage II) has the potential to result in long-term disease-free survival; however, the treatment modality of choice is not well defined. Several consensus-based practice guidelines recommend radiation therapy (RT) as the preferred treatment for stage I follicular lymphoma, which represents approximately 25% of the presenting cases of FL. However, the optimal radiation dose and field size have not been definitively determined to date. An analysis of the National Lymphocare Study, a large prospective cohort study of patients with FL, revealed that only 27% of the patients with early stage disease were treated with RT, while others were only observed, received single-agent rituximab, or a combination of rituximab with chemotherapy with or without subsequent radiotherapy. These practices likely reflect uncertainty over the most beneficial initial treatment strategy coupled with concern about the short- and long-term treatment related toxicities of RT and/or chemoimmunotherapy. Evidence suggests that the site of involvement is associated with outcome in several Non-Hodgkin's lymphomas (NHLs). [Oncotarget, 2017, Vol. 8, No. 45, pp. 78410-78418]
Therefore, identifying subsets of patients with pathobiologically different tumor types could help tailor the initial therapy employed in managing limited stage follicular lymphoma. The objective of this study was to assess how the primary site of involvement influences prognosis and can inform therapy in stage I follicular lymphoma.

Patient characteristics

We identified 14,059 adult cases of histologically confirmed stage I follicular lymphoma from 1983-2011, of whom 9,865 patients met eligibility criteria (Figure 1). An extranodal primary site was noted in 2,520 patients (25.5%). Patient characteristics are outlined in Table 1. Briefly, of the patients analyzed, 4,749 (48%) were male, 8,833 (90%) were white, and 5,459 (55%) were older than age 60. The median age in our series was similar between patients presenting with primary LN disease versus extranodal disease (61.5 vs. 61.8 years). The most common extranodal primary sites were the integumentary system (31.5%), followed by the GI tract (25%), head & neck (21.9%) and breast (4.8%; Figure 2).

Survival analysis

We performed a survival analysis comparing each primary extranodal site of disease with LN primary disease. On univariable analysis, integumentary disease was associated with better overall survival (p = 0.001, HR 0.74, 95% CI 0.63-0.86) when compared to a nodal primary site. Primary disease of the respiratory system was associated with worse OS (p = 0.037, HR 1.68, 95% CI 1.18-2.4), as was primary disease of the muscle & connective tissue system (p = 0.003, HR 2.02, CI 1.37-3.0; Table 2). On multivariable analysis, integumentary disease remained associated with better OS (p = 0.001, HR 0.77, CI 0.66-0.9), while primary disease of the nervous system (p = 0.01, HR 2.4, CI 1.72-3.38) and muscle & connective tissue system (p = 0.001, HR 2.14, CI 1.44-3.18) were associated with worse OS.
Primary respiratory site was no longer significantly associated with survival on multivariate analysis (p = 0.66, HR 1.39, CI 0.977-1.986; see also Table 2 and Figure 3). The other primary sites were not significantly associated with differential OS compared to the LN primary site. Patients treated in the pre-rituximab era had worse OS on multivariate analysis than those treated in the rituximab era (p < 0.001, HR 1.59, CI 1.43-1.73). Female sex was associated with better OS (p < 0.001, HR 0.76, CI 0.71-0.81); increased age at diagnosis was associated with decreased survival, and for every one-year increment in age, the risk of death increased by 7.1% (p < 0.001, HR 1.071, CI 1.07-1.08). Patients who underwent treatment for their stage I FL with either surgery (p < 0.001, HR 1.58, CI 1.46-1.71) or radiation (p < 0.001, HR 1.36, CI 1.26-1.47) had better survival than those who did not receive these therapies. Race was not associated with OS.

DISCUSSION

In a large, nationally representative, retrospective cohort study of patients with stage I FL, we observed that specific sites of involvement are associated with better or worse survival after adjusting for age, sex, race and therapy. Although patients with FL have an excellent median OS approaching 20 years in the modern chemotherapy era, a majority of patients continue to relapse within 5 years of their initial treatment. The treatments they receive are also associated with second malignancies and organ dysfunction. Finding additional prognostic factors that supplement the FLIPI score will help refine it and risk-stratify patients as having good versus poor risk. This in turn will guide therapeutic decision-making that will improve the quality of life for good-risk patients and decrease the impact of overtreatment and the accompanying financial toxicity of cancer treatment. Our findings demonstrate that stage I FL of the integumentary system has a significantly better outcome than LN primary disease.
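The per-year hazard ratio for age quoted above compounds multiplicatively under a proportional-hazards model, so its effect over longer spans can be worked out directly. A small illustrative calculation (not part of the study's analysis):

```python
# The paper reports HR = 1.071 per one-year increase in age at diagnosis.
# Under proportional hazards, the ratio over k years is HR**k, so a
# ten-year age difference corresponds to roughly a doubling of hazard.
hr_per_year = 1.071

hr_10_years = hr_per_year ** 10
print(hr_10_years)
```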
Primary cutaneous follicular lymphoma (PCFL) has traditionally been associated with improved survival and response to therapy in prior studies. It is also designated as a specific category in the WHO classification. These findings suggest that stage I FL of the integumentary system is pathobiologically a different disease than LN primary disease. This may be due to the previously described rare expression of BCL-2 and a different gene signature in comparison to nodal follicular lymphoma. In comparison, stage I FL of the muscle & connective tissue system and nervous system has a significantly worse survival than LN primary disease, while other disease sites like the GI tract, head & neck, and respiratory system are not significantly associated with worse survival on multivariate analysis. Muscle & connective tissue disease, being anatomically closer to bone, may represent transformed FL, and this could explain the poor outcomes. Case series in the literature suggest that nervous system involvement by NHL carries a very poor prognosis, with overall survival approximated at 4 months. Although most cases of primary CNS lymphoma reported in the literature are diffuse large B cell lymphomas, a minority of cases have been pathologically confirmed to be follicular lymphoma. These cases retained the poor prognosis, similar to what we observed in our study. The integumentary system and GI tract were the most commonly involved extranodal sites of disease, as previously reported. Otherwise, only limited data exist on the prognostic significance of non-cutaneous extranodal sites of FL, which limits deriving definite conclusions on their impact on survival. Multiple epidemiological studies have indicated that NHL characteristics, incidence and survival rates are influenced by race. This is less clear for FL, probably due to the long indolent course of the disease and good overall survival [18].
In our study we found no association of race with overall survival for stage I FL. FL is much more common in Whites than in other races. (Table note: CT, cardiothoracic; GI, gastrointestinal. *Cases not included here are those that have a known but unclassified primary site and the reticuloendothelial system as a primary site.) We similarly found a much smaller representation of Black and other races (10%) in comparison to Whites (90%), which may further limit statistical power to detect small survival differences based on race. Older age and male sex predicted for inferior patient survival in our study, as previously reported. The use of radiation therapy in early stage FL has been associated with improved OS after correcting for socio-demographic, tumor and treatment factors. Involved-field RT is a commonly used treatment modality for early-stage FL. Rituximab has also emerged as an efficient systemic therapy, though clinical data are limited regarding its impact in early stage FL. Although radiation therapy is considered to be better tolerated than chemoimmunotherapy, its use in early stage FL continues to decline in favor of alternative treatment strategies of single agent chemotherapy or observation. We found a beneficial effect on OS when patients were initially treated with radiation therapy for their stage I FL. Our observation is in contrast to the general paradigm that the watch & wait approach is equivalent to the early treatment approach, and supports findings from a growing body of literature that has shown improvement in quality of life indices as well as improvement in OS with the early treatment approach, making a case for carefully selecting the patient population that can benefit from this strategy. The SEER database represents 28% of the United States population and provides a real world, multi-ethnic setting to further study the association between specific sites of involvement by follicular lymphoma, its clinical characteristics and outcomes.
Our study is the largest to date to address this question for stage I follicular lymphoma. However, a limitation of the SEER database is its lack of information about chemotherapy, even though chemoimmunotherapy is a major modality of treatment for FL. We partially addressed this issue by analyzing survival in the pre-rituximab and rituximab eras and, in keeping with previously reported literature, we found an improved survival in the rituximab era. Information about staging modalities is also limited. Staging patients accurately is the cornerstone of distinguishing early stage disease from disseminated disease. FDG-PET upstages a significant proportion of patients compared to conventional CT scans and aids in patients receiving the appropriate stage-specific therapy, and it might not have been available for the majority of patients included in this analysis. It is also unclear how many patients underwent a bone marrow biopsy as part of their staging work up. Incorporating information about staging modalities used in future studies will prove useful, as the Lymphocare study reported that adequately staged patients had a better outcome regardless of the modality of treatment selected. Another drawback is the lack of central pathologic confirmation. Studies including patients diagnosed before 2001 had codes from earlier ICD-O versions that were converted to ICD-O-3 and have higher proportions of unclassified (e.g., lymphoma not otherwise specified) cases. However, Clarke et al. showed an 89% and 84% agreement between computer-converted ICD-O-2 codes to ICD-O-3 codes and registry-assigned codes for follicular lymphoma cases diagnosed in the 1988-1994 and 1998-2000 SEER periods, respectively.

Patients

We obtained data from the Surveillance, Epidemiology & End Results (SEER) program. SEER collects cancer incidence, treatment, and survival information from 18 geographic areas in the United States, representing 28% of the entire US population.
We used direct case listings extracted by SEER*Stat software version 8.1.5, released March 31, 2014. This report includes all FL patients in the SEER database between 1/1/83 and 12/31/11.

Measurements

We identified patients with a diagnosis of FL using the International Classification of Disease for Oncology, 3rd edition (ICD-O-3), histology codes 9690, 9691, 9695, and 9698, until the latest follow-up recorded in the SEER submission. These codes have been validated in previous studies. SEER identifies cases as follicular lymphoma based on several parameters that include histopathology, FISH testing, genetic testing for the BCL2 gene rearrangement, the translocation t(14;18)(q32;q21), and immunophenotyping. Our inclusion criteria were stage I FL patients, age > 18 years, patients with active follow-up, and those with a single primary tumor. We excluded patients with a diagnosis established by autopsy and/or death certificate only, patients for whom the diagnosis of FL was a second or subsequent primary, and patients with an unknown primary site. Primary sites were classified by organ or anatomic site into 1) head and neck, 2) gastrointestinal (GI) tract, 3) pulmonary system, 4) thymus, mediastinum, and heart, 5) muscle & connective tissue, 6) integumentary system, 7) nervous system, 8) breast tissue, 9) genitourinary system, 10) endocrine system, and 11) lymphatic system. PCFL and stage I FL of the integumentary system represent the same entity; however, prior to 2008, PCFL was not recognized and as such was not reported as a separate entity. Hence, for ease of understanding, we refer to this category as the integumentary system throughout the paper. We excluded the category of blood and reticuloendothelial system, as this includes disease in the bone marrow, which cannot be easily distinguished from more advanced stages of disease.
We subsequently defined a pre-rituximab era and a rituximab era based on the year of FDA approval of rituximab for the treatment of FL in the US, to indirectly estimate the effect of rituximab on survival.

Statistical analysis

We calculated OS as time in months from the date of diagnosis to the date of death, the date last known to be alive, or the date of study cutoff. We performed descriptive statistics with Pearson's chi-square test to analyze categorical variables. For continuous variables we used Mann-Whitney or Student's t tests, depending on the normality of the distribution of the data. OS estimates were calculated using Kaplan-Meier survival analysis. Multivariable Cox proportional-hazards regression models were fitted to evaluate the prognostic impact of each extranodal site of involvement using the lymph node group as reference, after adjustment for known prognostic factors such as age, sex, race, surgery, and radiation. All tests were two-tailed, with the threshold for significance of the p value set at 0.05. The p-values were adjusted by the Bonferroni correction. All statistical analyses were performed using SAS, version 9.3.

CONCLUSION

In summary, primary site of disease is an important prognostic factor for patients with early stage FL, as demonstrated by our population-based study. Patients with stage I FL of the integumentary system had a significantly better outcome than primary nodal disease. Musculoskeletal and nervous system primary sites had a significantly worse survival than primary nodal sites. Furthermore, RT and surgery were associated with better survival than other treatment modalities, including expectant observation. This supports the hypothesis that a subset of patients with stage I FL may benefit from early and/or aggressive treatment in comparison to an observation-only approach.
It would be important to prospectively validate this finding in the current era of more accurate initial staging and more effective therapy that includes monoclonal CD20-directed antibodies. In addition to its prognostic significance, primary site may correlate with certain biological characteristics associated with disease behavior and pathogenesis. Going forward, it will be important to elucidate the different pathobiological characteristics that may be specific to the site of involvement by comprehensive genomic and mutational analysis. This might help identify pathways that can be therapeutically targeted with novel agents with low toxicity.

CONFLICTS OF INTEREST

The authors have declared no conflicts of interest.
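The Kaplan-Meier survival analysis used for the OS estimates in this study can be illustrated with a minimal, self-contained estimator. The times and event flags below are hypothetical, not SEER data; the function itself is a standard product-limit sketch, not the authors' SAS code.

```python
# Minimal Kaplan-Meier (product-limit) estimator.
# events: 1 = death observed, 0 = censored at that time.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each time where at least one event occurs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            s *= 1 - deaths / n_at_risk   # product-limit update
            curve.append((t, s))
        # everyone with this time (event or censored) leaves the risk set
        n_at_risk -= ties
        i += ties
    return curve

# Hypothetical follow-up times in months
times = [6, 7, 10, 15, 19, 25]
events = [1, 0, 1, 1, 0, 1]
curve = kaplan_meier(times, events)
print(curve)
```

Censored subjects (events of 0) reduce the risk set without stepping the survival curve down, which is the defining feature of the method used for the OS curves reported above.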
# @file
# Split a file into two pieces at the request offset.
#
# Copyright (c) 2021, Intel Corporation. All rights reserved.<BR>
#
# SPDX-License-Identifier: BSD-2-Clause-Patent
#

##
# Import Modules
#
import argparse
import os
import io
import shutil
import logging
import sys
import tempfile

parser = argparse.ArgumentParser(description='''
SplitFile creates two Binary files either in the same directory as the current
working directory or in the specified directory.
''')
parser.add_argument("-f", "--filename", dest="inputfile", required=True,
                    help="The input file to split tool.")
parser.add_argument("-s", "--split", dest="position", required=True,
                    help="The number of bytes in the first file. The valid format are HEX, Decimal and Decimal[KMG].")
parser.add_argument("-p", "--prefix", dest="output",
                    help="The output folder.")
parser.add_argument("-o", "--firstfile", help="The first file name")
parser.add_argument("-t", "--secondfile", help="The second file name")
parser.add_argument("--version", action="version", version='%(prog)s Version 2.0',
                    help="Show the program version and exit.")
group = parser.add_mutually_exclusive_group()
group.add_argument("-v", "--verbose", action="store_true",
                   help="Print debug information.")
group.add_argument("-q", "--quiet", action="store_true",
                   help="Disable all messages except fatal errors")

SizeDict = {
    "K": 1024,
    "M": 1024 * 1024,
    "G": 1024 * 1024 * 1024
}


def GetPositionValue(position):
    '''
    Parse the string of the argument position and return a decimal number.
    The valid position formats are
    1. HEX e.g. 0x1000 or 0X1000
    2. Decimal e.g. 100
    3. Decimal[KMG] e.g. 100K or 100M or 100G or 100k or 100m or 100g
    '''
    logger = logging.getLogger('Split')
    PosVal = 0
    header = position[:2].upper()
    tailer = position[-1].upper()
    try:
        if tailer in SizeDict:
            PosVal = int(position[:-1]) * SizeDict[tailer]
        else:
            if header == "0X":
                PosVal = int(position, 16)
            else:
                PosVal = int(position)
    except Exception as e:
        logger.error(
            "The parameter %s format is incorrect. The valid format is HEX, Decimal and Decimal[KMG]." % position)
        raise e
    return PosVal


def getFileSize(filename):
    '''
    Read the input file and return the file size.
    '''
    logger = logging.getLogger('Split')
    length = 0
    try:
        with open(filename, "rb") as fin:
            fin.seek(0, io.SEEK_END)
            length = fin.tell()
    except Exception as e:
        logger.error("Access file failed: %s", filename)
        raise e
    return length


def getoutputfileabs(inputfile, prefix, outputfile, index):
    inputfile = os.path.abspath(inputfile)
    if outputfile is None:
        if prefix is None:
            outputfileabs = os.path.join(
                os.path.dirname(inputfile),
                "{}{}".format(os.path.basename(inputfile), index))
        else:
            if os.path.isabs(prefix):
                outputfileabs = os.path.join(
                    prefix, "{}{}".format(os.path.basename(inputfile), index))
            else:
                outputfileabs = os.path.join(
                    os.getcwd(), prefix,
                    "{}{}".format(os.path.basename(inputfile), index))
    elif not os.path.isabs(outputfile):
        if prefix is None:
            outputfileabs = os.path.join(os.getcwd(), outputfile)
        else:
            if os.path.isabs(prefix):
                outputfileabs = os.path.join(prefix, outputfile)
            else:
                outputfileabs = os.path.join(os.getcwd(), prefix, outputfile)
    else:
        outputfileabs = outputfile
    return outputfileabs


def splitFile(inputfile, position, outputdir=None, outputfile1=None, outputfile2=None):
    '''
    Split the inputfile into outputfile1 and outputfile2 from the position.
    '''
    logger = logging.getLogger('Split')

    if not os.path.exists(inputfile):
        logger.error("File Not Found: %s" % inputfile)
        raise Exception

    if outputfile1 and outputfile2 and outputfile1 == outputfile2:
        logger.error(
            "The firstfile and the secondfile can't be the same: %s" % outputfile1)
        raise Exception

    # Create dir for the output files
    try:
        outputfile1 = getoutputfileabs(inputfile, outputdir, outputfile1, 1)
        outputfolder = os.path.dirname(outputfile1)
        if not os.path.exists(outputfolder):
            os.makedirs(outputfolder)

        outputfile2 = getoutputfileabs(inputfile, outputdir, outputfile2, 2)
        outputfolder = os.path.dirname(outputfile2)
        if not os.path.exists(outputfolder):
            os.makedirs(outputfolder)
    except Exception as e:
        logger.error("Can't make dir: %s" % outputfolder)
        raise e

    if position <= 0:
        # Everything goes to the second file; the first file is empty.
        if outputfile2 != os.path.abspath(inputfile):
            shutil.copy2(os.path.abspath(inputfile), outputfile2)
        with open(outputfile1, "wb") as fout:
            fout.write(b'')
    else:
        inputfilesize = getFileSize(inputfile)
        if position >= inputfilesize:
            # Everything goes to the first file; the second file is empty.
            if outputfile1 != os.path.abspath(inputfile):
                shutil.copy2(os.path.abspath(inputfile), outputfile1)
            with open(outputfile2, "wb") as fout:
                fout.write(b'')
        else:
            # Write both halves to a temp dir first, then move into place.
            try:
                tempdir = tempfile.mkdtemp()
                tempfile1 = os.path.join(tempdir, "file1.bin")
                tempfile2 = os.path.join(tempdir, "file2.bin")
                with open(inputfile, "rb") as fin:
                    content1 = fin.read(position)
                    with open(tempfile1, "wb") as fout1:
                        fout1.write(content1)
                    content2 = fin.read(inputfilesize - position)
                    with open(tempfile2, "wb") as fout2:
                        fout2.write(content2)
                shutil.copy2(tempfile1, outputfile1)
                shutil.copy2(tempfile2, outputfile2)
            except Exception as e:
                logger.error("Split file failed")
                raise e
            finally:
                if os.path.exists(tempdir):
                    shutil.rmtree(tempdir)


def main():
    args = parser.parse_args()
    status = 0
    logger = logging.getLogger('Split')
    if args.quiet:
        logger.setLevel(logging.CRITICAL)
    if args.verbose:
        logger.setLevel(logging.DEBUG)
    lh = logging.StreamHandler(sys.stdout)
    lf = logging.Formatter("%(levelname)-8s: %(message)s")
    lh.setFormatter(lf)
    logger.addHandler(lh)
    try:
        position = GetPositionValue(args.position)
        splitFile(args.inputfile, position, args.output,
                  args.firstfile, args.secondfile)
    except Exception:
        status = 1
    return status


if __name__ == "__main__":
    sys.exit(main())
// Check whether every internal vertex of the rooted tree has at
// least three leaf children.
#include <iostream>
#include <vector>
using namespace std;

const int N = 1111;
int f[N];            // f[i] = number of leaf children of vertex i
vector<int> G[N];    // children lists; vertex 1 is the root

int main() {
    int n, i, j;
    cin >> n;
    // vertices 2..n arrive as a list of their parents
    for (i = 1; i <= n - 1; ++i) {
        cin >> j;
        G[j].push_back(i + 1);
    }
    // count the leaf children of every vertex
    for (i = 1; i <= n; ++i) {
        for (j = 0; j < (int)G[i].size(); ++j) {
            if (G[G[i][j]].empty()) f[i]++;
        }
    }
    // every non-leaf vertex must have at least 3 leaf children
    for (i = 1; i <= n; ++i) {
        if (!G[i].empty() && f[i] < 3) {
            cout << "No" << endl;
            return 0;
        }
    }
    cout << "Yes" << endl;
    return 0;
}
MILWAUKEE — Milwaukee police are seeking a suspect in connection with an armed robbery at the Metro Market on Van Buren near Juneau. It happened on Monday, Oct. 29 around 8:15 a.m. Police said the suspect entered the Metro Market and displayed a black handgun that was in his waistband — demanding money from the bookkeeper at the central counter. The suspect was able to get away with about $200, fleeing on foot westbound through the parking structure. The suspect has been described as a male, black, standing 6’2″ tall with a thin build. He was wearing a black hooded sweatshirt with the hood up, a black mask that covered his mouth only, a black coat over the sweatshirt and possibly, black pants. Anyone with information is asked to please contact MPD.