#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import logging

from django.core.mail import mail_admins

from ralph.cmdb import models as db

logger = logging.getLogger(__name__)


class BaseImporter(object):
    """Base class for importer plugins."""

    notify_mail = False
    matched = 0
    not_matched = 0

    def handle_duplicate_name(self, name):
        pass

    def do_summary(self):
        message = '''
Report:<br>
<hr>
statistics: items matched: %(matched)d | items not matched %(not_matched)d<br>
''' % dict(matched=self.matched, not_matched=self.not_matched)
        mail_admins(
            'Integration statistics',
            message,
            fail_silently=True,
            html_message=message,  # HTML alternative body
        )

    def handle_integration_error(self, classname, methodname, e):
        logger.error(
            'Integration error. %(classname)s.%(methodname)s: %(error)s' % dict(
                classname=classname,
                methodname=methodname,
                error=e,
            )
        )
        if self.notify_mail:
            message = '''
Errors in %(class_name)s.%(method_name)s
Error message: %(error_message)s <br>
''' % dict(
                class_name=classname,
                method_name=methodname,
                error_message=unicode(e),  # Python 2 codebase
            )
            mail_admins(
                'Integration errors',
                message,
                fail_silently=True,
                html_message=message,  # HTML alternative body
            )

    def get_ci_by_name(self, name, type=None):
        # Return the unique CI with the given name, or None when the name
        # is missing or ambiguous (duplicates are reported via the hook above).
        cis = db.CI.objects.filter(name=name).all()
        if len(cis) == 1:
            return cis[0]
        elif len(cis) > 1:
            self.handle_duplicate_name(name)
            return None
        return None
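For context, here is a minimal, hypothetical sketch of how a plugin might build on BaseImporter; the AssetImporter name and the shape of the rows payload are illustrative assumptions, not code from the ralph repository.

# Hypothetical plugin subclass; AssetImporter and the `rows` dicts are
# illustrative assumptions, not part of ralph itself.
class AssetImporter(BaseImporter):
    notify_mail = True

    def import_rows(self, rows):
        # Count a row as matched when exactly one CI carries the row's name.
        for row in rows:
            ci = self.get_ci_by_name(row['name'])
            if ci is not None:
                self.matched += 1
            else:
                self.not_matched += 1
        # Email the matched/not-matched totals to the admins.
        self.do_summary()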
/**
 * Prints a state of a whole list of Alters, prefixed by s, only when the
 * logging verbosity is at least debug.
 */
void Alter::debug_print(const utils::Str &s, const utils::List<Alter> &la) {
    if (utils::logger::log_level >= utils::logger::LogLevel::LOG_DEBUG) {
        print(s, la);
    }
}
Laser-Milled Microslits Improve the Bonding Strength of Acrylic Resin to Zirconia Ceramics

Heightened aesthetic considerations in modern dentistry have generated increased interest in metal-free zirconia-supported dentures. The lifespan of the denture is largely determined by the strength of adhesion between zirconia and the acrylic resin. Thus, the effect on shear bond strength (SBS) was investigated by using an acrylic resin on two types of zirconia ceramics with differently sized microslits. Micromechanical reticular retention was created on the zirconia surface as the novel treatment (microslits (MS)), and air-abrasion was used as the control (CON). All samples were primed prior to acrylic resin polymerization. After the resin was cured, the SBS was tested. The obtained data were analyzed by multivariate analysis of variance (α = 0.05). After the SBS test, the interface failure modes were observed by scanning electron microscopy. The MS exhibited significantly higher bond strength after thermal cycles (p < 0.05) than the CON. Nevertheless, statistical comparisons showed no significant effect of the differently sized microslits on SBS (p > 0.05). Additionally, MS (before thermal cycles: 34.8 ± 3.6 to 35.7 ± 4.0 MPa; after thermal cycles: 26.9 ± 3.1 to 32.6 ± 3.3 MPa) demonstrated greater SBS and bonding durability than CON (before thermal cycles: 17.3 ± 4.7 to 17.9 ± 5.8 MPa; after thermal cycles: 1.0 ± 0.3 to 1.7 ± 1.1 MPa), confirming that micromechanical retention with laser-milled microslits was effective at enhancing the bonding strength and durability between the acrylic resin and zirconia. Polycrystalline zirconia-based ceramics are a newly accessible material for improving removable prosthodontic treatment, as their bond strength with acrylic resin can be greatly enhanced by laser milling.

Introduction

Metal alloys, such as cobalt-chromium, titanium, and gold, are widely used in removable prosthodontic treatment as materials for the frameworks of removable partial dentures (RPDs). These metal alloys possess favorable characteristics, such as excellent mechanical strength and thermal conductivity. However, the demand for metal-free treatments has increased due to various factors, such as escalating precious metal prices, concerns regarding metal allergies, and an increased emphasis on aesthetics. Moreover, due to factors such as health, anatomy, psychology, and economics, many patients are not ideal candidates for implant therapy. Particularly in geriatric dentistry, invasive treatment may cause a physiological burden, which can potentially result in postoperative complications. Therefore, RPD treatment may reduce the physical and mental burden on elderly patients, before and after treatment, while rehabilitating oral function and improving psychological and social adaptation. Enhanced technology, such as computer-aided design and computer-aided manufacturing (CAD/CAM), has been successfully integrated into dentistry from the engineering sciences. Polycrystalline zirconia-based ceramics with superior mechanical properties, which can only be processed by CAD/CAM, have been widely utilized in clinical dentistry. Zirconia ceramics are attracting attention as biomedical materials because of their favorable characteristics, including excellent biocompatibility, high chemical stability, and mechanical strength comparable to that of metals.
Previous literature has reported that zirconia ceramics are potential alternative materials for connectors, clasps, and other components used in the framework of RPDs due to their excellent bending properties and fatigue resistance. Moreover, zirconia-based frameworks are light in weight, aesthetically pleasing, and do not present complications such as metal allergies. However, favorable mechanical properties are not the only consideration when contemplating the application of zirconia ceramics to RPD frameworks, since the adhesion of zirconia ceramics to acrylic resin (denture-base resins) is also a critical factor. Failure between materials is mainly due to the biofilm that accumulates on the dental materials, and firm bonding promotes the long-term use of dental treatments, such as RPDs, as it reduces the likelihood of fractures along the margin edges, and the odor and discoloration that may be caused by saliva intrusion into the margin gap. Primers with functional monomers such as 4-META (4-methacryloyloxyethyl trimellitate anhydride) or MDP (10-methacryloyloxydecyl dihydrogen phosphate), and tribochemical treatment, have been proposed for chemical preprocessing. Air-abrasion (alumina blasting) treatment has been generally endorsed as the mechanical procedure for creating irregularities on the adhesive interface of the materials. Thus, a combination of chemical and mechanical preprocessing was confirmed to improve the bonding between zirconia ceramics and other substrate materials. However, previous work indicates that, on account of the high hardness of zirconia ceramics, the superficial irregularities formed by conventional pretreatment methods are limited.

In the last decade, with the advancement of laser techniques, the use of lasers as an alternative pretreatment method for dental procedures has become prevalent, such as applying laser treatment to dental materials to improve the bonding properties between materials and resin-based luting agents, or between materials and porcelain. This has led to the attempted use of lasers as a surface treatment method for zirconia ceramics, in order to enhance the bonding strength and durability of a number of dental materials by altering their surface microstructures. Some studies have reported that lasers improve the bond strengths of zirconia ceramics and other substrate materials; however, other studies have shown contradictory results. Furthermore, studies evaluating the bonding between zirconia ceramics and acrylic resin are limited; there is only one related published study on the effect of microslit retention on the bond strength of zirconia to dental materials. Therefore, research on laser-milled microslits is critical for improving RPDs and will ultimately serve the future of clinical dentistry. Herein, we aim to investigate the effects of micromechanical retention (microslits of different sizes) created by lasers on the durability and bond strength of two different zirconia ceramics bonded to an acrylic resin. The working hypotheses are that laser pretreatment should increase the shear bond strength (SBS) and durability of the zirconia ceramic-acrylic resin bond; that the different sizes of microslits (laser grooves) will alter the SBS; and that the SBS will remain unchanged after artificial aging via thermal cycling.

Materials and Methods

The details of the materials used in this study are provided in Table 1.
The experiment was conducted using the following procedure:

Zirconia Specimen Preparation

Two types of zirconia ceramics (Y-TZP and Ce-TZP/A) were processed with the CAD/CAM system and then sintered in a high-temperature furnace. The temperature was raised to 1450 °C and maintained for 2 h. The zirconia ceramics were then fabricated into disk-shaped specimens with a diameter of 10 mm and a thickness of 2.5 mm (Figure 1a).

Surface Treatment

Eighty disk-shaped specimens were prepared for each zirconia ceramic. All specimens were then ground flat with 600-grit silicon carbide abrasive paper, followed by steam cleaning and air-drying. Subsequently, specimens were randomly distributed into four groups according to the surface treatments (n = 20 per group for each zirconia ceramic).

Shear Bond Strength

A brass mold (6 mm inner diameter, 8 mm outer diameter, and 2 mm height) was positioned on the surface of each zirconia ceramic specimen to define the bonding area (Figure 1a) and was fixed by a piece of double-sided tape. Then, a ceramic primer was applied to the zirconia ceramic surfaces of all disks. Next, an acrylic resin was mixed at the liquid/powder ratio recommended by the manufacturer and poured into the brass mold. The specimens were then polymerized at 55 °C and 0.2 MPa for 30 min using a pressure vessel (Palamat practice ELT; Kulzer Japan Co., Ltd., Tokyo, Japan). One hour after the bonding procedure, the specimens were immersed in water at 37 °C for 24 h. This state was defined as a "zero-thermal cycle." The 24 h shear bond strength of half the number of specimens in each group (n = 10) was tested at the zero-thermal cycle. The remaining specimens (n = 10) were placed in a thermal cycling apparatus (Thermal Cycler; Nissin Seiki Co., Ltd., Tokyo, Japan) and cycled between a cold bath (4 °C) and a hot bath (60 °C) with a one-minute dwell time per bath for 20,000 cycles. The specimens immersed in water for 24 h (at the zero-thermal cycle) and those after 20,000 thermal cycles were subjected to SBS testing. The specimens were attached to a shearing jig (Figure 2) and the SBS test was conducted using a universal testing machine (Autograph AGS-J; Shimadzu Corp., Kyoto, Japan). A shear force was applied to the adhesive interface at a crosshead speed of 0.5 mm/min until fracture occurred. The load at which the zirconia ceramic and the acrylic resin debonded was then measured. After the SBS test, the fractured interfaces were observed with an optical microscope (S300II; Inoue Attachment Co. Ltd., Tokyo, Japan) at 8× magnification to determine the mode of failure. The failure modes were classified as adhesive failure (no residual resin on the zirconia ceramic surface; clean delamination at the zirconia-resin interface), cohesive failure (zirconia ceramic and resin remain firmly bonded; cracks inside the resin cause debonding), and mixed failure (a combination of cohesive and adhesive failure). The fractured surfaces were also observed by scanning electron microscopy (SEM) at 500× magnification (VE-8800; Keyence Corp., Osaka, Japan), at a high vacuum level and an operating voltage of 2 kV.
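As a quick sanity check on the reported stress values, SBS is simply the failure load divided by the bonded area defined by the 6 mm mold. A small sketch of the arithmetic follows; the 990 N failure load is an illustrative assumption, not a value from the study.

import math

# SBS (MPa) = failure load (N) / bonded area (mm^2); 1 N/mm^2 == 1 MPa.
diameter_mm = 6.0
area_mm2 = math.pi * (diameter_mm / 2) ** 2  # ~28.27 mm^2
load_n = 990.0                               # hypothetical failure load
print(round(load_n / area_mm2, 1))           # ~35.0 MPa, in the range reported for MS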
Statistical Analysis

The data collected for all tests were calculated, and the homogeneity of variance was first analyzed using Levene's test. Data comparisons were conducted using one-way analysis of variance (ANOVA) with post hoc Scheffé and Tukey's HSD (honestly significant difference) tests. All statistical analyses were performed using IBM SPSS statistical software (SPSS version 24; IBM, Armonk, NY, USA), and the level of statistical significance was set at 5%.

Results

Table 2 illustrates the results of SBS testing of Y-TZP. Irrespective of artificial aging, CON had a significantly lower SBS than MS. Prior to artificial aging, the SBSs were 34.8 ± 3.6 to 35.7 ± 4.0 MPa for MS, and 17.9 ± 5.8 MPa for CON. After artificial aging, there was a 94.3% reduction in SBS values for CON, with a significant decline from 17.9 ± 5.8 MPa to 1.0 ± 0.3 MPa (p < 0.05). However, in MS there was no dramatic reduction (<25%) in SBS values, irrespective of the artificial aging process or the laser treatment condition (50 MS, 75 MS, or 100 MS).

The results of SBS testing of Ce-TZP/A follow a similar trend to those of Y-TZP, as seen in Table 3. The SBS of CON was significantly lower than that of MS, regardless of whether artificial aging was performed. Before artificial aging, the SBSs were 34.9 ± 4.3 to 35.2 ± 4.0 MPa for MS, and 17.3 ± 4.7 MPa for CON. After aging, the SBS values of CON significantly decreased to 1.7 ± 1.1 MPa, corresponding to a reduction of 90.2% (p < 0.05). However, no dramatic reduction (<9%) in SBS values occurred in MS, regardless of the laser treatment condition (50 MS, 75 MS, or 100 MS) or whether artificial aging was performed.
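The statistical pipeline described in the Statistical Analysis subsection above (Levene's test, one-way ANOVA, post hoc comparisons) is straightforward to reproduce; the sketch below uses scipy with made-up SBS samples standing in for the per-group measurements (tukey_hsd requires SciPy 1.8 or later).

from scipy import stats

# Illustrative SBS values (MPa) for three hypothetical groups (n = 10 each);
# the numbers are made up and are not data from the study.
con = [17.1, 18.2, 16.9, 17.8, 18.5, 17.3, 18.0, 17.6, 18.9, 17.4]
ms50 = [34.2, 35.1, 36.0, 34.8, 35.5, 34.9, 35.7, 34.4, 35.2, 35.9]
ms75 = [35.0, 35.8, 34.6, 35.3, 36.1, 34.7, 35.4, 35.6, 34.9, 35.2]

print(stats.levene(con, ms50, ms75))     # homogeneity of variance
print(stats.f_oneway(con, ms50, ms75))   # one-way ANOVA
print(stats.tukey_hsd(con, ms50, ms75))  # post hoc pairwise comparisons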
From the representative SEM images of the debonded zirconia-resin interface (Figures 4 and 5), it was observed that the resin penetrated the microslits of MS. As a result, analysis of the debonded interface visibly displayed resin residue on the zirconia ceramic surface. In contrast, no residual resin was observed for CON.

Discussion

The shear bond strengths of the acrylic resin and two types of zirconia ceramic (Y-TZP and Ce-TZP/A) were evaluated after a traditional surface pretreatment, air-abrasion with 50 µm Al₂O₃ particles at 0.3 MPa (CON), and a novel pretreatment using a laser to create microslits (MS). Although studies on the bonding strength between acrylic resin and metal alloys postulate that the bonding strength is sufficient for clinical dentistry, the durability after thermal cycling, as a means of artificial aging, requires discussion. The results confirmed that the SBS values of MS were higher than those of CON, regardless of artificial aging; consequently, the first hypothesis was accepted. A plausible explanation is that the microslits created by the laser provide a higher mechanical interlocking force, owing to deeper irregularities than those produced by air-abrasion. Yamaguchi et al. reported that rougher surfaces have more extensive contact areas available for bonding. The current study did not measure surface roughness; however, other published papers have reported that the surface roughness after air-abrasion with Al₂O₃ (under identical conditions) was approximately 0.23 µm for Y-TZP and 0.24 µm for Ce-TZP/A. Therefore, although air-abrasion produces an uneven, rough surface, CON creates shallower surface defects than the microslits (50 MS, 75 MS, or 100 MS). Hence, the contact area of CON was smaller than that of MS, resulting in a lower SBS for CON than for MS. The findings of this study confirmed that laser treatment is an effective mechanical process that can improve the bonding strength of acrylic resin to zirconia ceramics. Compared with traditional mechanical surface treatment methods, laser treatment is suggested to be more controllable and stable. Lasers have been widely used in medicine over the past decade. In dentistry, the erbium-doped yttrium-aluminum-garnet (Er:YAG) laser is the most frequently used laser system. Based on prior related experiments, it was determined that although Er:YAG lasers can improve the surface roughness of zirconia ceramics, they result in decreased bond strength. Usumez et al. concluded that neodymium-doped yttrium-aluminum-garnet (Nd:YAG) lasers cause microcracks in zirconia ceramics and might reduce the resistance and longevity of the material. However, computer numerical control (CNC) lasers are widely used in engineering for tasks such as the manufacturing of molds, lettering, and the production of microscopic surface textures on various materials. Therefore, we utilized a CNC laser with neodymium-doped yttrium orthovanadate (Nd:YVO₄) to make microslits on the surface of zirconia ceramics. The reproducibility of the microslit topography was improved by adjusting the laser parameters. Three isometric grid microslit patterns (Figure 1) of different sizes (50, 75, and 100 µm) were created on the surface of the zirconia ceramics in order to analyze the effect of microslit size on bonding strength.
The experimental results show that although the size of the microslits influences the contact area, with 50 MS exhibiting the largest bonding contact area (normalizing 100 MS to 1, 75 MS is 1.21 and 50 MS is 1.27), the differences in size did not affect the SBS values. Therefore, the second hypothesis is rejected. The proposed reason for this trend is that although the surface areas of the three microslit patterns vary slightly, the influence of microslit size on the bonding strength is imperceptible owing to the similarity in microslit topography. Previous research on the effect of different air-abrasion conditions on SBS indicated similar results: although increasing the particle size or the jet pressure of the air-abrasion caused a significant increase in surface roughness, there was no apparent effect on SBS or bonding durability. Therefore, it is possible to conclude that the bond strength would likely remain unchanged despite increasing the size of the microslits.

Previous studies have reported that when acrylic resin bonds to other materials, insufficient bonding strength at the adhesive surface causes microbreakage due to differences in the thermal expansion coefficients of the two substrates. During artificial aging, the physical properties of the acrylic resin itself decrease due to water absorption. Moreover, owing to the large difference in thermal expansion coefficient between the acrylic resin and zirconia ceramics, water penetrates the adhesive layer and the stress at the interface increases. Consequently, the bond durability declines and debonding occurs. The SBS values of CON after artificial aging decreased significantly (>90%) for both Y-TZP and Ce-TZP/A, while the SBS values of MS were not dramatically different for either substrate. The results supported the third hypothesis, which states that the SBS values remain unchanged after artificial aging. One possible explanation is that when the acrylic resin polymerized under pressure, the molecules polymerized toward the surface of the zirconia ceramics and became deeply interlocked in the microslits created by the laser (MS). Thus, even after artificial aging, in which the acrylic resin absorbed water and nonhomogeneous thermal expansion occurred between the two materials, the physical interlocking resisted peeling and debonding, thereby improving the bonding durability. All specimens in MS showed cohesive failure, which suggests that the bonding strength was greater than the strength of the acrylic resin itself, and thus fracture and failure occurred within the acrylic resin. Most CON specimens showed adhesive failure due to insufficient bond strength. The failure can be attributed to insufficient resin penetrating the undercuts created by the air-abrasion treatment. When shear force was applied, debonding occurred directly at the zirconia-resin interface, which was confirmed by the SEM images. The minimum acceptable SBS value at the interface of resin-based materials and a substrate is 5 MPa, as specified in ISO 10477. Additionally, others have suggested that the bonding strength should be greater than 5.9 to 10.0 MPa to adequately satisfy routine clinical use.
When acrylic resin bonds to a metal alloy (after priming and air-abrasion), SBS values reported in the literature after thermal cycling were 2.48 ± 0.66 MPa (cobalt-chromium alloy), 1.96 ± 0.78 MPa (titanium alloy), and 1.07 ± 0.74 MPa (gold alloy). In the case of acrylic resin bonded to zirconia ceramics, as shown by Iwaguro et al., even after priming and air-abrasion treatment, the SBS values obtained after thermocycling could not be evaluated (0 MPa). The results of the present study are in accordance with these reports (SBS values < 2 MPa) and suggest that the bond strength might not be sufficient for long-term use. Nevertheless, when the surface of the zirconia ceramic contains microslits (MS), all the SBS values of both Y-TZP and Ce-TZP/A before (>35.1 ± 2.1 MPa) and after (>26.9 ± 3.1 MPa) artificial aging exceed the SBS values (5-10 MPa) recommended in the literature. Laser pretreatment more than doubled the bond strength of acrylic resin to zirconia ceramics and prevented failure after artificial aging. Therefore, laser pretreatment is proposed as an alternative mechanical process because of its ability to effectively improve the bonding durability and strength between denture acrylic resin and zirconia ceramics in clinical dentistry. Additional research is required to determine the thickness of an RPD framework made of zirconia ceramics combined with microslits. Furthermore, the processing time and the power of the Nd:YVO₄ laser also need to be assessed in future studies. CNC lasers are not widely used in dentistry because of their size, so size reduction, such as a desktop CNC laser machine, is also an essential issue that requires further investigation. In addition, when a laser is applied as a surface treatment method, the high temperatures generated might cause microstructural changes in zirconia ceramics and could further affect the material properties; thus, the mechanisms of internal structural change should also be investigated in future studies. Regardless, the present study provides substantial evidence to support the application of Nd:YVO₄ lasers in dentistry to substantially improve the adhesive properties of acrylic resin bonded to zirconia ceramics.

Conclusions

The limited results of this in vitro study suggest that micromechanical microslits etched with an Nd:YVO₄ laser effectively improve the bonding strength and durability of acrylic resin bonded to zirconia ceramics. After artificial aging, the bond strength of the air-abrasion-treated zirconia ceramics (CON) decreased, but that of the laser-etched materials (MS) remained high. However, there was no difference in bond strength between the different microslit sizes (50, 75, and 100 µm).
#ifndef INSTRUMENTSET_H_INCLUDED
#define INSTRUMENTSET_H_INCLUDED

#include "../json/PicoJSONUtils.h"
#include "C700BRR.h"
#include "BRRPacker.h"

#include <cstdint>
#include <string>
#include <vector>

#define DUMMY_INST_WAV_NAME "*dmy*"

typedef struct _InstADSR {
    uint8_t A;
    uint8_t D;
    uint8_t Slv;
    uint8_t R;
} InstADSR;

typedef struct _InstDef {
    std::string filename;
    int srcn;
    InstADSR adsr;
    unsigned int fixPoint;
    int priority;
} InstDef;

typedef struct _WavSrc {
    std::string brrName;
    unsigned int attackOffset;
    unsigned int loopOffset;
} WavSrc;

typedef std::vector<C700BRR*> BRRPtrList;
typedef std::vector<InstDef> InstDefList;
typedef std::vector<WavSrc> SrcList;

typedef enum {
    INSTS_OK = 0,
    INSTS_ERR_BADDEF = -8,
    INSTS_ERR_DUPFIX = -9
} InstsResult;

class InstrumentSet {
public:
    InstrumentSet();
    virtual ~InstrumentSet();

    void setVerboseLevel(int lv);
    InstsResult load(const char* manifestPath, const char* baseDir);
    C700BRR* findBRR(const std::string& name) const;

    static void setDefaultADSR(InstADSR& outADSR);
    static uint8_t makeADregister(const InstADSR& inADSR);
    static uint8_t makeSRregister(const InstADSR& inADSR);

    void dumpPackedBRR();
    void exportBRR(ByteList& outBytes) const;
    void exportPackedSrcTable(ByteList& outBytes, unsigned int baseAddr);
    void exportPackedInstTable(ByteList& outBytes);

    float getBaseEq() const { return mBaseFq; }
    int getOriginOctave() const { return mOriginOctave; }
    unsigned int getFixPoint() const { return mFirstFixPoint; }

protected:
    int mVerboseLevel;
    float mBaseFq;
    int mOriginOctave;
    InstDefList mInstList;
    unsigned int mMaxInstIndex;
    unsigned int mFirstFixPoint;

    unsigned int findMaxInst(const picojson::object& parentObj);
    InstsResult readInstDefs(picojson::object& parentObj);
    static bool isInstDefValid(const picojson::object& def);
    InstsResult addInst(const picojson::object& src);
    void addDummyInst();
    bool loadBRRBodies(const std::string& baseDir);
    bool loadBRRIf(const std::string& name, const std::string& path, unsigned int fixPoint);
    void releaseAllBRRs();
    const WavSrc* findSrcWithName(const std::string& name, int* pOutIndex = NULL) const;
    void buildSrcTable();
    void dumpSrcTable();
    void writeInstSrcNumbers();
    void dumpInstTable();

    BRRPtrList mBRRPtrs;
    BRRPacker mBRRPacker;
    SrcList mSrcList;
};

#endif
import pandas as pd  # DataFrame.to_xarray() also requires xarray


def _default_cov_data():
    # Single-row table of default coverage attributes, returned as an
    # xarray Dataset indexed by the DataFrame's default integer index.
    default_data = {'material_id': [0],
                    'name': 'unassigned',
                    'user_option': 'A',
                    'user_text': 'Hello World!',
                    'texture': [1],
                    'red': [0],
                    'green': [0],
                    'blue': [0]}
    return pd.DataFrame(default_data).to_xarray()
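A quick usage sketch, assuming pandas and xarray are installed:

ds = _default_cov_data()
print(ds)                    # xarray.Dataset with a single 'index' dimension
print(ds['name'].values[0])  # 'unassigned'
df = ds.to_dataframe()       # round-trip back to a DataFrame if needed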
Microsoft has more than $50 billion in cash and liquid investments, but more than 80% of it is parked overseas -- mostly in Ireland -- for tax reasons. If Microsoft brought some of that cash home -- say, to pay a bigger dividend -- it would face a big tax bill. But if Microsoft uses that cash to buy non-U.S. companies, it never has to be repatriated. In fact, Steve Ballmer specifically mentioned that Skype is based in Luxembourg in the press conference announcing the takeover, and said that it was an appropriate use of the company's cash. Add to that Microsoft's recent multibillion-dollar deal with Nokia and its 2008 acquisition of Norway's FAST for $1.2 billion -- its last big buy before Skype -- and it's clear that Europe is on Microsoft's radar. So which other European companies might be a good use of the company's overseas assets?
FK778, a synthetic malononitrilamide. FK778 is a synthetic malononitrilamide (MNA) that has been demonstrated to have both immunosuppressive and anti-proliferative activities. The MNAs inhibit both T-cell and B-cell function by blocking de novo pyrimidine synthesis, through blockade of the pivotal mitochondrial enzyme dihydroorotate dehydrogenase (DHODH), and by the inhibition of tyrosine kinase activity. FK778 has been demonstrated to prevent acute allograft rejection in multiple experimental transplant models in rodents, dogs and primates, and to be effective in the rat model of chronic renal allograft rejection. In addition, FK778 has been shown to prevent vascular remodeling after mechanical intimal injury via a mechanism which may be related to tyrosine kinase inhibitory activity in vascular smooth muscle cells. Another intriguing activity of the MNA family is the ability to block replication of members of the Herpes virus family, with in vitro evidence of efficacy against cytomegalovirus (CMV) and polyoma virus, important pathogens in the transplant recipient. FK778 is currently being explored in a number of trials in solid organ transplant recipients.
/*! ******************************************************************************
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 ******************************************************************************/
package org.pentaho.di.cluster;

import com.google.common.base.Strings;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.pentaho.di.base.AbstractMeta;
import org.pentaho.di.core.Const;
import org.pentaho.di.core.logging.LogChannelInterface;
import org.pentaho.di.core.parameters.NamedParams;
import org.pentaho.di.core.parameters.UnknownParamException;
import org.pentaho.di.core.variables.VariableSpace;
import org.pentaho.di.job.JobMeta;
import org.pentaho.di.trans.TransMeta;
import org.pentaho.di.www.GetCacheStatusServlet;
import org.pentaho.di.www.SlaveServerJobStatus;
import org.pentaho.di.www.SlaveServerTransStatus;
import org.pentaho.di.www.WebResult;

import javax.servlet.http.HttpServletRequest;
import java.net.URLEncoder;
import java.util.AbstractMap;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

/**
 * Cache for three key types of resource: transformation, job and data source.
 *
 * @author <NAME>
 */
public final class ServerCache {
  public static final boolean RESOURCE_CACHE_DISABLED = "Y".equalsIgnoreCase(
      System.getProperty("KETTLE_RESOURCE_CACHE_DISABLED", "N"));
  public static final int RESOURCE_CACHE_SIZE =
      Integer.parseInt(System.getProperty("KETTLE_RESOURCE_CACHE_SIZE", "100"));
  public static final int RESOURCE_EXPIRATION_MINUTE =
      Integer.parseInt(System.getProperty("KETTLE_RESOURCE_EXPIRATION_MINUTE", "1800"));
  public static final String PARAM_ETL_JOB_ID = System.getProperty("KETTLE_JOB_ID_KEY", "ETL_CALLER");

  static final String KEY_ETL_CACHE_ID = System.getProperty("KETTLE_CACHE_ID_KEY", "CACHE_ID");
  static final String KEY_ETL_REQUEST_ID = System.getProperty("KETTLE_REQUEST_ID_KEY", "REQUEST_ID");

  // On master node, it's for name -> revision + md5; on slave server, it's name -> md5
  private static final Cache<String, String> resourceCache = CacheBuilder.newBuilder()
      .maximumSize(RESOURCE_CACHE_SIZE)
      .expireAfterAccess(RESOURCE_EXPIRATION_MINUTE, TimeUnit.MINUTES)
      .recordStats()
      .build();

  private static final String UNKNOWN_RESOURCE = "n/a";

  private static void logBasic(SlaveServer server, String message) {
    LogChannelInterface logger = server == null ? null : server.getLogChannel();
    if (logger != null) {
      logger.logBasic(message);
    }
  }

  private static String buildResourceName(AbstractMeta meta, Map<String, String> params, SlaveServer server) {
    StringBuilder sb = new StringBuilder();
    if (RESOURCE_CACHE_DISABLED) {
      return sb.toString();
    }

    // in case this is triggered by a Quartz Job
    String jobId = params == null ? null : params.get(PARAM_ETL_JOB_ID);
    if (Strings.isNullOrEmpty(jobId)) {
      if (meta != null) {
        sb.append(meta.getClass().getSimpleName()).append('-').append(meta.getName());
      } else {
        sb.append(UNKNOWN_RESOURCE);
      }
    } else {
      sb.append(jobId.replace('\t', '-'));
    }

    Date modifiedDate = meta.getModifiedDate();
    Date creationDate = meta.getCreatedDate();
    sb.append('-').append(
        (modifiedDate != null && creationDate != null && modifiedDate.after(creationDate))
            ? modifiedDate.getTime() : (creationDate == null ? -1 : creationDate.getTime()));

    String host = server == null ? null : server.getHostname();
    String port = server == null ? null : server.getPort();
    VariableSpace space = server.getParentVariableSpace();
    if (space != null) {
      host = space.environmentSubstitute(host);
      port = space.environmentSubstitute(port);
    }

    return sb.append('@').append(host).append(':').append(port).toString();
  }

  public static Map<String, String> buildRequestParameters(String resourceName,
                                                           Map<String, String> params,
                                                           Map<String, String> vars) {
    Map<String, String> map = new HashMap<String, String>();

    if (!Strings.isNullOrEmpty(resourceName)) {
      map.put(KEY_ETL_CACHE_ID, resourceName);
    }

    if (params != null) {
      String requestId = params.get(KEY_ETL_REQUEST_ID);
      if (!Strings.isNullOrEmpty(requestId)) {
        map.put(KEY_ETL_REQUEST_ID, requestId);
      }
    }

    if (vars != null) {
      String requestId = vars.get(KEY_ETL_REQUEST_ID);
      if (!Strings.isNullOrEmpty(requestId)) {
        map.put(KEY_ETL_REQUEST_ID, requestId);
      }
    }

    return map;
  }

  public static void updateParametersAndCache(HttpServletRequest request, NamedParams params,
                                              VariableSpace vars, String carteObjectId) {
    String cacheId = request == null ? null : request.getHeader(KEY_ETL_CACHE_ID);
    String requestId = request == null ? null : request.getHeader(KEY_ETL_REQUEST_ID);

    if (!Strings.isNullOrEmpty(requestId)) {
      try {
        params.setParameterValue(KEY_ETL_REQUEST_ID, requestId);
      } catch (UnknownParamException e) {
        // this should not happen
      }
      if (vars != null) {
        vars.setVariable(KEY_ETL_REQUEST_ID, requestId);
      }
    }

    // update cache
    if (!Strings.isNullOrEmpty(cacheId) && !Strings.isNullOrEmpty(carteObjectId)) {
      cacheIdentity(cacheId, carteObjectId);
    }
  }

  /**
   * Retrieve a unique id generated for the given resource if it's been cached.
   *
   * @param resourceName name of the resource, usually a file name (for job and trans)
   * @return the cached identity, or null if caching is disabled or the entry is absent
   */
  public static String getCachedIdentity(String resourceName) {
    return RESOURCE_CACHE_DISABLED ? null : resourceCache.getIfPresent(resourceName);
  }

  public static Map.Entry<String, String> getCachedEntry(
      AbstractMeta meta, Map<String, String> params, SlaveServer server) {
    String resourceName = buildResourceName(meta, params, server);
    String identity = getCachedIdentity(resourceName);

    if (Strings.isNullOrEmpty(identity)) {
      // don't give up so quick as this might be cached on slave server
      try {
        String reply = server.execService(GetCacheStatusServlet.CONTEXT_PATH
            + "/?name=" + URLEncoder.encode(resourceName, "UTF-8"));
        WebResult webResult = WebResult.fromXMLString(reply);
        if (webResult.getResult().equalsIgnoreCase(WebResult.STRING_OK)) {
          identity = webResult.getId();
          // cache the missing entry in local to reduce future network calls
          cacheIdentity(resourceName, identity);
          logBasic(server, new StringBuilder()
              .append("Found ").append(resourceName).append('=').append(identity)
              .append(" on remote slave server").toString());
        }
      } catch (Exception e) {
        // ignore as this is usually a network issue
      }
    }

    // let's see if the slave server still got this
    if (!Strings.isNullOrEmpty(identity)) {
      try {
        if (meta instanceof JobMeta) {
          SlaveServerJobStatus status = server.getJobStatus(meta.getName(), identity, Integer.MAX_VALUE);
          if (status.getResult() == null) {
            // it's possible that the job is still running
            logBasic(server, new StringBuilder()
                .append(resourceName).append('=').append(identity)
                .append(" is invalidated due to status [")
                .append(status.getStatusDescription()).append(']').toString());
            invalidate(resourceName);
            identity = null;
          }
        } else if (meta instanceof TransMeta) {
          SlaveServerTransStatus status = server.getTransStatus(meta.getName(), identity, Integer.MAX_VALUE);
          if (status.getResult() == null) {
            // it's possible that the trans is still running
            logBasic(server, new StringBuilder()
                .append(resourceName).append('=').append(identity)
                .append(" is invalidated due to status [")
                .append(status.getStatusDescription()).append(']').toString());
            invalidate(resourceName);
            identity = null;
          }
        }
        // who knows if someday there's a new type...
      } catch (Exception e) {
        // ignore as this is usually a network issue
      }
    }

    return new AbstractMap.SimpleImmutableEntry<String, String>(resourceName, identity);
  }

  /**
   * Cache the identity.
   *
   * @param resourceName resource name
   * @param identity     identity
   */
  public static void cacheIdentity(String resourceName, String identity) {
    if (!RESOURCE_CACHE_DISABLED) {
      resourceCache.put(resourceName, identity);
    }
  }

  public static void cacheIdentity(AbstractMeta meta, Map<String, String> params,
                                   SlaveServer server, String identity) {
    cacheIdentity(buildResourceName(meta, params, server), identity);
  }

  public static void invalidate(AbstractMeta meta, Map<String, String> params, SlaveServer server) {
    invalidate(buildResourceName(meta, params, server));
  }

  public static void invalidate(String resourceName) {
    resourceCache.invalidate(resourceName);
  }

  public static void invalidateAll() {
    resourceCache.invalidateAll();
  }

  public static String getStats() {
    StringBuilder sb = new StringBuilder(resourceCache.stats().toString());
    try {
      Map<String, String> map = resourceCache.asMap();
      for (String key : map.keySet()) {
        sb.append(Const.CR).append(key);
      }
    } catch (Exception e) {
      // ignore
    }
    return sb.toString();
  }

  private ServerCache() {
  }
}
This article characterizes the set of correlated equilibria that result from open negotiations, which players conduct prior to playing a strategic game. A negotiation-proof correlated equilibrium is defined as a correlated strategy in which the negotiation process among all of the players prevents the formation of any improving coalitional deviation. Additionally, this notion of equilibrium is adapted to general games with incomplete information.
package com.main.hyj.pattern.singleton.seriable;

import java.io.Serializable;

/**
 * create by flytohyj 2019/7/14
 **/
public class SeriableSingleton implements Serializable {
    // Serialization converts an object's in-memory state into a byte stream
    // that can be written through an I/O stream to another location (disk,
    // network I/O), persisting the in-memory state.
    //
    // Deserialization reads the persisted bytes back through an I/O stream
    // and converts them into a Java object; during this conversion a new
    // object is created with `new`.
    public final static SeriableSingleton INSTANCE = new SeriableSingleton();

    private SeriableSingleton() {
    }

    public static SeriableSingleton getInstance() {
        return INSTANCE;
    }

    // readResolve() is invoked during deserialization and makes the JVM
    // return the existing INSTANCE instead of the freshly created copy,
    // preserving the singleton guarantee.
    private Object readResolve() {
        return INSTANCE;
    }
}
Earlier this month, V for Vendetta illustrator David Lloyd was present as an international guest at Mumbai Comic-Con. We had the chance to sit down with the veteran comic book artist for an interview and also got to hear what he thinks about V coming to the small screen. A report from October by Bleeding Cool stated that Channel 4 was developing a V for Vendetta TV series. Furthermore, they claim to have confirmed the news with industry sources. During our chat with Lloyd, we asked the veteran comic creator about the rumored V for Vendetta TV series in development. Here's what he had to say (jump to the 3:20 mark in the video): "No, I only know what you know. Nobody has contacted me about it. I know nothing about it, apart from what I have read. I haven't been contacted. It would be nice to be contacted about it if they go ahead. But I'm not holding my breath for that. If they don't contact me or Alan, I just hope they make a good job of it," said Lloyd. When asked if the story could be better narrated as a TV series, Lloyd weighed in with his thoughts about bringing V for Vendetta to TV. "You can cover more of the material from the book in a series. But I don't think you need to do more than one season (laughs). I'm interested in the idea it's going to be adapted. You got more space than in a movie for a start. And they can be truer to the original source. Hopefully, they'll do a good job with it, if it happens," explained Lloyd. Since 2006, the year when V for Vendetta was brought to the big screen, Warner Bros., Marvel Studios and Fox have brought many more fictional characters to life in their cinematic universes. But it looks like there's just one movie that has managed to surprise Lloyd so far. DC fans would be glad to know that Lloyd was "very pleasantly surprised" after watching Wonder Woman and found it to be "great," while he didn't seem to have the same opinion of other titles in the superhero genre. The entire interview, where we talk about Lloyd's interest in Jack Kirby's character Nick Fury and more, can be found below.
from preprocessing import Preporcess
from training import Training
from postprocessing import Postprocess
from build_partitions import Build_partitions

prp = Preporcess('StateNames.csv', 'queries.csv')
t = Training(prp.training_data, 3)
pop = Postprocess(prp.training_data, t.model)
b = Build_partitions(prp.data_path)
The man suspected of killing five people in Aurora, Ill., on Friday should not have had the weapon given his criminal record, Aurora police confirmed Saturday. Gary Martin, 45, was carrying a Smith & Wesson handgun when he reported to work Friday at a manufacturing plant, according to police. He had a lengthy criminal record with six prior arrests, including one for a felony aggravated assault charge that prevented him from legally owning the weapon. “He was not supposed to be in possession of a firearm,” police Chief Kristen Ziman told reporters Saturday, noting police were investigating why the firearm had not been removed from Martin’s possession. Martin, a 15-year employee of the plant, opened fire after attending a meeting in which his employment was terminated, authorities said. The five deceased victims are believed to have been in that meeting or nearby in the plant. It is unclear if he was aware beforehand he was being fired, though authorities said they had no other information as to a possible motive. Martin died after a shootout with officers, police said. “He reported for work and during this meeting he was terminated. From my understanding from witnesses is that he opened fire right after the termination,” Ziman said Saturday. Police said officers from 25-35 different agencies responded to the shooting Friday, in which five police officers were also injured. Their injuries are not life-threatening, police said. Local officials held a press conference Friday to update the public on the incident and offer condolences for the victims, vowing to heal as a community. “We will heal. We will come together as one community and stand by those in pain from today’s great loss. We will stand together with those officers shot in the line of duty. We will come together and heal as one Aurora,” Mayor Richard Irvin said.
Effect of Rotating Shift on Biomarkers of Metabolic Syndrome and Inflammation among Health Personnel in Gaza Governorate

Background: It has been suggested that shift employment is linked to an increased risk of metabolic syndrome (MetS), a complex syndrome that has been linked to the development of cardiovascular disease and/or type 2 diabetes mellitus. The goal of the study was to determine the prevalence of MetS among health-care workers and to investigate the impact of rotating shift work on MetS biomarkers and inflammation.

Methods: 100 current daytime workers were compared to 210 rotating shift workers in a comparative analytical cross-sectional study involving 310 health care personnel. Data were collected with a questionnaire on socio-demographic characteristics (sex, age, marital status, job), health-related behaviors (such as physical activity), and occupational history of shift work, as well as a health examination with anthropometric and arterial blood pressure measurements and laboratory investigations. For the diagnosis and determination of MetS, we used the National Cholesterol Education Program Adult Treatment Panel III (ATP III) criteria. SPSS version 20 was used for the statistical analysis.

Results: The overall prevalence of MetS among healthcare workers was 8.4% (9.0% among current daytime workers and 8.1% among rotating shift workers), with no significant difference between males and females or between shift categories. Elevated C-reactive protein (44.5%) was the most commonly altered component among healthcare workers, followed by high triglycerides (35.5%), raised total cholesterol (24.8%), and elevated BMI > 30 (20.6%). In descending order, the main risk factors for MetS in both sexes among rotating shift workers were: high blood pressure (OR = 59.5; 95% CI, 16.4-215.8), high fasting blood sugar (OR = 43.9; 95% CI, 12.9-149.1), high triglycerides (OR = 42.3; 95% CI, 5.5-326.6), obesity (BMI > 30) (OR = 11.8; 95% CI, 4-34.6), and low HDL cholesterol (OR = 1.6; 95% CI, 0.3-6.1).

Conclusion: MetS was prevalent among Gaza Strip healthcare workers, with a consistent increase in prevalence with increasing age and higher BMI. There was no direct relationship between shift category and the occurrence of MetS and inflammation; however, other factors such as genetics, lifestyle factors, and the work itself may have a greater impact than shift category.
package incremental

import (
	"github.com/cloudfoundry-incubator/s3-blobstore-backup-restore/blobpath"
)

type BackedUpBlob struct {
	Path                string
	BackupDirectoryPath string
}

func (b BackedUpBlob) LiveBlobPath() string {
	return blobpath.TrimPrefix(b.Path, b.BackupDirectoryPath)
}

func joinBlobPath(prefix, suffix string) string {
	return blobpath.Join(prefix, suffix)
}
John Strong Sr.

Biography

John Strong was born on November 26, 1798, in Wroxton, Oxfordshire, England. He studied for the ministry, but did not find it to his liking. After his father's death, he emigrated to the United States, arriving in the early 1820s, likely in 1825. He settled in Greenfield Township, Michigan, by 1826, after a year spent living near Chatham, Ontario. He was a farmer, and as one of the few educated men in the area, he assisted the Native American and French residents with business and legal matters; he also sold many small tracts of land to incoming German colonists. Strong was a Democrat, and was elected to the Michigan House of Representatives in the first election following the adoption of the state constitution in 1835, serving through 1836. He was supervisor of Greenfield Township in 1836. Strong died in Greenfield Township on February 23, 1881.

Family

In 1827, Strong married Isabella Campbell, who was born in Scotland on January 25, 1810. They had six children: John Jr., Ann, George, Isabella, Elizabeth, and Sarah. Isabella Strong died on October 29, 1840. John Strong Jr. was elected to both the state house and senate, and also served as lieutenant governor of Michigan. Elizabeth's son John S. Haggerty served as Michigan Secretary of State.
from datetime import timedelta

from django.shortcuts import render_to_response
from django.template import RequestContext


# Create your views here.
def home(request):
    return render_to_response(
        'home.html',
        {
            'dt': timedelta(seconds=4 * 3600 + 3 * 60 + 21),  # 4h 3m 21s
            'dt2': timedelta(
                seconds=7 * 365 * 24 * 3600 + 65 * 24 * 3600 + 4 * 3600 + 3 * 60 + 21
            ),  # ~7y 65d 4h 3m 21s
        },
        RequestContext(request),
    )
Tesla's behaviour in the aftermath of news that a driver died while using the car's autopilot feature has been criticised by crisis management experts as 'error-filled'.

Tesla CEO Elon Musk has been bullish in his defense of his company following the death in May of a driver using the car's "autopilot" feature, even claiming that in the last year "500,000 people would have been saved" if the feature was widely available. But experts in crisis management have said the handling of events by both Musk and Tesla has been a "case study" of how not to do it.

Joshua Brown was killed in Florida in May after neither he nor the car's autopilot system detected a truck crossing the highway ahead of him. Nearly one month later, on the same day the National Highway Traffic Safety Administration (NHTSA) announced an investigation into the accident, Tesla released a blogpost which opened by defensively pointing out that Brown's death was "the first known fatality in just over 130 million miles". The following day, Musk retweeted Vanity Fair columnist Nick Bilton saying: "1.3 million people die a year in car accidents. Yet, 1 person dies in a Tesla on autopilot and people decry driverless cars as unsafe". And on Wednesday, in another blogpost, the company described Brown's death as a "statistical inevitability".

Musk earned his fortune through the payments site PayPal. The billionaire owns the private space company SpaceX, and has occasionally been compared to Tony Stark, the billionaire from Marvel's Iron Man comics. This week, Musk also joined the fray in emails to a journalist from Fortune magazine, claiming that half a million lives would have been saved last year if all cars were fitted with autopilot. The journalist was asking why Tesla notified the NHTSA of the accident immediately but waited roughly a month – until regulators opened an investigation – before announcing it to the public.

As Fortune reported, that delay is key because 11 days after the accident, but just over two weeks before it was made public, Tesla and Musk sold $2bn of Tesla stock. After the magazine published the article, Musk posted on Twitter again to attack it, saying that the accident "was material to you – BS article increased your advertising revenue. Just wasn't material to TSLA, as shown by market".

Jonathan Bernstein, president of the consulting firm Bernstein Crisis Management, said Musk's behaviour was a perfect case study in the wrong way to handle this sort of crisis. "What a CEO should do when there's a death associated with one of his company's products is respond, first and foremost, with compassion, and then with words that express competence and confidence," Bernstein said. "Musk seems to prefer angry defensiveness."

"Quoting statistics that explain why the death isn't so bad in the big picture has been proven time and time again to be quite ineffective in influencing public opinion," he added.

In a second blogpost released on Wednesday, Tesla addressed the Fortune article directly. "When Tesla told NHTSA about the accident on May 16th, we had barely started our investigation. Tesla informed NHTSA because it wanted to let NHTSA know about a death that had taken place in one of its vehicles," the post said, adding that they had not been able to send a Tesla investigator to Florida until 18 May – the day of the SEC filing.
The post also reiterates the statistical claims about the relative safety of the autopilot system. Musk, famously, does not respond well to criticism. In 2013, in response to a poor review in the New York Times for the Tesla Model S in which the car stalled on the freeway, Musk tweeted that the article was “fake”. In 2011, Musk and Tesla sued the BBC for libel and malicious falsehood following a bad review on the motoring show Top Gear. The suit was later thrown out. “Unfortunately, I think Tesla has lost a lot of credibility in how they’ve handled this crisis and the death of one of their customers,” said Sam Singer, the president of crisis communications firm Singer Associates. “It’s been an error-filled response since the very beginning.” He said that the company should have immediately issued a warning. Now, he said, Tesla is in a mess of its own making. “I think the way the public views this whole situation is one of a cover-up, rightly or wrongly,” he said. When something goes wrong, Singer said, the best thing to do is disclose it immediately. According to Bernstein, resorting – as Musk did – to statistics to try to put an accident or malfunction which resulted in a death into a wider context, however well-meaning, was ill-judged. “I haven’t seen anybody foolish enough to try the statistics approach in a long time,” he said. Asked what advice he would give to Musk, Bernstein said that he should “take a step back, take a deep breath, and practice delivering a message that communicates compassion, confidence, and competence”. “And if you can’t do that, keep your mouth shut,” he added.
Effects of transcriptional noise on estimates of gene and transcript expression in RNA sequencing experiments

RNA sequencing is widely used to measure gene expression across a vast range of animal and plant tissues and conditions. Most studies of computational methods for gene expression analysis use simulated data to evaluate the accuracy of these methods. These simulations typically include reads generated from known genes at varying levels of expression. Until now, simulations did not include reads from noisy transcripts, which might include erroneous transcription, erroneous splicing, and other processes that affect transcription in living cells. Here we examine the effects of realistic amounts of transcriptional noise on the ability of leading computational methods to assemble and quantify the genes and transcripts in an RNA sequencing experiment. We show that the inclusion of noise leads to systematic errors in the ability of these programs to measure expression, including systematic underestimates of transcript abundance levels and large increases in the number of false-positive genes and transcripts. Our results also suggest that alignment-free computational methods sometimes fail to detect transcripts expressed at relatively low levels.

Over the past decade, many computational methods have been developed to analyze data from RNA sequencing (RNA-seq) experiments. The primary data from these experiments consist of a large collection of short sequencing reads, usually 100-150 bp in length, that themselves derive from transcribed RNA molecules in a tissue sample. Genome-guided transcriptome assembly methods map these reads to the genome of the target organism and then reconstruct and quantify full-length RNA molecules from the alignments. Alignment-free methods (Patro et al., 2017) use annotated transcripts to construct an index for exact lookup of short subsequences (k-mers). K-mers in each sequenced read are then matched against the index to determine which transcripts produced each read. These methods run much faster because they skip the alignment step, but they give up the ability to detect any genes or transcripts that are not already present in the annotation. Both types of algorithms produce, for each transcript detected, an estimate of the level of expression of that transcript. These expression-level estimates are, in turn, used to determine expression values of full genes and to compute which genes and transcripts are differentially expressed in different experimental samples.

In testing and evaluating methods for RNA-seq data analysis, many published reports have relied on simulated data (Patro et al., 2017; Shao and Kingsford 2017). For example, Patro et al. used simulated data sets generated by Polyester and RSEM-sim (Li and Dewey 2011), Bray et al. used RSEM, and Kovaka et al. used FluxSimulator to generate reads. The reads themselves included sequencing errors, but all transcripts produced by the simulators were considered to be correct for the purposes of evaluating the RNA-seq abundance estimations. The need for simulations arises because we do not have an RNA-seq data set for which the ground truth is known, that is, for which we know precisely which genes and transcripts were expressed and at what levels they were present. Most evaluations therefore rely on a combination of real and simulated data to estimate the accuracy of these methods.
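To make the alignment-free idea concrete, here is a toy sketch of a k-mer index; it illustrates only the lookup principle, not the far more sophisticated index and equivalence-class machinery that tools such as Salmon and kallisto actually use, and the sequences are made up.

from collections import defaultdict

K = 5  # real tools use k around 31; 5 keeps this toy example readable

# Toy "annotated transcripts"; the sequences are illustrative only.
transcripts = {
    "tx1": "ACGTACGTGGA",
    "tx2": "TTGACGTACGA",
}

# Index: k-mer -> set of transcripts containing that k-mer.
index = defaultdict(set)
for name, seq in transcripts.items():
    for i in range(len(seq) - K + 1):
        index[seq[i:i + K]].add(name)

def candidate_transcripts(read):
    """Intersect the transcript sets of all k-mers in the read."""
    hits = None
    for i in range(len(read) - K + 1):
        txs = index.get(read[i:i + K], set())
        hits = txs if hits is None else hits & txs
    return hits or set()

print(candidate_transcripts("CGTACGT"))  # {'tx1'}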
Several biases in RNA-seq protocols have been investigated as potential confounding factors in the downstream analysis (Ma and Kingsford 2019). These observations led to the development of targeted simulation protocols for accurate comparison of abundance quantification methods (Li and Dewey 2011). Additionally, recent interest in the analysis of prespliced mRNA molecules led to modifications of quantification methods to account for the presence of unspliced isoforms for specific protocols.

Recent studies have shown that the human transcriptome has many "noisy" transcripts, that is, transcribed RNA sequences that do not represent functional genes (Struhl 2007). These noisy transcripts have been estimated to comprise up to one-third of the RNA molecules in a cell, although most of them are present at very low levels (Palazzo and Lee 2015). Despite this phenomenon, no previous study has simulated noisy transcripts as part of an evaluation of RNA-seq assembly or quantification programs. We wanted to test the hypothesis that the presence of noise might have a significant effect on the output of these programs.

In this study, we first developed a collection of methods to quantify the amount of noise in bulk RNA-seq experiments and to create simulated data sets that would reflect the types and abundance of transcriptional noise that were observed in large studies such as GTEx (The GTEx Consortium 2013). This noise includes intergenic transcripts, erroneous splicing, and incompletely spliced transcripts. We then generated a set of simulated RNA-seq experiments and analyzed them with three of the leading programs for estimating gene expression: StringTie2, Salmon, and kallisto.

Properties of transcription in GTEx and simulation

In this analysis, we investigated four distinct types of biological and technical variation, by partitioning transcriptome assemblies previously computed from the GTEx data set into known transcripts, erroneous transcripts caused by retained introns ("intronic noise"), erroneous transcripts caused by the use of the wrong splice site ("splicing noise"), and erroneous transcripts caused by transcription in intergenic regions ("intergenic"), as summarized in Table 1.

In our analysis, we found that most known genes were expressed in at least one sample of a typical tissue (Fig. 1A). In contrast, fewer than half of both known loci and isoforms were actively expressed in a typical sample (Fig. 1B,C). We also found that known transcripts were more likely to occur in multiple samples of the same tissue (∼26%) compared with noisy transcripts (1.8% for intergenic noise, 0.5% for intronic noise, and 1.4% for splicing noise). Thus, although the complete GTEx data set contained a much higher number of noisy transcripts overall, at the level of a particular tissue, the number of noisy transcripts was generally lower than the number of real ones (Fig. 1B,C).

Each type of transcription in our analysis displayed distinct expression properties within the data set. Importantly, we found known isoforms to dominate the expression within annotated genes. For a typical gene in our simulation, 80%–90% of the reads derived from known isoforms, similar to the proportion observed in the GTEx data set. As shown in Figure 1E, between 17% and 32% of transcription in our simulation comes from noisy isoforms (mean ∼25%).
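The four-way partitioning summarized in Table 1 can be sketched as follows, assuming a simplified interval representation of transcripts and annotation; gffcompare, which the study actually uses (see Methods), implements these comparisons with many more cases. All data structures here are hypothetical.

def classify_transcript(tx_introns, tx_span, annotation):
    """Classify an assembled transcript against an annotation using the
    four categories in the text. `annotation` is assumed to be a list of
    genes, each a dict with a genomic 'span', a set of 'intron_chains'
    (tuples of (start, end) pairs), and a list of 'introns'."""
    for gene in annotation:
        if not (tx_span[0] < gene["span"][1] and tx_span[1] > gene["span"][0]):
            continue  # no overlap with this gene locus
        if tuple(tx_introns) in gene["intron_chains"]:
            return "known"           # all introns match an annotated isoform
        if any(tx_span[0] >= s and tx_span[1] <= e for s, e in gene["introns"]):
            return "intronic_noise"  # fully contained in an annotated intron
        return "splicing_noise"      # overlaps the locus with novel structure
    return "intergenic_noise"        # overlaps no annotated locus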
This is comparable to the average of 25.7% noisy expression observed across samples in GTEx, with intergenic, intronic, and splicing noisy transcripts comprising on average 4.8%, 2.1%, and 18.8%, respectively, of the total expression (Supplemental Fig. S1). We applied our simulation protocol (Methods) to create a data set composed of three tissues, each represented by 10 samples. Our comparisons between inter- and intra-sample properties of the simulated data set revealed that all targeted properties of the GTEx data set were preserved, while providing a high degree of randomization.

Abundance estimation

In our analysis below, we present results based on the cumulative contribution of different types of noisy expression to the output of RNA-seq analysis tools. A breakdown of the results by specific type of noise is presented in Supplemental Figures S2 through S8.

Transcript-level effects

For all methods considered here, the introduction of noisy expression led to a consistent increase in the number of transcripts falsely identified as expressed (Fig. 2A). False-positive rates (FPRs) and false-negative rates (FNRs) both in the absence and presence of noise are reported in Supplemental Figures S7 and S8. We observed that StringTie2 had both the smallest number of false positives (FPs) in the absence of noise (mean = 18,844; FPR = 7%) and the smallest increase in FPs, bringing its average up to 23,494 (∼25% increase; FPR = 8%). In comparison, Salmon had a slightly higher number of FPs in the absence of noise (mean = 21,546; FPR = 8%), but these had a much greater increase of ∼70% (mean = 36,677; FPR = 13%) in the presence of noise. The number of FP observations for kallisto was highest with noise-free data (mean = 34,316; FPR = 12%), and when noise was added, it produced the largest number of FP transcripts, averaging more than 51,000 (∼50% increase; FPR = 18%). On average, methods reported similar sets of FP transcripts across simulated samples, with greater similarity observed between Salmon and kallisto (Supplemental Fig. S9).

A common strategy to reduce FPs from RNA-seq analysis is to eliminate isoforms with low expression using predefined thresholds. To account for this in our analysis, we examined abundance estimates of the FP transcripts (Fig. 2B). We observed that StringTie2's FPs had the lowest median abundance, at 0.14 transcripts per million (TPM) with noise and 0.15 TPM without noise. Salmon and kallisto, in contrast, assigned substantially greater abundance to their FPs. Specifically, their median abundance estimates in the absence of noise were 0.4 TPM (Salmon) and 0.19 TPM (kallisto), and when noise was included, these increased to 0.85 and 0.39, respectively. In addition, Salmon's and kallisto's abundance estimates showed a larger statistical dispersion, with many FPs having an expression level above 2.0. If we used a minimum TPM threshold of one, as is sometimes done in RNA-seq analysis, then across 30 simulated samples in the absence of noise, Salmon and kallisto reported 262,085 and 290,537 FP transcripts, respectively, whereas StringTie2 reported 126,735. When noisy transcripts were added, all of these numbers went up, but the increases were much greater for Salmon and kallisto. In particular, across the 30 samples with noise, StringTie2 reported 171,087 FP transcripts with expression >1 TPM, whereas Salmon and kallisto reported 524,694 and 588,177, respectively.
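A minimal sketch of the threshold-based filtering discussed above, assuming the estimates have already been parsed into a pandas DataFrame with hypothetical column names:

import pandas as pd

def count_false_positives(estimates: pd.DataFrame, truth: set, min_tpm: float = 1.0) -> int:
    """Count reported transcripts that were never simulated and that
    survive a minimum-TPM filter. `estimates` is assumed to carry columns
    'transcript_id' and 'TPM' (as in lightly parsed StringTie2, Salmon, or
    kallisto output); `truth` is the set of simulated transcript IDs."""
    expressed = estimates[estimates["TPM"] >= min_tpm]
    return int((~expressed["transcript_id"].isin(truth)).sum())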
We then evaluated the number of false negatives (FNs), that is, the number of transcripts that appeared in the simulated data but that each program failed to identify. For kallisto, we observed the smallest number of FNs in the absence of noise (mean = 1233; FNR = 5%) with an increase of ∼41% (FNR = 7%) after the introduction of noise (Fig. 2C). StringTie2 had more FNs (mean = 2109; FNR = 8%), but this number actually decreased by ∼1.1% when noise was added. We found that Salmon had the greatest number of FNs (mean = 3061; FNR = 12%), which increased ∼12% (FNR = 13%) after the introduction of noise. In contrast with the FPs, however, the FNs reported by StringTie2 were consistently different from those reported by Salmon and kallisto (Supplemental Fig. S9). As with the FPs, StringTie2's FNs were expressed at very low levels, with nearly all of them having TPM < 1 (Fig. 2D). By looking at the simulated abundance of FNs, we observed that both in the absence and presence of noise, StringTie2's FNs had a median expression of 0.4 TPM. The FNs for Salmon and kallisto, in contrast, had much higher median expression, at 2.02 and 1.84 TPM, respectively (for noise-free data), and slightly higher in the presence of noise. Additionally, among all transcripts with TPM > 1 across all 30 samples, Salmon and kallisto failed to identify 66,659 and 24,064 transcripts, respectively, compared with StringTie2 missing just 14,079 transcripts. When noise was introduced, this number increased to 14,289 for StringTie2, whereas Salmon's and kallisto's total FNs increased to 77,644 and 36,871, respectively.

We hypothesized that the introduction of transcriptional noise into the samples might increase abundance estimates proportionally with the number of noisy reads that overlapped annotated sequences. Instead, for all three methods, we observed reported abundances to be, on average, 20% lower than the estimate in the absence of noise (Supplemental Fig. S6). We suspect that the decrease occurs because noisy transcripts led the programs to report a greater number of FPs (as shown in Fig. 2A), which then absorbed many of the reads that instead should have been assigned to true-positive transcripts and altered the normalization factor of the TPM calculation.

Lastly, by analyzing the contributions of the three types of transcriptional noise, we found that >99% of the effects on RNA-seq analysis programs are caused by splicing noise. Transcriptional noise that is entirely contained within introns or that is purely intergenic had little effect on the ability of these programs to measure expression.

Gene-level effects

Transcriptional noise produces similar effects when we consider gene-level (as opposed to transcript-level) abundance estimates (Fig. 3; Supplemental Figs. S10, S11). A tradeoff between specificity and sensitivity was observed for all methods in our comparison: StringTie2 had the fewest FPs but the highest number of FNs, whereas Salmon and kallisto each had far more FPs but very few completely missed genes. The addition of transcriptional noise increased the number of FPs for all the methods (Fig. 3A), but in contrast to the transcript-level results, noise had almost no effect on the rate of FNs (Fig. 3B). Likewise, all three evaluated methods tended to report similar sets of genes as FPs on average, whereas overlaps between FNs were generally much smaller (Supplemental Fig. S12). Furthermore, our analysis confirmed an expected relationship between the accuracy of expression estimates and the amount of noise relative to the expression of a locus (Fig.
3C): All methods were affected more in regions where more reads came from noise. We observed that the introduction of noise had a much greater effect on the accuracy of gene-level quantification of pseudoalignment algorithms, even though the estimates for loci where only a small fraction of expression came from noise were better than for alignment-based assembly methods.

Discussion

Our understanding of transcriptional processes in complex genomes is still incomplete. In particular, we do not yet know the extent of erroneous transcription, whether it is caused by splicing errors, read-through events, or other factors (Palazzo and Lee 2015). Until the transcriptome is studied and understood more thoroughly, relying too strongly on a predefined set of expressed sequences may lead to substantial errors in the downstream analysis. The experiments described here show that perfect, noise-free simulations present an inaccurate picture of the performance of methods for assembly and quantification of RNA-seq experiments. Although other biases in RNA-seq experiments have been shown to confound results, the presence of transcriptional noise (that is, transcripts that do not represent functional genes) in the data may lead to both under- and overestimates of expression.

High numbers of FPs in expression analysis may propagate downstream in unexpected ways. Even in the absence of noise, our analysis showed that the leading programs generated thousands of FP transcripts, and the addition of noise added thousands more. These FPs, in turn, seemed to absorb many of the reads from true positives, with the result that all methods reduced their average estimates of expression levels by ∼20% when realistic amounts of transcriptional noise were present (Supplemental Fig. S6). Although we noticed a similar reduction of expression for all methods, the reasons for this change are probably different between methods. Whereas alignment-free methods incorrectly allocated reads to other annotated isoforms that were not expressed, StringTie2 used those reads to assemble novel isoforms, mostly at low expression levels.

After adding noisy transcription to our simulated data, we observed increases in both FPs and FNs as compared with noise-free controls. We speculate that such observations are primarily explained by the fact that the majority of loci in a given tissue express only a small number of functional molecules, while producing many overlapping nonfunctional splicing variants. Reads from these noisy transcripts are sometimes counted toward expression of nonexpressed isoforms, creating FP results, such as the one illustrated in Supplemental Figure S13.

Repetitive elements in the genome could also explain some of the FPs that we observed. We found that close to 50% of bases in intronic and intergenic isoforms overlap regions masked by RepeatMasker, whereas splicing noise transcripts overlap a much lower proportion of repeats, similar to the fraction of repeats in known transcripts (Supplemental Fig. S14). However, because our analysis of the GTEx data set revealed that very few intronic and intergenic transcripts exist within a sample (Fig. 1C), the high percentage of repeats contained in those regions is unlikely to significantly alter the results.

In our analysis, we also observed a slight decrease in FN transcripts for StringTie2 after noise was added, in contrast with the increases observed for alignment-free methods.
Because assembly methods are dependent on sufficient and complete coverage of sequenced molecules, the introduction of reads from noisy transcripts that overlap true positives may have helped cover some of the problematic areas, aiding in the assembly process.

At the gene level, StringTie2 had fewer than 1000 FPs, and this number increased only modestly when noise was present. In contrast, both Salmon and kallisto reported 6000–8000 FP genes, and these numbers increased by approximately 5000 in the presence of noise. However, the number of completely missed genes (FNs) for both Salmon and kallisto was very low, regardless of the presence of noise. This makes sense, as both programs rely heavily on a predefined gene list that, in our simulations, contained all the true-positive genes. Thus, they were able to detect the true genes accurately, even those expressed at low levels.

Another phenomenon observed here was that when abundance was measured in TPM, transcriptome assembly methods such as StringTie2 inherently produced lower abundance estimates than did alignment-free methods, because TPMs are normalized with respect to the effective length of transcripts. Annotation-dependent methods will always have the same total length of expressed transcripts, which is provided by the reference annotation. In contrast, assembly methods produce a unique transcriptome for each experiment, which affects the TPM normalization for length and results in lower TPM values when the fragmentation of the transcriptome is increased. This finding is in agreement with previous reports, and although this phenomenon tends to result in an underestimate of expression values, the same property may aid in the filtering of FPs.

Figure 3. Effects of noisy transcription on gene-level abundance estimation. (A) Distributions of the number of FP genes per sample, that is, the number of reported gene loci at which no actual transcripts were expressed. (B) Distributions of the number of FN genes per sample, that is, the number of gene loci for which the simulated data contained at least one expressed transcript but where the program failed to report any. (C) Percentage of change in the number of reads assigned to a gene as a function of the fraction of expression at that locus that comes from unannotated transcripts. Percentage of change was computed relative to the total number of reads simulated for all annotated transcripts at each locus. Only loci with more than zero reads from annotated transcripts are shown.

We should point out that some of the transcripts that were considered noisy in the CHESS data set, and from which we modeled our simulated noisy data, might in fact come from rarely expressed but functional isoforms. The results presented here were computed relative to the set of annotated transcripts, and therefore, the conclusions about how their abundances are affected should not change if some of those isoforms later turn out to be real.

Although our findings indicate that all methods are challenged by the presence of transcriptional noise, effects on accuracy differ among the methods. For applications that require higher sensitivity, Salmon and kallisto might be preferred (Figs. 2, 3). With 90% of total expression coming from real isoforms in a typical gene (Fig. 1D), all methods showed highly accurate abundance estimation (Fig. 3C). Similar observations extend to the transcript level.
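The length-normalization effect described above can be reproduced with a toy TPM calculation; the numbers below are illustrative only and not taken from the study.

def tpm(read_counts, effective_lengths):
    """Compute TPM from read counts and effective transcript lengths."""
    rates = {t: read_counts[t] / effective_lengths[t] for t in read_counts}
    denom = sum(rates.values())
    return {t: 1e6 * r / denom for t, r in rates.items()}

# Identical counts for a true transcript, but the assembled transcriptome
# carries extra low-level isoforms that inflate the normalization factor.
annotation_only = tpm({"tx1": 1000}, {"tx1": 2000})
with_assembly = tpm({"tx1": 1000, "novel1": 50, "novel2": 80},
                    {"tx1": 2000, "novel1": 900, "novel2": 1200})
print(annotation_only["tx1"], with_assembly["tx1"])  # tx1's TPM drops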
However, in applications in which one is interested in knowing the precise isoform mixture at each locus, a lower rate of FPs might be preferred, in which case StringTie2 has an edge because of its ability to assemble unannotated isoforms and because its FPs have lower abundances.

It is important to understand that our analysis might in fact be underestimating the scope of the problem. Our filtering criteria, which we used to remove redundancy and less prevalent types of noisy transcription from GTEx, resulted in the removal of approximately 10 million assembled molecules, nearly one-third of the total number. Because nonintergenic noisy transcripts not included in our analysis would by definition have overlapped annotated features, the bias introduced by the reads contained in those transcripts would likely lead to a greater effect on the accuracy of RNA-seq analysis programs.

Finally, we hope that the approach used in this study will guide future assessments of RNA-seq abundance quantification methods by providing a set of simulated data sets that were based on curated experimental data and that include realistic amounts of transcriptional noise outside of the annotated transcripts. If new tools and protocols continue to be evaluated without accounting for unannotated transcription, we will be left with an incomplete and possibly erroneous perception of their performance.

Methods

The tools described here were designed to create realistic simulations at the multitissue level by computing parameters from nearly 10,000 RNA-seq experiments produced as part of the GTEx project (The GTEx Consortium 2013). We computed gene- and transcript-level expression values from the simulated data using three state-of-the-art quantification methods: StringTie v.2.1.2, Salmon v.0.14.0, and kallisto v.0.46.1.

Data filtering

We examined transcriptome assemblies of the GTEx data set from the CHESS project at the level of individual samples, at the level of individual tissues, and across the full data set. To reduce the number of confounding factors in our analysis, we removed any noisy isoforms that overlapped annotated loci on the opposite strand, contained annotated loci within their introns, or were in close proximity to known genes but did not overlap their exons. After filtering, we retained 20,748,278 assembled isoforms out of the initial set of about 30 million.

Typing of transcripts

Transcripts were compared to the full database of CHESS genes and transcripts using gffcompare (Pertea and Pertea 2020). We labeled a transcript as real if all its introns matched a transcript found in the CHESS annotation (small differences in the positions at the beginning and end of transcription were disregarded). If an isoform was contained within an intronic region of a known gene, it was labeled as intronic noise. Splicing variants that overlapped known loci but that contained unannotated exons, introns, or exon chains were labeled as splicing noise. Transcripts sharing no overlap with the annotated loci were labeled as intergenic noise (Table 1).

Quantification

By using the mappings between annotated and noisy transcripts across levels of assembly, we quantified a set of summary parameters of the GTEx data set.

Simulated tissue and sample generation

After choosing a set of transcripts from randomly selected loci to be expressed in a tissue, we proceeded by generating a set of possible expression values for each transcript.
In this step, observations from different types of transcripts (real + noisy) were treated jointly for each locus in a simulated sample. Sample-level observations were similarly grouped and treated jointly at the tissue level. This step was required to preserve the inherent relationships between transcripts of different types in a single sample and expression of the same transcript in different samples from the same tissue. Annotations and expression values for each sample were simulated next by randomly picking one set of possible transcript observations for each locus. The order of transcripts and corresponding expression values were shuffled before being linked and remained constant for each sample. This guaranteed preservation of any relationships between expression of transcripts in different samples of the same tissue, observed in the modeled data set (Fig. 1A).

Read counts per transcript

We then calculated the expression values to be used with the Polyester simulator. Polyester requires coverage to be provided for each transcript in a simulation. To compute the target number of reads to be simulated, we reversed the TPM calculation:

C_i = \frac{E_i \cdot N \cdot l}{\sum_j E_j \cdot L_j}

where C_i is the coverage of transcript i, E_i is its expression measured in TPM, L_i is its length, N is the number of reads in the sample, and l is the read length (Li and Dewey 2011).

Simulation parameters

For our analysis, we simulated three hypothetical tissues, each containing 10 samples. Single-end 101-bp reads for each sample were generated using Polyester with an error rate of 0.4%. In our preliminary analysis, we noticed Polyester was unable to accurately model paired-end sequences. In particular, for transcripts shorter than the fragment length, Polyester left gaps in the coverage. We also observed that Polyester was unable to extend read coverage to the end of the last exon in the transcript when simulating single-end reads. We were able to bypass these issues by simulating single-end reads with Polyester and setting the fragment length to be the same as the read length, with a standard deviation of zero. We combined reads generated from real transcripts, splicing noise, intronic noise, and intergenic noise transcripts together for each sample.

Analysis

For quantification of genes and transcripts, we used three of the most widely used current methods for transcriptome quantification: StringTie2, Salmon, and kallisto. Each method was run using the recommended parameters, as described in the Supplemental Methods. For StringTie2, alignments were produced using HISAT2 v.2.2. To avoid unnecessary complexity, we restricted our analysis to the primary chromosomes of the GRCh38.12 human assembly, excluding all alternative scaffolds and patches. For annotation, we used the version of the CHESS 2.2 human gene catalog tailored for transcriptome assembly, which is also restricted to main scaffolds only. Additionally, in our analysis we took care to avoid creating differences in gene expression that might be due to the normalization method. TPM is widely used to measure gene expression because it is more stable than other abundance metrics; however, TPM values depend on the cumulative effective length of the underlying transcriptome being quantified. Because our comparison includes a method (StringTie2) that discovers novel isoforms, the normalization factor in TPM computation is very different from the one used by Salmon and kallisto, which rely exclusively on a predefined annotation.
These differences result in different TPM values, even where the read counts and inferred transcript lengths are the same. Wherever possible, therefore, normalized expression values were compared as a percentage of change from estimates obtained in the absence of noise to estimates computed in the presence of noise within each method. For fairness when evaluating FNs, simulated TPMs were computed based on all transcripts present in a sample (real, splicing, intronic, intergenic) as well as on all transcripts in the sample that matched the annotation.

Gene-level abundance

Each method in our analysis estimates abundances at the transcript level by default. Computing abundances at the gene level involves using separate tools for different methods. Abundance measurements such as TPM typically factor the cumulative effective length of the transcribed sequences into the equation (Li and Dewey 2011). This presents a distinct challenge for comparing annotation-agnostic methods such as StringTie2 to pseudomapping approaches like Salmon and kallisto, which always rely on a predefined set of transcribed sequences. To reduce the impact of the difference in normalization factors, we performed gene-level abundance comparisons based on raw read-count aggregation.
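Two small sketches of the computations described under Methods, with hypothetical data structures: the first derives per-transcript coverage from simulated TPM values using the reversal shown above (which is itself a reconstruction; the study's exact formula is not reproduced in this excerpt), and the second aggregates raw transcript read counts to gene level, assuming a transcript-to-gene map parsed from the annotation.

def coverage_for_polyester(tpm, lengths, n_reads, read_len):
    """Per-transcript fold coverage implied by TPM values.
    tpm, lengths: dicts keyed by transcript ID; n_reads: total reads N;
    read_len: read length l."""
    denom = sum(tpm[t] * lengths[t] for t in tpm)  # proportional to nucleotide fraction
    return {t: n_reads * read_len * tpm[t] / denom for t in tpm}

def gene_counts(tx_counts, tx2gene):
    """Aggregate raw transcript read counts to gene level; `tx2gene`
    maps transcript IDs to gene IDs (assumed available from the GTF)."""
    out = {}
    for tx, c in tx_counts.items():
        g = tx2gene[tx]
        out[g] = out.get(g, 0.0) + c
    return out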
/**
 * Returns whether an in-progress EntityAIBase should continue executing.
 */
public boolean canContinueToUse() {
    if (this.canScare()) {
        // Only enforce the stillness checks while the player is close
        // (36.0 = 6 blocks, squared to avoid a square root).
        if (this.mob.distanceToSqr(this.player) < 36.0D) {
            // Abort if the player moved more than ~0.1 blocks from the
            // recorded position (0.1 * 0.1 = 0.010000000000000002 in doubles)...
            if (this.player.distanceToSqr(this.px, this.py, this.pz) > 0.010000000000000002D) {
                return false;
            }
            // ...or turned their head by more than 5 degrees on either axis.
            if (Math.abs((double)this.player.getXRot() - this.pRotX) > 5.0D
                    || Math.abs((double)this.player.getYRot() - this.pRotY) > 5.0D) {
                return false;
            }
        } else {
            // Player is far away: just keep tracking the current position.
            this.px = this.player.getX();
            this.py = this.player.getY();
            this.pz = this.player.getZ();
        }
        // Record the current look angles for the next tick's comparison.
        this.pRotX = (double)this.player.getXRot();
        this.pRotY = (double)this.player.getYRot();
    }
    return this.canUse();
}
A man is accused of breaking into a Houston, Texas couple's home and then getting into their bed with them. Jarred Morris said his girlfriend felt someone in bed next to her. At first, she thought it was Morris' son or maybe the cat. But then, she saw the intruder slumped down in the corner. Morris said the man eventually bolted out the front door. He still hasn't been caught.
Information and the Tortured Imagination

"Information and the Tortured Imagination": The logic of torture is such that the practice of torture necessarily entails its institutionalization. The way in which the moral argument for torture is structured also requires that torture's product (in the present case, information or intelligence) necessitates a re-description of reality (necessary to deflect any charge of brute oppression or sadism) that breaches the very logic of information-gathering, reducing it to absurdity. When such power of re-description lies in the hands of exclusive political actors, regardless of their intent, truth resides at the boundary of political tyranny formed out of a re-descriptive pseudo-reality. It is not simply the power to torture that is the sign of tyranny. It is the capacity to re-describe reality to fit immoral desires which include torture.
/* * Copyright 2011 Red Hat, Inc, and individual contributors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.projectodd.stilts.stomp.server.protocol; import org.jboss.logging.Logger; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.SimpleChannelUpstreamHandler; import org.projectodd.stilts.stomp.protocol.StompFrame; import org.projectodd.stilts.stomp.protocol.StompFrames; import org.projectodd.stilts.stomp.spi.StompConnection; import org.projectodd.stilts.stomp.spi.StompProvider; public abstract class AbstractProviderHandler extends SimpleChannelUpstreamHandler { private static Logger log = Logger.getLogger(AbstractProviderHandler.class); public AbstractProviderHandler(StompProvider provider, ConnectionContext context) { this.provider = provider; this.context = context; } public StompProvider getStompProvider() { return this.provider; } public ConnectionContext getContext() { return this.context; } public StompConnection getStompConnection() { return this.context.getStompConnection(); } protected ChannelFuture sendFrame(ChannelHandlerContext channelContext, StompFrame frame) { return channelContext.getChannel().write( frame ); } protected ChannelFuture sendError(ChannelHandlerContext channelContext, String message, StompFrame inReplyTo) { return sendFrame( channelContext, StompFrames.newErrorFrame( message, inReplyTo ) ); } protected void sendErrorAndClose(ChannelHandlerContext channelContext, String message, StompFrame inReplyTo) { getContext().setActive( false ); ChannelFuture future = sendError( channelContext, message, inReplyTo ); future.addListener( ChannelFutureListener.CLOSE ); } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { log.error( "An error occurred", e.getCause() ); } private StompProvider provider; private ConnectionContext context; }
Scimitar Syndrome: Late Presentation and Conservative Management

Partial anomalous pulmonary venous return (PAPVR) is a rare congenital malformation. The infracardiac variant, with the right lung draining to the inferior vena cava (IVC), is called Scimitar syndrome. The infantile subtype presents before one year of age, and the adult variant is also usually diagnosed in childhood. A 70-year-old woman presented with worsening shortness of breath. An echocardiogram suggested severe pulmonary hypertension that was confirmed by right heart catheterization. A computed tomography (CT) without contrast revealed an anomalous vein from the right lower lobe suggestive of Scimitar syndrome. The patient did not have any other associated congenital heart defects (CHD) (incomplete Scimitar syndrome). A surgical treatment approach was avoided due to the incomplete nature of the Scimitar syndrome. Incomplete Scimitar syndrome may present later and with less severity than the typical Scimitar syndrome, with left-to-right shunting occurring only in the lung, and may be managed nonsurgically.

Introduction

Infracardiac partial anomalous pulmonary venous return (PAPVR) is a rare congenital venous anomaly (one in a million live births) in which the right pulmonary veins drain to the inferior vena cava (IVC). This anomalous vein appears similar to a Turkish curved sword (the eponymous scimitar) on a chest X-ray. The infantile variant presents before one year of age. The adult variant presents later, and a median age of presentation has not been reported. Diagnosis of this anomaly and determining the management strategy can be challenging, especially in patients who present at an older age.

Case Presentation

Presentation

A 71-year-old lady had originally presented in 2015 with chest pain radiating to the back. An echocardiogram performed at that admission had noted elevated right ventricular systolic pressure, and she had been advised to follow up for further evaluation. She presented in 2017 with dyspnea on exertion for two weeks. She did not have any chest pain, fever, diaphoresis, cough, wheezing, orthopnea, palpitations, syncope or presyncope. Past medical history included hypertension, hyperlipidemia, diabetes mellitus and chronic kidney disease stage III. She did not have a history of recurrent respiratory infections. She had worked in the car industry with some exposure to chemical fumes. The patient denied ever having smoked or used recreational drugs. Her home medications included pantoprazole, clopidogrel, simvastatin, levothyroxine, venlafaxine, and glyburide.

The patient's blood pressure was elevated at 153/103. The other vital signs were within normal limits. She had a regular rhythm and pulse at a normal rate. Jugular venous distension was absent. No murmurs or bruits were heard, but the patient had a loud P2 over the pulmonic area. Leg edema was absent. Auscultation of the lungs revealed coarse breath sounds over the right lower lobe, but the rest of the lung fields were normal without any crackles or wheezes.

Investigations

A chest X-ray revealed stable cardiomegaly with prominence of the bilateral hila consistent with enlarged pulmonary arteries. A Scimitar vein was also seen (Figure 1).

FIGURE 1: Chest X-ray demonstrating the Scimitar vein (marked by yellow arrow).

A computed tomography (CT) scan of her chest without contrast identified mosaic interstitial lung disease and the presence of an anomalous vessel in the right lower lobe which was draining into the IVC (Figures 2, 3).
Magnetic resonance angiography confirmed the presence of isolated infracardiac PAPVR (incomplete Scimitar syndrome). An echocardiogram revealed preserved ejection fraction and grade II diastolic dysfunction. A moderately dilated left atrium was seen without a patent foramen ovale on bubble study. Right atrial enlargement, severe tricuspid regurgitation and a right ventricular systolic pressure of 91 mmHg indicated a high likelihood of severe pulmonary hypertension. Right heart catheterization confirmed the diagnosis of severe pulmonary hypertension with a right atrial pressure of 30 mmHg, right ventricular pressures of 112/20 mmHg, and pulmonary artery pressures of 113/44 mmHg with a mean of 73 mmHg. The pulmonary capillary wedge pressure was 28 mmHg with a transpulmonary gradient of 45 mmHg and a pulmonary vascular resistance of 11.2 Wood units. The ratio of pulmonary blood flow to systemic blood flow was 3.6, confirming the presence of left-to-right shunting. These results were consistent with a mixed etiology of pulmonary hypertension with predominant pulmonary vascular disease.

Differential diagnosis

Other causes of pulmonary hypertension were ruled out by negative serology for hepatitis, autoimmune pathology, markers of inflammation and hypercoagulability. A ventilation-perfusion scan was negative for thromboembolic disease. Pulmonary hypertension associated with Scimitar syndrome would be classified as Group 1 pulmonary hypertension associated with congenital cardiovascular malformations. The interstitial lung disease may have been due to her exposure to irritant fumes during her employment and would cause a component of Group 3 pulmonary hypertension. The patient had a mixed etiology of pulmonary hypertension.

Treatment

Due to the absence of intracardiac shunting, a small number of anomalous veins draining only a small part of the right lung, the absence of associated valvular abnormalities and the lack of any concomitant cardiac or pulmonary disease, medical management was chosen in our patient. The patient was started on nifedipine and low-dose aspirin. Later the patient developed paroxysmal atrial flutter. She was started on apixaban and her nifedipine was switched to diltiazem. The patient was referred to a pulmonary hypertension clinic at a tertiary center.

Discussion

PAPVR is a rare congenital defect in which some of the pulmonary veins drain into the right side of the circulation. The right pulmonary veins are more often affected, for unknown reasons. Four types of PAPVR have been described: supracardiac, infracardiac, cardiac and mixed. Infracardiac PAPVR involves most of the right lung draining into the inferior vena cava just above or below the right hemidiaphragm via the anomalous vein, which appears as a Turkish curved sword on a chest X-ray (the eponymous scimitar). This syndrome, as originally described by Cooper in 1836, involved other coexisting cardiovascular and pulmonary malformations. This is also called the infantile variant of Scimitar syndrome, as the patients present with significant symptoms during infancy. Presence of the Scimitar vein without any other malformation is called incomplete Scimitar syndrome or the adult variant of Scimitar syndrome, with patients presenting after one year of age. Although patients with incomplete Scimitar syndrome develop symptoms later due to small-volume shunting of blood, it is rare to present in the seventies.
Surgical repair is necessary in infantile Scimitar syndrome and is undertaken in most patients presenting in childhood with incomplete Scimitar syndrome. A surgical strategy was not pursued in this case given the low likelihood of benefit: she had already developed severe pulmonary hypertension and did not have other factors that would indicate a potential for benefit from surgical repair (as mentioned in the Treatment section).

Conclusions

Incomplete Scimitar syndrome is a form of infracardiac anomalous pulmonary venous return with the right lung draining into the inferior vena cava without any other cardiovascular malformations. It is a rare congenital anomaly that may present in late adulthood with dyspnea on exertion due to the development of pulmonary hypertension. Medical management of pulmonary hypertension is the mainstay of therapy. Surgical interventions may not be necessary due to the low volume of the blood being shunted.

Additional Information

Disclosures

Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The human platelet antigen-1b (Pro33) variant of αIIbβ3 allosterically shifts the dynamic conformational equilibrium of this integrin toward the active state

Integrins are heterodimeric cell-adhesion receptors comprising α and β subunits that transmit signals allosterically in both directions across the membrane by binding to intra- and extracellular components. The human platelet antigen-1 (HPA-1) polymorphism in αIIbβ3 arises from a Leu → Pro exchange at residue 33 in the genu of the β3 subunit, resulting in Leu33 (HPA-1a) or Pro33 (HPA-1b) isoforms. Although clinical investigations have provided conflicting results, some studies have suggested that Pro33 platelets exhibit increased thrombogenicity. Under flow-dynamic conditions, the Pro33 variant displays prothrombotic properties, characterized by increased platelet adhesion, aggregate/thrombus formation, and outside-in signaling. However, the molecular events underlying this prothrombotic phenotype have remained elusive. As residue 33 is located >80 Å away from extracellular binding sites or transmembrane domains, we hypothesized that the Leu → Pro exchange allosterically shifts the dynamic conformational equilibrium of αIIbβ3 toward an active state. Multiple microsecond-long, all-atom molecular dynamics simulations of the ectodomain of the Leu33 and Pro33 isoforms provided evidence that the Leu → Pro exchange weakens interdomain interactions at the genu and alters the structural dynamics of the integrin to a more unbent and splayed state. Using FRET analysis of fluorescent proteins fused with αIIbβ3 in transfected HEK293 cells, we found that the Pro33 variant in its resting state displays a lower energy transfer than the Leu33 isoform. This finding indicated a larger spatial separation of the cytoplasmic tails in the Pro33 variant. Together, our results indicate that the Leu → Pro exchange allosterically shifts the dynamic conformational equilibrium of αIIbβ3 to a structural state closer to the active one, promoting the fully active state and fostering the prothrombotic phenotype of Pro33 platelets.
Integrins are heterodimeric cell-adhesion receptors formed of α and β subunits. Each subunit is divided into three parts: a large extracellular domain (ectodomain), a single-pass transmembrane domain, and a short cytoplasmic tail connecting the extracellular to the intracellular environment. In addition to their biomechanical role, integrins transmit signals allosterically in both directions across the membrane (termed "outside-in" and "inside-out" signaling) by binding to intra- and extracellular components. In the present study, we focused on αIIbβ3, which is expressed on the platelet surface and essential for platelet aggregation.

The ectodomain can be divided into two parts (Fig. 1). The "head" of the receptor is formed by the β-propeller and βA domains, and the "legs" are formed by the thigh and calf domains (αIIb subunit) as well as EGF domains together with the β-tail domain (β3 subunit). The genu, located between the thigh and calf-1 domains as well as the EGF-1 and EGF-2 domains in the αIIb and β3 subunits, respectively, forms a region of interdomain flexibility. Integrin structural dynamics is characterized by at least three states: a closed, bent, low-affinity state; a closed, extended, low-affinity state; and an open, extended, high-affinity state. Although the magnitude of conformational changes has remained a matter of discussion, the majority of crystal structures of αvβ3, αIIbβ3, and αxβ2 integrins show their ectodomain in a bent conformation. Here, the head is flexed toward the membrane at an angle of 135° relative to the legs, with the genu being the angle's vertex. According to current models, the genu plays a critical role in conformational transitions between the three structural states, as a straightening in the genu region leads to a separation of the head from the legs and thus an unbending of the conformation. This motion is associated with reduced interactions between the two subunits, resulting in a spatial separation ("splaying") of the α- and β-subunit legs.

With respect to our study, the role of the plexin-semaphorin-integrin (PSI) domain, which is part of the β3 genu, is of particular interest in integrin activation. Located 80 Å away from the extracellular binding site and 90 Å away from the membrane (estimated from Protein Data Bank code 3FCS), the domain's involvement in integrin activation has been demonstrated. Specifically, the domain is believed to have a biomechanical role in the allosteric signal transmission across the structure.

The human platelet antigen-1 (HPA-1) polymorphism of the β3 gene of αIIbβ3 arises from a Leu → Pro exchange at residue 33 of the mature β3 subunit, resulting in Leu33 (HPA-1a) or Pro33 (HPA-1b) platelets. This amino acid exchange, located within the PSI domain, leads to an inherited dimorphism that can be of clinical relevance.
For example, the HPA-1b allele was significantly more frequent among young patients with acute coronary syndrome than among age-matched healthy subjects. In the Ludwigshafen Risk and Cardiovascular Health (LURIC) trial, an association study including more than 4,000 individuals, we documented that patients with coronary artery disease (CAD) who are carriers of the HPA-1b allele experience their myocardial infarction 5 years earlier in life than CAD patients who are HPA-1b-negative. In a prospective study on CAD patients undergoing saphenous-vein coronary-artery bypass grafting, we demonstrated that HPA-1b is a hereditary risk factor for bypass occlusion, myocardial infarction, or death after coronary-artery bypass surgery. These results suggest that the Leu → Pro exchange may modulate functional properties of αIIbβ3, resulting in a prothrombotic integrin variant. Prothrombotic properties are also displayed by Pro33 platelets under flow-dynamic conditions. However, the molecular mechanism underlying the suggested prothrombotic phenotype of the Pro33 (HPA-1b) variant has remained elusive.

We hypothesized that the Leu → Pro exchange allosterically shifts the dynamic conformational equilibrium of αIIbβ3 toward an active state. This shift, in turn, would facilitate reaching the fully active state in the presence of integrin ligands. To examine this hypothesis, we performed multiple microsecond-long all-atom molecular dynamics (MD) simulations of the ectodomain and Förster resonance energy transfer (FRET) measurements of αIIbβ3-transfected HEK293 cells expressing either the Leu33 (HPA-1a) or Pro33 (HPA-1b) isoform. Our MD simulations provide evidence that the Leu → Pro exchange weakens interdomain interactions at the genu and alters the structural dynamics of the integrin to a more unbent and splayed state, resulting in overall conformational changes that have been linked to integrin activation. In accord with these results, FRET analyses of αIIbβ3 transfectants reveal that the Pro33 (HPA-1b) variant in the resting state displays a significantly lower energy transfer than the Leu33 (HPA-1a) variant.

Platelet thrombus size in relation to αIIbβ3 HPA-1 isoforms under flow conditions in vitro

Given the prothrombotic phenotype of Pro33 platelets, we initially focused on platelet thrombus formation under arterial flow conditions, comparing Leu33 (HPA-1a) with Pro33 (HPA-1b) platelets. To study the dynamics of platelet thrombus formation, mepacrine-labeled citrated whole blood from healthy volunteers genotyped for HPA-1 of αIIbβ3 and α2C807T of α2β1 (see supporting methods) was perfused at shear rates of 500 s−1 through a flow chamber coated with collagen type I. Image acquisition was achieved by a series of stacks corresponding to confocal sections from the bottom to the apex of forming platelet thrombi. For quantitation of thrombus formation in vitro, we applied a voxel-based method for 3D visualization of real time-resolved volume data using ECCET software (www.eccet.de). As depicted in Fig. 2A, ECCET allows determination of the number, bottom area, height, and volume of single platelet thrombi formed in vitro. Using these tools, we detected that, upon perfusion over 10 min, platelets homozygous for Pro33 (HPA-1b) formed single thrombi that were significantly higher than those of platelets homozygous for Leu33 (HPA-1a) (Fig. 2B).
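A voxel-counting sketch of the per-thrombus measurements just described (volume, bottom area, height), assuming a binary, already-segmented confocal stack and hypothetical voxel dimensions; ECCET's actual segmentation and rendering are considerably more sophisticated, and treating connected components as single thrombi is a simplification.

import numpy as np
from scipy import ndimage

def thrombus_metrics(stack, dz, dy, dx):
    """Per-thrombus volume, bottom area, and height from a binary
    confocal stack with axes (z, y, x); z = 0 is assumed to be the
    chamber floor. Connected components stand in for single thrombi."""
    labels, n = ndimage.label(stack)
    metrics = []
    for i in range(1, n + 1):
        vox = labels == i
        volume = vox.sum() * dz * dy * dx      # voxel count -> physical volume
        bottom_area = vox[0].sum() * dy * dx   # footprint on the chamber floor
        z_extent = np.flatnonzero(vox.any(axis=(1, 2)))
        height = (z_extent.max() - z_extent.min() + 1) * dz
        metrics.append({"volume": volume, "bottom_area": bottom_area,
                        "height": height})
    return metrics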
This difference in mean single thrombus volume was due to an increased thrombus height, whereas the number and bottom area of thrombi did not differ between the isoforms (Fig. 2C). As illustrated in Fig. 2D, with increasing time, the flow path of the perfusion chamber becomes narrowed as the thrombi are growing. Consequently, shear rates gradually increase, and formed platelet thrombi, especially at their apex, are exposed to higher shear than initially applied. Thus, the difference in mean single thrombus volumes between the HPA-1 isoforms can be indicative of a higher thrombus stability of Pro33 (HPA-1b) than Leu33 (HPA-1a) platelets, as reported before.

Figure 2. A rectangular flow chamber coated with collagen type I (3 mg/ml) at the lower surface was perfused with mepacrine-labeled citrated whole blood for 10 min at an initial near-wall shear rate of 500 s−1, simulating arterial flow conditions. Fluorescence signals were detected by confocal laser scanning microscopy, and digital imaging was processed as described under "Experimental procedures". Volumetry of forming platelet thrombi was assessed by real-time 3D visualization. A, a reconstruction of formed platelet thrombi obtained from a stack of 30 images by confocal laser scanning microscopy and subsequent data processing by ECCET. B and C, initial platelet thrombus formation and subsequent thrombus growth were recorded in 25-s intervals for each single thrombus. Addition of abciximab (4 µg/ml), an inhibitory antibody to αIIbβ3, abrogated any platelet thrombus formation. B shows the mean single platelet thrombus volume, and C shows the corresponding thrombus bottom area. D, schematic illustrating the narrowing of the flow path within the perfusion chamber with a resulting increase in shear rates upon apical thrombus growth. Blue diamonds, homozygous Leu33 (HPA-1a) platelets (n = 8); red squares, homozygous Pro33 (HPA-1b) platelets (n = 8); black rectangles, control in the presence of abciximab (n = 2). Error bars indicate mean ± S.E. Asterisks indicate statistical significance (*, p < 0.05).

Structural variability of αIIbβ3 HPA-1 isoforms in MD simulations of the integrin ectodomain

To provide an atomistic view on the effect of the Leu → Pro exchange, the Leu33 (HPA-1a) and Pro33 (HPA-1b) isoforms were investigated by all-atom MD simulations using the respective integrin ectodomains in the bent conformation as starting structures. The quality of the crystal structure used as a starting structure for the Leu33 isoform and to model the Pro33 isoform was validated by MolProbity, yielding a percentile score of 1.70, equal to a 99th percentile rank, where a 0th percentile rank indicates the worst and a 100th percentile rank indicates the best structure among structures with comparable resolution (2.55 Å in the case of Protein Data Bank code 3FCS). For generating the Pro33 isoform from the crystal structure, the Leu33 side chain was mutated using the best rotamer of Pro at this position according to Swiss-PdbViewer. For this structure, the quality of the PSI domain in complex with the EGF-1 and EGF-2 domains (genu of the β3 subunit) was assessed by MolProbity, yielding a percentile score of 1.30, equal to a 98th percentile rank. Three independent MD simulations of 1-µs length each were carried out. The convergence of the MD simulations was tested by computing the root-mean-square deviation (RMSD) average correlation as described previously (Fig. S1).
The root-mean-square average correlation is a measure of the time scales on which structural changes occur in MD simulations. From the bumps in the curves, we can estimate that observed structural changes occur within 50–200 ns. For time intervals >200 ns, the curves are smooth, suggesting that no large structural changes happen during the investigation period. In addition, we analyzed the overlap of histograms of principal component (PC) projections obtained in a pairwise manner from each simulation for a given isoform as a function of time (Fig. S2). The PC analysis was performed on the whole protein after a mass-weighted fitting on the β-propeller and βA domains. The results reveal that, overall, the Kullback-Leibler divergence between histograms of the respective first three PCs becomes small (<0.02) after 600 ns of simulation time, whereas values in the first 100 ns can be as high as 0.1. Hence, the analyses indicate that, in the given simulation times, rather similar conformational spaces were sampled by MD simulations of one isoform. However, in some cases, a small increase in the Kullback-Leibler divergence is observed toward the end of the simulation time; this behavior is not unexpected because the MD simulations were started from bent conformations of the isoforms that can relax to more open conformations. Given that, in the absence of force, the timescale of integrin activation is on the order of 10−3 to 1 s, one cannot expect that the MD simulations are converged with respect to the bent-open conformational equilibrium of αIIbβ3 integrin. In total, differences in structural parameters between both isoforms that we report below relate to differences in the tendency of the ectodomains to go from a bent to an open state. Unless stated otherwise, all results of the MD simulations are expressed as arithmetic means calculated over time, and we considered only uncorrelated instances for S.E. calculations (see "Experimental procedures").

The βA domain contains three metal ion-binding sites (Fig. S3). To assess their structural integrity during the MD simulations, we monitored the time evolution of distances between the SyMBS, MIDAS, and ADMIDAS metal ions and the respective coordinating residues (Table S3 and Fig. S3). The results reveal that during the production runs the distances remain almost unchanged, with S.E. <0.1 Å in almost all cases. Thus, the local geometry of the metal ion-binding sites is well preserved throughout the MD simulations.

The structural similarity of the conformations obtained by MD simulations with respect to the starting structure was explored in terms of the RMSD of Cα atoms after mass-weighted superimpositioning. Similar to our previous MD studies performed on integrin α5β1, the simulations revealed minor structural changes of the single domains, as mirrored by RMSD values that were largely below 3 Å, with the exception of the calf-2 and EGF domains and the β-tail (RMSD up to 5 Å) (Table S4 and Fig. S4). Although the β-tail has been characterized as highly flexible, the larger RMSD of the calf-2 domain, in part, is due to the presence of long flexible loops; furthermore, the larger RMSD may result from simulating the ectodomain only; i.e., at the C-terminal ends of each subunit, the transmembrane domains are missing. As to the EGF domains, visual inspection of the MD trajectories revealed that the larger RMSD resulted in part from motions of the domains relative to each other.
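The superimposition-dependent RMSD analyses described here, including the head-only alignment discussed next, follow a common pattern: derive a rotation from one region (e.g., the β-propeller/βA head), then measure deviations over another (e.g., a leg domain). Below is a plain-numpy sketch using the standard Kabsch algorithm; it omits the mass weighting used in the study, and the index arrays are hypothetical atom selections.

import numpy as np

def region_rmsd(frame, ref, fit_idx, measure_idx):
    """Fit `frame` onto `ref` using only `fit_idx` atoms (Kabsch),
    then report the RMSD over `measure_idx` atoms. Inputs are (N, 3)
    coordinate arrays; fit_idx/measure_idx are integer index arrays."""
    # Derive the optimal rotation/translation from the fit region only.
    mc, rc = frame[fit_idx].mean(0), ref[fit_idx].mean(0)
    H = (frame[fit_idx] - mc).T @ (ref[fit_idx] - rc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    moved = (frame - mc) @ R.T + rc          # apply to the whole structure
    diff = moved[measure_idx] - ref[measure_idx]
    return np.sqrt((diff ** 2).sum(axis=1).mean())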
In contrast, when aligning only the head region, the mean RMSD increased up to 17 Å (Table S5 and Fig. S5), with the highest values found for the calf-2 and β-tail domains of the legs. Hence, these larger structural changes must arise from relative movements of the domains (or subunits) with respect to each other, considering that the single domains were structurally rather invariant. Comparing both isoforms of αIIbβ3, a larger mean RMSD (9.2 ± 0.34 Å) was found for Pro33 than for Leu33 (6.6 ± 0.83 Å) (Fig. 3A). In accord with that, the mean radius of gyration (Rg) of the overall structure was larger for the Pro33 (40.3 ± 0.22 Å) than the Leu33 isoform (39.6 ± 0.09 Å) (Fig. 3A). Taken together, the sampled conformational space of both αIIbβ3 isoforms varied significantly with respect to the difference of the mean values of these structural parameters (Table S6). To conclude, the Pro33 variant displayed significantly larger structural deviations from the bent starting structure and became less compact than the Leu33 isoform during MD simulations.

Conformational changes of the ectodomains of αIIbβ3 HPA-1 isoforms toward a more open, extended conformation

To further characterize the structural differences between the αIIbβ3 isoforms, we monitored geometric parameters along the MD trajectories that have been linked with conformational changes of the ectodomain from an inactive to an active state (Fig. S6 and Table S7). First, we investigated possible variations in the region of the center of helix α1 and the N terminus of helix α7. This region was shown to form a "T-junction" upon activation. We computed the kink angle of helix α1 (Fig. 3B), which revealed a mean value over three MD trajectories that is larger by 15° in the Pro33 (153 ± 1.5°) than in the Leu33 isoform (138 ± 2.8°) (Fig. 3B). Hence, helix α1 straightens more in Pro33 and thus shows a stronger tendency to form the T-junction than in Leu33. The spread in the mean values found for Pro33 (Table S7) resulted from a rapid and pronounced increase of the kink angle, which was initially 143° (calculated from Protein Data Bank code 3FCS), within the first 200 ns in two of the three MD simulations (Fig. S6A). Second, we evaluated the unbending of the structure in terms of the separation of the head region and the terminal part of the legs (calf-2 domain and β-tail) (Fig. 3C). Furthermore, we monitored the spatial separation (splaying) of the integrin's legs (Fig. 3D). Similar parameters were successfully used previously. The bending angle was 7° larger in the Pro33 (49 ± 1.8°) than in the Leu33 isoform (42 ± 1.4°) (Fig. 3C, Fig. S6B, and Table S7). The splaying angle was 3° larger in Pro33 (28 ± 0.5°) than in Leu33 (25 ± 0.2°) (Fig. 3D, Fig. S6C, and Table S7). In the latter case, in two MD simulations, the time evolution of the splaying angle revealed a decrease of 22° within the last 200 ns of the simulation (Fig. S6C). The differences between angles in the Leu33 and Pro33 isoforms were significant in all cases (Table S7). As additional indicators of structural changes, we evaluated the opening of the structure in terms of changes in internal distances between the N and C termini of each subunit and between the C termini of the two subunits (Fig. S6D). All evaluated distances were larger in the Pro33 than the Leu33 isoform, and the differences between respective distances were significant in all cases (Table S8).
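Bending and splaying angles of the kind just reported can be illustrated with a simple three-point angle at a vertex; the exact atom selections and vector definitions used in the study (Fig. S6 and Table S7) are not reproduced in this excerpt, so the centers of mass below are placeholders.

import numpy as np

def interdomain_angle(a, vertex, b):
    """Angle (degrees) at `vertex` formed by points a and b,
    e.g., centers of mass of the head, the genu, and the leg termini."""
    v1, v2 = a - vertex, b - vertex
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Toy centers of mass (coordinates in Angstrom); in practice these would be
# computed per frame from the trajectory.
com_head = np.array([0.0, 40.0, 0.0])
com_genu = np.array([0.0, 0.0, 0.0])
com_leg_termini = np.array([30.0, -30.0, 0.0])
print(interdomain_angle(com_head, com_genu, com_leg_termini))  # 135.0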
To conclude, our results revealed significant differences in the conformational states of both αIIbβ3 isoforms, with the ectodomain of Pro33 displaying a stronger tendency to move toward the extended conformation with more splayed legs.

Experimental evidence for spatial rearrangements of the cytoplasmic tails of αIIbβ3 upon Leu→Pro exchange

To investigate a possible influence of the Leu→Pro exchange on the spatial separation of α and β subunits, we performed FRET acceptor photobleaching (APB) analyses in individual cells transfected with αIIb-mVenus and β3-Leu33-mCherry (HPA-1a) (Table S1) or β3-Pro33-mCherry (HPA-1b) (Table S2) plasmids, respectively. Using FRET, the spatial separation of the subunits is inferred from the amount of energy transferred between the fluorescent proteins mVenus and mCherry attached to the cytoplasmic tails of the subunits. By fluorescence microscopy performed 48 h after transfection, we verified that both subunits were colocalized at the cell membrane (Fig. 4A). Concordant with the presence of the integrin at the plasma membrane, we detected the complete αIIbβ3 receptor (recognized by a complex-specific anti-αIIbβ3 antibody, anti-CD41, clone MEM-06) by flow cytometry. Functional integrity of both integrin isoforms and correct membrane insertion were documented by intact activation of αIIbβ3 in transfected cells upon phorbol 12-myristate 13-acetate-induced stimulation of protein kinase C and specific binding of Alexa Fluor 647-fibrinogen to αIIbβ3 upon inside-out activation. Notably, flow cytometry measurements of CD41 expression upon five independent transfection experiments indicated that the levels of αIIbβ3 expressing either the Leu33 (HPA-1a) or the Pro33 (HPA-1b) isoform did not differ more than 10% from each other (Fig. 4B). Using these transfectants, photobleaching of mCherry at 561 nm on a defined cellular region (region of interest) encompassing part of the cell membrane led to a complete loss of energy transfer and, consequently, to an increase in mVenus fluorescence intensity (Fig. 4C). As a control, cells were transfected with αIIb-mVenus and β3-Leu33 or β3-Pro33 plasmids (without mCherry), a condition that abrogated any energy transfer (data not shown). To focus on non-activated αIIbβ3 transfectants, as evidenced by the absence of binding of Alexa Fluor 647-fibrinogen or PAC1, an activation-dependent anti-αIIbβ3 monoclonal antibody (data not shown), cells were left resting on chamber slides with culture medium for 24 h prior to FRET analyses, all of which were subsequently carried out with minimal manipulation of the cells to prevent any possible cellular activation. FRET-APB analyses were performed in a total of 249 single cells: 91 Leu33 cells, 88 Pro33 cells, 35 Leu33 donor control cells, and 35 Pro33 donor control cells. FRET-APB efficiency was computed according to Equation 2 (see "Experimental procedures" and Refs. 39 and 40). Notably, the FRET-APB efficiency between mVenus and mCherry in Leu33 cells (mean ± S.E., 18.20 ± 0.276) was significantly higher (p < 0.0001) than in HPA-1b cells (15.74 ± 0.395) (Fig. 4D). This difference in energy transfer upon photobleaching of both αIIbβ3 isoforms suggested a larger spatial separation in the Pro33 than in the Leu33 isoform when both isoforms were examined in their bent conformation.
This observation is indicative of a state more prone to activation as a consequence of the Leu→Pro exchange at residue 33 in the ectodomain of the β subunit of αIIbβ3.

Short- and mid-range structural, dynamics, and stability changes induced by the Leu→Pro exchange

The two-dimensional (2D) RMSD of Cα atoms of the EGF-1/EGF-2/EGF-3 domains along the MD trajectories was computed after mass-weighted superimposition onto the respective starting structures of the domains. The 2D RMSD values indicated that the domains showed larger differences from the initial starting structure in the Pro33 than in the Leu33 isoform (see also Table S4) but also that the two isoforms adopted conformational states that largely deviated from each other (RMSD up to 8 Å) (Fig. 5A). Next, we computed the residue-wise root-mean-square fluctuations (RMSFs) of the PSI domain, a measure of atomic mobility, to identify differences in the conformational variations associated with the Leu→Pro exchange. The results revealed a marked increase in atomic mobility for residues Glu29–Pro37 of the loop between strands A and B in the PSI domain (Fig. 5B), with a 0.6-Å difference found at residue 33; outside this loop, the Leu→Pro exchange did not affect the atomic mobility (Fig. 5B).

[Figure 4 legend, displaced here in the source: Of note, the transfectants displayed less than 10% difference in αIIbβ3 expression of either the Leu33 (HPA-1a) or Pro33 (HPA-1b) isoform. Values represent mean fluorescence intensity after staining of the transfectants with APC-conjugated CD41 antibody, a complex-specific anti-αIIbβ3 antibody. C, FRET-APB measurements in a representative HEK293 cell transfected with αIIb-mVenus and β3-Leu33-mCherry plasmids. D, results of FRET efficiency of fused individual Leu33 (HPA-1a) or Pro33 (HPA-1b) cells and respective donor controls. To determine the efficiency of energy transfer, the fluorescence of mVenus was measured in a defined region of the membrane (red circled) before and after photobleaching of mCherry at 561 nm. Details are given under "Experimental procedures." The error bars indicate mean ± S.E.]

Likewise, we did not detect significant differences in the secondary structure propensity of the AB loop residues between the Leu33 and Pro33 isoforms, except for a small decrease of the α-helix propensity in the helix C-terminal to the loop (Fig. S7). To conclude, in both isoforms, the PSI domain did not undergo marked changes in structure (see also Table S4) as a consequence of the polymorphism at residue 33 of the β3 subunit. This was in contrast to the EGF domains, which revealed marked structural changes in Pro33. However, the structural dynamics of the AB loop of the PSI domain increased in the Pro33 variant. As this loop faces the EGF-1 and EGF-2 domains, the Leu→Pro exchange may also impact the structure, interactions, and stability of this interface. Therefore, we monitored the time evolution of the distance between the Cα atoms of residue Leu33 or Pro33 and Ser469 and Gln481 to investigate the level of compactness of the interface between the PSI domain and the EGF-1/EGF-2 domains (Fig. 5C). In the bent conformation of αIIbβ3, the Cα atom at residue 33 is separated by 9.4 and 15.8 Å from the Cα atoms of Ser469 and Gln481 (calculated from Protein Data Bank code 3FCS), respectively. Comparing both isoforms of αIIbβ3, we found a mean value for the Leu/Pro33···Ser469 distance that is smaller by 3.7 Å in Leu33 (8.1 ± 0.40 Å) than in Pro33 (11.8 ± 0.79 Å).
A mean value smaller by 5.8 Å in Leu33 (6.6 ± 0.37 Å) than in Pro33 (12.4 ± 1.02 Å) was found for the Leu/Pro33···Gln481 distance. The differences between distances in the Leu33 and Pro33 isoforms were significant in all cases (Fig. 5D and Table S9). The pronounced decrease from the initial structure observed in the Leu33 isoform (9 Å) for the Leu33···Gln481 distance is in line with the description of a contact area between these two domains in the closed, low-affinity, bent state. This contact is lost in the extended conformation. These results indicated that the interface between the PSI domain and the EGF-1/EGF-2 domains is more tightly packed in the Leu33 than in the Pro33 isoform. In addition, we computed the number of contacts present in the starting structure ("native contacts") and those formed over the course of the MD simulations ("non-native contacts"). Contacts were evaluated between the nine residues of the AB loop and residues of the adjacent EGF-1 and EGF-2 domains, applying a distance cutoff of 7 Å between the side-chain atoms (a minimal sketch of this cutoff-based contact count appears below). In all three MD simulations of the Pro33 variant, the total number of contacts was significantly lower than in the Leu33 isoform (Fig. 6A and Table S10). This difference became even more pronounced when only non-native contacts were considered (2-fold decrease). The same holds true for specific interactions (hydrogen bonds and salt bridges) that were conserved in the Leu33 isoform only (Fig. S8). In the segment connecting the EGF-1 domain with the EGF-2 domain, Gln481 is hydrogen-bonded to Ser469 with a high occupancy (70% along the MD trajectories) and/or with Gln470 (27%). Additional stable intradomain hydrogen bond interactions (60%) were found within the EGF-2 domain that involve Cys492, which also forms a disulfide bridge with Cys473 of the EGF-1 domain (Fig. S8). To conclude, the Leu→Pro exchange leads to a less compact interface between the PSI domain and EGF-1/EGF-2 domains. Moreover, fewer interactions across the interface and within the EGF-1/EGF-2 domains were found in the Pro33 variant compared with the Leu33 isoform.

Changes in structural stability of the EGF domains occur at long range from residue 33

To analyze changes in the structural stability of the interface between the PSI domain and EGF-1/EGF-2 domains resulting from the Leu→Pro exchange, we performed Constraint Network Analysis (CNA) on the β3 leg (hybrid domain/PSI and EGF domains) of both αIIbβ3 isoforms, Leu33 and Pro33. In CNA, a molecular system is represented as a network of nodes (atoms) connected by constraints (non-covalent bonds). This network is analyzed applying rigidity theory, revealing rigid (i.e., structurally stable) clusters and flexible links in between. By rigidity analysis, long-range effects on the stability of distant structural parts due to a local structural change can be detected. Performing a constraint dilution simulation, a stability map, rc_ij (where i and j are residue numbers), is obtained that reports on the hierarchy of structural stability of the molecular system. The difference stability map calculated as Δrc_ij = rc_ij(Leu33) − rc_ij(Pro33) then reports on the influence on structural stability due to the Leu→Pro exchange (blue (red) colors in Fig. 6, B and C, indicate residues that were less stable in the Leu33 (Pro33) isoform, respectively).
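The contact analysis described above reduces to counting residue pairs whose side-chain atoms approach within the 7-Å cutoff. A minimal sketch, assuming side-chain coordinates have already been extracted per residue; this is illustrative only, not the authors' analysis code.

import numpy as np

def count_residue_contacts(loop_residues, egf_residues, cutoff=7.0):
    """Return the set of residue pairs whose side-chain atoms approach
    within `cutoff` angstroms. Each argument is a list of (n_atoms, 3)
    coordinate arrays, one array of side-chain atoms per residue."""
    contacts = set()
    for i, a in enumerate(loop_residues):
        for j, b in enumerate(egf_residues):
            # minimum atom-atom distance between the two residues
            dmin = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).min()
            if dmin <= cutoff:
                contacts.add((i, j))
    return contacts

# Pairs present in the starting structure count as "native"; pairs seen
# only later in the trajectory count as "non-native":
# native = count_residue_contacts(loop0, egf0)
# non_native = count_residue_contacts(loop_t, egf_t) - native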
The AB loop showed a local increase in structural stability, which results from the overconstrained five-membered ring of Pro33 compared with the flexible side chain of Leu33 (Fig. 6, B and C). By contrast, the loop connecting the EGF-1 to the EGF-2 domain and pointing toward the AB loop, which is 15 Å apart from residue 33, became less stable in the Pro33 variant (Fig. 6, B and C; the segment formed by residues Ser469–Gln481 is highlighted).

[Figure 6 legend, displaced here in the source: Changes within the PSI/EGF domain interface and in the structural stability between the Leu33 and Pro33 isoforms. A, shown are the native contacts (left) and non-native contacts (right) formed between the AB loop (PSI domain) and all the side chains located within a distance range of 7 Å. Mean values were computed over three MD simulations of the Leu33 isoform (blue histograms) and Pro33 variant (red histograms). Asterisks denote a significant difference (***, p < 0.0001) between the two isoforms of αIIbβ3. B, difference stability map generated by CNA and averaged over three MD simulations showing the difference in structural stability between both isoforms, focusing on the β3 genu region. The color gradient indicates residues with lower structural stability in the Leu33 (blue) or Pro33 isoform (red). C, enlargements of three areas highlighted within the difference stability map by black boxes (B) and corresponding to the AB loop (PSI domain), residues Ser469–Asp484 (loop connecting the EGF-1 domain to the EGF-2 domain), and residues Gly519–Cys536 (EGF-3 domain), exemplifying changes in structural stability due to the Leu→Pro exchange. The results for the latter two regions are also displayed on the structure of the hybrid (yellow), PSI (green), EGF-1 (firebrick)/EGF-2 (light blue)/EGF-3 (purple) domains of αIIbβ3 (green sphere, Cα atom of residue 33) in terms of lines connecting residues whose mutual stability has decreased in the Pro33 isoform (Δrc_ij ≥ 1.5 kcal mol⁻¹).]

The EGF-3 domain, although not directly interacting with the PSI domain, has been suggested to be important for keeping the integrin in its bent conformation. Residues Gly519–Cys536 of the EGF-3 domain, ~30 Å apart from residue 33, became less structurally stable in the Pro33 variant. To conclude, the Leu→Pro exchange leads to long-range decreases in the structural stability of the EGF domains.

Discussion

In this study, we provide evidence that indicates that the Pro33 variant of αIIbβ3 allosterically shifts the dynamic conformational equilibrium of the integrin toward a more active state. This finding can provide an explanation for the prothrombotic phenotype of Pro33 platelets that has been suggested in several clinical association studies but also in experimental settings. Both clinical and laboratory data regarding a possible impact of the HPA-1 polymorphism of αIIbβ3 on modulating platelet function have been discussed controversially. Specifically, it has been debated whether or not the Leu→Pro exchange at residue 33 of the β3 subunit induces an increased thrombogenicity of Pro33 platelets. We therefore initially studied the dynamics of platelet thrombus formation using a collagen type I matrix in an established perfusion system, simulating arterial flow conditions. Quantitation of thrombus growth in vitro demonstrated that the mean volume of single thrombi formed by Pro33 platelets is significantly higher than that of the Leu33 platelets (Fig. 2).
The initial adhesion of circulating platelets to a collagen matrix is complex, involving platelet capture ("tethering") by immobilized VWF via GPIbα of the platelet GPIb-IX-V complex, subsequent GPIb-IX-V-dependent signaling, and direct interaction with collagen via α2β1 and GPVI, the platelet collagen receptors, inducing platelet activation. To block some of these interactions, we therefore used specific monoclonal antibodies such as LJ-Ib1, which completely inhibits VWF binding to the platelet GPIb-IX-V complex, or 5C4, which blocks the platelet GPVI receptor (data not shown). The expression of α2β1 on the platelet surface is genetically controlled and modulated by nucleotide polymorphisms in the α2 gene. Moreover, because the α2 807TT genotype of α2β1 has also been suggested to be a prothrombotic integrin variant, volunteers for this series of experiments were carefully selected by recruiting only carriers of the α2 807CC genotype. A specific feature of the experiments summarized in Fig. 2 is that the difference in single thrombus volumes between Pro33 and Leu33 platelets is due to differences in apical thrombus growth (Fig. 2B). This is remarkable, especially because apical thrombus segments become exposed to increasing shear over time, exceeding an initial near-wall shear rate of 500 s⁻¹ (Fig. 2D). Our finding is indicative of a higher thrombus stability of Pro33 than Leu33 platelets, as reported before. By contrast, considering the fact that neither the number nor the bottom area of formed thrombi differs between both isoforms of αIIbβ3, it appears rather unlikely that the initial adhesive interactions between the collagen matrix and platelets under flow had a significant effect on the results. Assuming that the difference in thrombus volumes between both αIIbβ3 isoforms is indeed due to increased thrombus stability in the Pro33 variant, it would be an attractive assumption that the Leu→Pro exchange has an impact on the mechanotransduction mediated by the integrin. Such a contention is in line with previous observations documenting a significantly more stable adhesion of Pro33 than of Leu33 platelets onto immobilized fibrinogen at shear rates ranging from 500 to 1,500 s⁻¹. Moreover, it has been shown that the Pro33 variant displays increased outside-in signaling. These findings suggest that the HPA-1 polymorphism of αIIbβ3 modulates the function and activity of the integrin. However, the molecular nature underlying this modulation has remained elusive so far. In this context, a marked concern in the past has been that the point mutation at residue 33 of the β3 subunit is located 80 Å away from relevant functional domains of αIIbβ3, such as extracellular binding sites or transmembrane domains. Conversely, due to its distant location, it appears quite appropriate to exclude that the Leu→Pro exchange would directly influence interactions with ligands at the extracellular or even intracellular binding sites. It is more likely that an increased activity of αIIbβ3 results from a change in the structural dynamics of the integrin. To probe this assumption, we performed microsecond-long MD simulations on the ectodomains of both αIIbβ3 isoforms, Leu33 and Pro33. The ectodomains of either isoform initially only differed in the side chains of residue 33. Ectodomains of integrins have been successfully used by us and others in previous studies as model systems to explore possible influences of structure and solvent on integrin activation.
For the MD simulations, we used established parameterizations for the solvent and the protein, which we had applied successfully in other integrin simulations, although we note that more recent protein force fields have become available. The impact of force field deficiencies on our results is expected to be small due to cancellation of errors when comparatively assessing the two isoforms. Furthermore, we expect the deficiency of the ff99SB force field to destabilize helical structures not to have a decisive influence on our results because the mutation site (residue 33) is located in a loop region. Finally, ff99SB was also shown to have some issues with side-chain torsions. As Leu33 is located at the vertex of the AB loop, with the side chain facing away from the β3 subunit, we do not expect imperfect leucine torsions to impact structural properties markedly, however. The present simulations were started from the bent conformation with closed legs as present in the crystal structure, representing a low-affinity, inactive state of the integrin. As depicted, our simulation findings reveal that the Pro33 variant displays significantly larger structural deviations from the bent starting structure and becomes less compact than the Leu33 isoform (Fig. 3). Furthermore, we evaluated geometric parameters within the βA domain ("T-junction" formation between helices α1 and α7; Fig. 3B) and variables characterizing the bending and splaying of the structure (Fig. 3, C and D), which had been used successfully in related studies to characterize inactive-to-active transitions. The results display significant differences in the conformational states of both isoforms of αIIbβ3, with the ectodomain of the Pro33 variant showing a stronger tendency to move toward an open, extended conformation with more splayed legs than the Leu33 isoform. We performed triplicate MD simulations for both isoforms, which allows probing for the influence of the starting conditions and determining the significance of the computed results by statistical testing and rigorous error estimation. As to the latter, we paid close attention to consider only uncorrelated instances for the S.E. calculations (Equations 1 and 2). The results are consistent across three independent MD simulations for each isoform. This demonstrates the robustness of our approach. We are aware that the magnitudes of the changes of the bending or splaying angles do not correspond to those described for a fully open, extended ectodomain conformation. However, in consideration of the simulation times used here, this finding is in complete accord with the timescale of integrin activation in the absence of biomechanical forces, ranging from microseconds to seconds. As an independent approach to explore the impact of the Leu→Pro exchange on the structural dynamics of full-length αIIbβ3 integrin, FRET measurements on αIIb-mVenus and β3-Leu33-mCherry or β3-Pro33-mCherry transiently cotransfected in HEK293 cells were performed (Fig. 4, A–C). HEK293 cells have previously been shown to be a suitable cellular model for functional studies involving αIIbβ3. The transfectants display a significantly higher efficiency of energy transfer between the α and β subunits in the Leu33 than in the Pro33 isoform. This difference is indicative of a smaller spatial separation between the cytoplasmic tails of the Leu33 isoform in its resting state.
Conversely, the lower energy transfer obtained in the Pro33 variant reflects a larger spatial separation of its cytoplasmic domains that is already present in the resting state (Fig. 4D). A limitation of the FRET method is that it furnishes indirect information. However, the level of evidence is consistent, and the observation is in good agreement with the findings of the MD simulations. A direct study comparing activity and stability of both receptor isoforms using purified protein would provide complementary information about receptor conformations but was beyond the scope of the present work. Taken together, both the MD simulations and FRET experiments reveal structural changes in the ectodomain of αIIbβ3 or the full-length integrin for the Pro33 variant that relate to a conformational change from a closed, bent structural state with closed legs to a more open, extended state with splayed legs. According to current models, such a conformational change is required for integrin activation. Considering that in both the MD simulations and FRET measurements the integrin has been examined in the resting state, our results provide evidence that the Leu→Pro exchange can shift the dynamic conformational equilibrium of αIIbβ3 in such a way that a structural state more similar to the active conformation is present. The effect of the Leu→Pro exchange appears to have some similarity to stimulatory monoclonal antibodies, which have been suggested to shift the dynamic conformational equilibrium in favor of those forms that lead to an increase in the proportion of a high-affinity integrin. As the effect induced by the amino acid substitution manifests in regions far away from the mutation site, the influence of the Leu→Pro exchange must be allosteric. Our results clearly go beyond a previous study that used MD simulations of the β3 subunit only to investigate possible effects of the HPA-1 polymorphism on the structure of the β3 subunit. To explore a possible mechanism of how the Leu→Pro exchange exerts an allosteric effect, applying MD simulations and rigidity analyses, we examined short- and mid-range structural, dynamics, and stability changes in the PSI domain and neighboring domains. Although the overall architecture of the PSI domain remains largely unchanged by the amino acid substitution, particularly the EGF domains show marked structural alterations in the Pro33 variant (Fig. 5). The EGF-1 and EGF-2 domains, although sequentially distant from the mutation located at residue 33, are spatially close to the AB loop of the PSI domain in the bent state, which carries the HPA-1 polymorphism. Parts of the AB loop are markedly more mobile in the Pro33 variant (Fig. 5B). Related to these changes, our analyses reveal that the Leu→Pro exchange leads to a less compact interface between the PSI domain and EGF-1/EGF-2 domains (Fig. 5, C and D). Specifically, fewer native and non-native contacts are formed across the interface and within the EGF-1/EGF-2 domains in the Pro33 variant than in the Leu33 isoform (Fig. 6A). These conformational and dynamic alterations are related to a change in the structural stability of the EGF-1 and EGF-2 domains that percolates from the interface region through these domains (Fig. 6, B and C). Notably, similar changes in these regions have been related to integrin activation before.
For example, the displacement of the PSI domain of about 70 Å, described to act as a mechanical lever upon outside-in signaling, alters the interface formed with the EGF-1 and EGF-2 domains. Furthermore, activating mutations have been identified in the N-terminal part of the PSI domain, the EGF-2 domain, and parts of the EGF-3 domain of αxβ2 integrin. These regions are thought to form the area of contact between α and β subunits in the bent conformation. Finally, when generating an integrin chimera by combining α and β subunits from different species, direct interactions between the subunits could not be formed, and the integrin no longer appeared locked in the closed conformation. The results of this study provide an explanation for the prothrombotic phenotype of the Pro33 variant of αIIbβ3. Specifically, the shift of the dynamic conformational equilibrium of the Pro33 variant toward an active state may promote a fully active state in the presence of immobilized adhesive ligands such as fibrinogen or VWF and, consequently, favor outside-in signaling. This, in turn, may facilitate and accelerate platelet aggregation and subsequent formation of stable platelet thrombi. Thus, our results lend support to previous clinical and experimental findings suggesting that the Leu→Pro exchange confers prothrombotic properties to αIIbβ3.

Experimental procedures

Blood collection

Blood was collected through a 21-gauge needle from 15 healthy, medication-free volunteers into vacutainer tubes (BD Biosciences) containing sodium citrate (0.38%, w/v). The volunteers were recruited by the Düsseldorf University Blood Donation Center. Written informed consent was obtained from the volunteers according to the Helsinki Declaration. The Ethics Committee of the Faculty of Medicine, Heinrich Heine University Düsseldorf, approved the study (study number 1864).

Parallel plate flow chamber

A custom-made rectangular flow chamber was used (flow-channel width, 5 mm; height, 80 µm; length, 40 mm). Glass coverslips forming the lower surface of the chamber were flame-treated, cooled, and coated with 0.04 ml/mm² collagen type S (concentration, 3 mg/ml) containing 95% type I and 5% type III collagen (Roche). The perfusion system was flushed and filled with PBS buffer (pH 7.3) containing 2% BSA to block unspecific adhesion onto the glass slides. A syringe pump (Harvard Apparatus Inc., Holliston, MA) was used to aspirate mepacrine-labeled citrated whole blood through the flow chamber for 10 min at a constant flow rate of 9.6 ml h⁻¹, producing an initial near-wall shear rate of 500 s⁻¹.

Labeling of platelets

Platelets were stained in whole blood by direct incubation with the fluorescent dye mepacrine (quinacrine dihydrochloride; 10 µM final concentration). Although this dye also labels leukocytes, these cells could be readily distinguished from platelets by their relatively large size and sparsity; moreover, leukocyte attachment to the surface tested was negligible under the conditions used. Mepacrine accumulates in the dense granules of platelets and had no effect on normal platelet function at the concentration used. Platelet secretion after adhesion did not prevent their visualization. Furthermore, mepacrine did not affect platelet adhesion or platelet aggregate/thrombus formation.

Picture acquisition and digital image processing

The fluorescence signal of mepacrine-stained platelets was detected by a Zeiss Axiovert 100 M/LSM 510 confocal laser scanning microscope (Jena, Germany). During the flow period of 10 min, 25 stacks of images were acquired.
One stack consisted of 30 slices with a height of 30 µm. Digitized images had a standard size of 512 × 512 pixels and an optical resolution of 1 µm.

Volumetry of single platelet thrombi

The stacks were reconstructed three-dimensionally and analyzed with the custom-made software package ECCET (www.eccet.de). The software integrated the slices of every stack and divided the three-dimensional space into multiple "voxels" (the three-dimensional equivalent of a pixel). All fluorescence signals were smoothed by a separate linear Gaussian filter in all three planes (filter 2). Voxels with a gray value ≥ 10 were marked as thrombus; voxels with lower gray values were disregarded. Thus, background noise of fluorescence signals from adjacent focus planes and single platelets was suppressed. Thrombi were then categorized by volume, and only platelet aggregates exceeding the cutoff volume of 100 µm³ were assessed to avoid interference by non-stationary objects, e.g., moving platelets.

Starting structures for molecular dynamics simulations

The starting structure for MD simulations of αIIbβ3 in the bent, closed form representing the inactive state of the Leu33 isoform was obtained from the coordinates of the X-ray structure of the ectodomain of αIIbβ3 integrin (Protein Data Bank code 3FCS). In the Protein Data Bank entry, the αIIb subunit contains two unresolved regions within the calf-2 domain (residues 764–774 (AB loop) and 840–873 (XY loop)), and the β3 subunit has two unresolved regions within the EGF domains (residues 75–78 and 477–482). Residues unresolved in the αIIb subunit were not included in the starting structures, consistent with our previous studies on integrin. The apparently high flexibility of these residues implies that they will not contribute significantly to stabilizing the bent conformation of the αIIbβ3 integrin. The short regions of unresolved residues of the β3 subunit were modeled and refined using the automatic loop refinement server ModLoop. The structure was finally refined by reverting the engineered residues Cys598 and Cys688 to the natural residues Leu598 and Pro688, respectively. MODELLER version 9.9 was applied, allowing the modeling of the two Cys residues only. The Pro33 variant was obtained by mutating residue Leu33 to Pro33, using Swiss-PdbViewer, without changing the coordinates of any of the other amino acids. As a final step, we capped the charges at the N-terminal residues Glu764 and Gly840 and the C-terminal residues Asp774 and Gln873 using the leap module of Amber 12. All structural ions present in the protein were modeled as Mg²⁺ ions. Integrin sequence numbers used throughout this study are according to UniProt.

Molecular dynamics simulations

Each starting structure of the two HPA-1 isoforms, Leu33 and Pro33, was subjected to three replicates of all-atom MD simulations of 1-µs length each in explicit solvent, summing up to 6 µs of aggregate simulation time for production. MD simulations were performed with the Amber 12 suite of programs using the force field ff99SB, initially described by Cornell et al. and modified according to Simmerling and co-workers. Parameters for the Mg²⁺ ions were taken from Åqvist. The total charge of the system was neutralized by adding eight Na⁺ counter-ions with the leap module of Amber 12, and the solutes were placed into an octahedral periodic box of TIP3P water molecules. The distance between the edges of the water box and the closest atom of the protein was at least 11 Å, resulting in systems of ~200,000 atoms.
The particle mesh Ewald method was used to treat long-range electrostatic interactions, and bond lengths involving bonds to hydrogen atoms were constrained using the SHAKE algorithm. The time step for integrating Newton's equations of motion was 2 fs, with a direct-space, non-bonded cutoff of 8 Å. Applying harmonic restraints with force constants of 5 kcal mol⁻¹ Å⁻² to all solute atoms, MD simulations in the NVT (constant number of particles, volume, and temperature) ensemble were carried out for 50 ps, during which the system was heated from 100 to 300 K. Subsequent MD simulations in the NPT (constant number of particles, pressure, and temperature) ensemble were used for 150 ps to adjust the solvent density. Finally, the force constant of the harmonic restraints on solute atom positions was gradually reduced to zero during 100 ps of NVT MD simulations. Subsequently, we performed a 1-µs unrestrained simulation; the first 200 ns were discarded, and the following 800 ns were used for analysis with the programs ptraj/cpptraj, with conformations extracted every 20 ps. The production MD simulations were performed with the graphics processing unit (GPU) version of the program pmemd.

Analysis of the trajectories

For the analysis of the trajectories, ptraj/cpptraj of the AmberTools suite of programs were applied. For investigating structural deviations along the MD trajectories, the RMSD of all Cα atoms was computed after minimizing the mass-weighted RMSD of the Cα atoms of the βA and β-propeller domains with respect to the starting structure. In addition, to investigate the structural changes of a domain, the Cα atom RMSD of each domain was computed after superimposing the respective domain. To evaluate the level of compactness of the structure, the Rg was calculated with respect to the complete ectodomain. To examine atomic mobility, RMSFs were computed for the backbone atoms of the PSI domain. An analysis of the secondary structure of the PSI domain was carried out to monitor variations in the content of the two helices bordering the AB loop. Structural changes in the ectodomain were characterized as reported previously. The kinking of helix α1 was measured by the three points (center of mass of Cα atoms of Lys112 and Ile118, center of mass of Cα atoms of Gln119 and Lys125, and center of mass of Cα atoms of Leu126 and Leu132). The unbending of the structure was evaluated using the angle formed by the centers of mass of the β-propeller, βA, and PSI domains, and the splaying of the two legs was evaluated using the angle formed by the centers of mass of the calf-2, thigh, and β-tail domains. Changes in the β3 genu region were first quantified by computing the distances between the Cα atom of residue 33 and the Cα atom of Ser469 (EGF-1 domain) and the Cα atom of Gln481 (EGF-2 domain). To identify a network of interactions keeping the interdomain interface stable, a maximal distance of 3.5 Å and a minimal angle of 120° were used as exclusion criteria to identify hydrogen bond formation. The CNA software package was used to provide a link between structure and rigidity/flexibility of the HPA-1 isoforms. To derive information on the effect of Pro33 on a local level, we first generated an ensemble of 400 equally distributed structures from the 200–1,000-ns intervals of each MD simulation, considering only the block of hybrid, PSI, and EGF domains.
Thermal unfolding simulations of the Leu33 and Pro33 isoforms were then carried out with CNA to identify differences in the structural stability within the β3 genu region, following established protocols. For each isoform, we generated three different stability maps and three different neighbor stability maps; from them we calculated the mean values used to build a final stability map and neighbor stability map for Leu33 and Pro33. Finally, a difference stability map was calculated as Δrc_ij = rc_ij(Leu33 isoform) − rc_ij(Pro33 isoform).

Statistical analysis

Results from three independent MD simulations are expressed as arithmetic means ± S.E. calculated over time. The overall S.E. for each simulated system was calculated according to the law of error propagation (Equation 1), where the subscripts i ∈ {1, 2, 3} indicate the three trajectories:

S.E. = √(S.E.₁² + S.E.₂² + S.E.₃²)/3 (Eq. 1)

S.E.ᵢ was computed following Ref. 78 and applying the multiple Bennett acceptance ratio method, which allows detecting the decorrelation time of an investigated variable along each MD simulation. From it, the effective sample size is established, and the S.E.ᵢ is derived. In the case of hydrogen bond and contact analyses, the S.E. is calculated from the S.D. of the three means of the three MD simulations according to Equation 2, assuming that the three MD simulations are statistically independent:

S.E. = S.D./√3 (Eq. 2)

Differences between mean values are considered statistically significant if p < 0.05 and p < 0.001 (indicated as * and **, respectively, in figures and tables) and highly statistically significant if p < 0.0001 (indicated as ***) according to Student's t test for parametric testing. The statistical analysis was performed using R software and the pymbar module for the multiple Bennett acceptance ratio. The FRET efficiency results obtained from the FRET-APB experiments are expressed as means ± S.E. For statistical analysis, the unpaired t test was applied using GraphPad Prism version 6.00 for Windows (GraphPad Software, La Jolla, CA).

Figure preparation

The crystal structure of the αIIbβ3 integrin (Protein Data Bank code 3FCS) was used to represent the protein together with conformations extracted from the MD trajectories. PyMOL was used to generate molecular figures, and graphs were prepared using Gnuplot.

Live-cell imaging of αIIbβ3-transfected HEK293 cells expressing either isoform Leu33 or Pro33

Live-cell imaging was performed to examine the cellular distribution of αIIbβ3-transfected HEK293 cells expressing either isoform Leu33 or Pro33. 24 h after transfection, 3.7 × 10⁴ cells in complete culture medium were allowed to settle for a further 24 h in individual chambers of a µ-slide 4-well ibiTreat chamber slide (Ibidi, Martinsried, Germany) previously coated with 50 µg/ml fibrinogen from human plasma (Sigma-Aldrich) in PBS without Ca²⁺ and Mg²⁺ for 1 h at 37 °C. Live-cell imaging was performed with an Axiovert S100 inverted fluorescence microscope (Zeiss) equipped with a 12.0 Monochrome without IR-18 monochromatic camera (Diagnostic Instruments, Inc., Sterling Heights, MI) and an LEJ EBQ 100 isolated lamp (Leistungselektronik Jena GmbH, Jena, Germany). Images were obtained with a 63× oil immersion objective lens using a 5,000-ms exposure time for mVenus, 100 ms for mCherry, and 300 ms for brightfield. Image acquisition was performed with MetaMorph software (version 7.7.7.0).
Background subtraction and image processing were performed using Adobe Photoshop CS3 software (Adobe, San Jose, CA).

Flow cytometry

Transfected cells at 70–80% confluence were harvested 24, 48, and 72 h after transfection. Subsequently, cells were pelleted by centrifugation at 400 × g for 7 min and resuspended in 100 µl of Dulbecco's phosphate-buffered saline (PBS). Staining with allophycocyanin (APC)-conjugated mouse anti-human CD41 monoclonal antibody (clone MEM-06; Exbio, Praha, Czech Republic; 0.15 µg/ml) was performed for 30 min at room temperature, protected from light. After staining, cells were washed once in Dulbecco's PBS and analyzed on a FACS Canto II flow cytometer (BD Biosciences) equipped with 488 and 633 nm lasers for excitation and FITC, phycoerythrin, and APC filters for detection of mVenus, mCherry, and APC, respectively. The collected data were analyzed with FACSDiva software version 6.1.3 (BD Biosciences). PAC1 was obtained from BD Biosciences, and Alexa Fluor 647-fibrinogen was from Thermo Fisher Scientific (Dreieich, Germany).

FRET measurements using APB

24 h after transfection, cells were harvested and seeded in a µ-slide 8-well ibiTreat chamber slide (Ibidi). Subsequently, 24 h later (48 h after transfection) and before measuring FRET efficiency, the culture medium was substituted by identical medium but containing phenol red-free FluoroBrite™ DMEM (Thermo Fisher, formerly Life Technologies). Live cells were examined with an LSM 780 (Zeiss) inverted microscope equipped with a C-Apochromat 40×/1.20 W Corr (from "correction ring") M27 water-immersion objective lens, an AxioCam camera, and an HPX 120C lamp. FRET acceptor photobleaching experiments, including image acquisition, definition of regions of interest for bleaching, and data readout, were performed using the LSM software package ZEN 2012 (Zeiss). The chamber slide containing the live cells was mounted on a heating frame within a large incubation chamber (PeCon, Erbach, Germany) set to 37 °C. mVenus was excited with the 488 nm line of an argon multiline laser and detected between 513 and 558 nm using a gallium arsenide phosphide detector, whereas mCherry was excited at 561 nm using a diode-pumped solid-state laser and detected between 599 and 696 nm. The beam splitter was MBS 488/561/633. In total, a time series of 20 frames (128 × 128 pixels; pixel size, 0.33 µm) at a pixel time of 2 µs/pixel was acquired for each FRET experiment. The entire measurement, including bleaching of mCherry, was finished within 3.5 s. After the fifth frame, an area corresponding to half of a cell, with a constant dimension of 42 × 42 pixels (region of interest), was bleached by 30 iterations of the mCherry excitation wavelength (561 nm) using 100% laser power. After bleaching, 15 additional frames were recorded. The mean intensity of mVenus fluorescence at the cell membrane within the bleached area was extracted and analyzed according to the equation E = (I_after − I_before)/I_after, where I_before (intensity of mVenus before bleaching) and I_after (intensity of mVenus after bleaching) correspond to the mean intensity values of mVenus fluorescence of five images before and after bleaching within the bleached area at the cell membrane.
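As a minimal illustration of the acceptor-photobleaching calculation above (the efficiency formula follows from the stated definitions of I_before and I_after; the intensity values below are hypothetical, not measured data):

import numpy as np

def fret_apb_efficiency(pre_frames, post_frames):
    """Acceptor-photobleaching FRET efficiency,
    E = (I_after - I_before) / I_after, from mean donor (mVenus) ROI
    intensities over the five frames before and after bleaching."""
    i_before = np.mean(pre_frames)
    i_after = np.mean(post_frames)
    return (i_after - i_before) / i_after

# Hypothetical ROI intensities (arbitrary units), five frames each:
print(100 * fret_apb_efficiency([80, 82, 81, 79, 80],
                                [98, 99, 97, 98, 99]))  # ~18.1 (percent)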
from re import sub, search, DOTALL, IGNORECASE, UNICODE
from random import shuffle

from bs4 import BeautifulSoup

from base import Target
from fetchers import Proxy


class FamousBirthdays(Target):
    MAIN_URL = 'https://www.famousbirthdays.com/'
    PERSONS_SLICE = 11
    MONTH_DAY_MAP = {
        'january': 31,
        'february': 29,
        'march': 31,
        'april': 30,
        'may': 31,
        'june': 30,
        'july': 31,
        'august': 31,
        'september': 30,
        'october': 31,
        'november': 30,
        'december': 31,
    }
    DEFAULT_LIMIT = None
    MIN_RANK = 20000

    def __init__(self, image, wiki, logger, keepers, limit):
        super().__init__(image, wiki, logger, keepers, limit)
        self._fetcher = Proxy(logger)

    def _get_items(self):
        items = []
        for month, days in self.MONTH_DAY_MAP.items():
            # range() excludes its upper bound, so use days + 1 to cover
            # the last day of each month
            for day in range(1, days + 1):
                query = f'{self.MAIN_URL}{month}{day}.html'
                response = self._dead_fetch(query)
                response = BeautifulSoup(response.content, 'lxml')
                persons = response.find_all('a', {'class': 'person-item'})
                for list_item in persons[:self.PERSONS_SLICE]:
                    url = list_item['href']
                    title = list_item.div.div.string
                    if ',' in title:
                        title = title.split(',')[0].strip()
                    else:
                        # strip a trailing parenthesized qualifier, e.g. "Name (Actor)"
                        title = sub(
                            r'(.*) \(.*\)',
                            r'\1',
                            title,
                            flags=DOTALL | IGNORECASE | UNICODE,
                        ).strip()
                    subtitle = list_item.find('div', {'class': 'hidden-xs'})
                    if subtitle is not None:
                        subtitle = subtitle.string.strip()
                        title = f'{title} ({subtitle})'
                    items.append({
                        'url': url,
                        'title': title,
                        'category': 'Person',
                    })
        shuffle(items)
        return items[:self._limit]

    def _from_target(self, url, title):
        response = self._dead_fetch(url)
        response = BeautifulSoup(response.content, 'lxml')
        rank = response.find('div', {'class': 'rank-no'})
        if rank is None or int(rank.text[1:]) > self.MIN_RANK:
            # avoid dereferencing rank when the element is missing
            rank_text = 'missing' if rank is None else rank.text[1:]
            self._logger.warning(f'target option rank too low: {rank_text}')
            return None
        bio = response.find('div', {'class': 'bio'})
        description = bio.text
        return {
            'name': title,
            'description': description,
            'link': url,
            'source': 'famousbirthdays',
        }

    def _fix_option(self, option):
        # use equality, not identity, when comparing against '' (the original
        # `is ''` checks relied on undefined interning behavior)
        if option is None or \
                option['name'] is None or \
                option['name'] == '' or \
                option['description'] is None or \
                option['description'] == '':
            return None
        if search(r'musical\.ly', option['name'], IGNORECASE):
            self._logger.warning('target skipping musical.ly')
            return None
        if option['source'] == 'famousbirthdays':
            option['description'] = sub(
                r'^About\s',
                '',
                option['description'],
            )
        return super()._fix_option(option)
EMPIRICAL STRATEGY: Antibiotic therapy of upper and lower respiratory tract infections is based on an empirical strategy. However, arguments favoring the probability of a given bacterium may be lacking and, since resistance of Streptococcus pneumoniae and Haemophilus influenzae against conventional antibiotics is becoming increasingly frequent, therapeutic strategies must be revisited. SINUSITIS: H. influenzae and S. pneumoniae are the most frequent causal agents in acute maxillary sinusitis. For chronic sinusitis, beta-lactamase-producing anaerobic bacteria, S. aureus, and penicillin-resistant pneumococci and H. influenzae must also be considered. ACUTE EXACERBATIONS OF CHRONIC BRONCHITIS: The main causal agents are H. influenzae and S. pneumoniae, followed by M. catarrhalis, S. aureus, enterobacteriaceae, and beta-hemolytic streptococci. COMMUNITY-ACQUIRED PNEUMONIA: There is a wide range of pathogens, of which half are identified in different studies. RESISTANCES: For pneumococci, penicillin resistance is currently evidenced in 48% of strains. For H. influenzae, 30% of strains are ampicillin-resistant.
Nutritional composition in leaves of some mulberry varieties: a comparative study

The silkworm derives its nutritional requirements from mulberry leaves for growth and development, so variations in the components of mulberry leaves may influence the growth and development of the silkworm. The nutritional parameters of mulberry leaves and silk production are directly proportional to each other. The nutritional composition of mulberry leaves varies depending on the soil and other environmental factors of the locality, as well as on genotypic characteristics. An effort has been made to determine, identify, and analyse the nutritional composition of different varieties of mulberry leaves, namely S-13, S-146, S-1635, S-1, AR-12, AR-14, S-36, S-54, TR-10, and BR-2, mostly found in the Lucknow region of Uttar Pradesh, India. The current paper deals with quantification of the protein, carbohydrate, total carotenoid, chlorophyll a, chlorophyll b, and total chlorophyll contents of different varieties of mulberry grown in this region. The quantitative determination of nutritional composition indicates that the BR-2 variety contains a significantly higher concentration of protein (0.308 mg/ml) and carotenoids (0.0334 mg/gm), along with the S-36 variety containing significantly higher quantities of protein (0.3029 mg/ml), chlorophyll a (0.0858 mg/gm), chlorophyll b (0.0329 mg/gm), total chlorophyll (0.1034 mg/gm), and total carotenoids (0.0479 mg/gm). The carbohydrate content of the S-1635 variety (0.4341 mg/ml) was found to be significantly the highest compared with the other varieties. The statistical analyses showed that all the nutritional parameters have highly significant differences between them (p < 0.01). S-36 and BR-2 are therefore regarded as suitable varieties, with consistent performance over many characters.
Presence of Poly(A) Tails at the 3'-Termini of Some mRNAs of a Double-Stranded RNA Virus, Southern Rice Black-Streaked Dwarf Virus

Southern rice black-streaked dwarf virus (SRBSDV), a new member of the genus Fijivirus, is a double-stranded RNA virus known to lack poly(A) tails. We now show that some SRBSDV mRNAs are indeed polyadenylated at the 3' terminus in plant hosts, and we investigated the nature of these 3' poly(A) tails. The non-abundant presence of SRBSDV mRNAs bearing polyadenylate tails suggests that these viral RNAs are subjected to polyadenylation-stimulated degradation. The discovery of poly(A) tails in different families of viruses implies a potentially wide occurrence of polyadenylation-assisted RNA degradation in viruses.

Introduction

RNAs of many eukaryotic viruses, ranging from DNA to RNA viruses, have 3' poly(A) tails, which are synthesized not only posttranscriptionally but also by direct transcription from a poly(U)-stretched template strand. Regardless of the synthesis mechanism used, the viral poly(A) tails have been considered to play crucial roles in RNA stability and translation, resembling the roles of the stable poly(A) tails in eukaryotic mRNA. Only recently was a function of poly(A) tails in destabilizing viral RNA revealed. Viral mRNAs containing poly(A) or poly(A)-rich tails were detected in HeLa cells infected with Vaccinia virus (a double-stranded DNA virus). Furthermore, polyadenylate tails were also found in Tobacco mosaic virus (TMV), Cucumber mosaic virus (CMV), Odontoglossum ring-spot virus (ORSV), Cucumber green mottle mosaic virus (CGMMV), Tobacco rattle virus (TRV), Turnip crinkle virus (TCV) and Tobacco necrosis virus (TNV), seven positive-strand RNA viruses known to lack poly(A) tails and to terminate their 3'-termini with a tRNA-like structure (TLS) or a non-TLS heteropolymeric sequence. The presence of poly(A) tails suggests that these viral RNAs are subjected to poly(A)-stimulated degradation. In this paper, poly(A) and poly(A)-rich tails are reported for the first time at the 3'-termini of the mRNAs of a dsRNA virus, Southern rice black-streaked dwarf virus (SRBSDV), generally recognized to lack poly(A) tails. SRBSDV has been proposed as a new member of the genus Fijivirus in the family Reoviridae; it has caused a serious rice disease in South China and Vietnam in recent years. SRBSDV is most closely related to, but distinct from, Rice black-streaked dwarf virus (RBSDV), which is also a member of the Fijivirus genus. The SRBSDV genome contains 10 segments, named S1-S10 in descending order of molecular weight. Comparison of the 10 genomic segments of SRBSDV with their counterparts in RBSDV suggests that SRBSDV encodes 13 open reading frames (ORFs) and possesses 6 putative structural proteins (P1, P2, P3, P4, P8, and P10) and 7 putative nonstructural proteins (P5-1, P5-2, P6, P7-1, P7-2, P9-1 and P9-2). At present, the functions of only some of these genes have been studied. P6, encoded by S6, has been identified as an RNA silencing suppressor. P7-1 induces the formation of tubules as vehicles for rapid spread of virions through the basal lamina from the midgut epithelium in its vector, the white-backed planthopper. P9-1 is essential for viroplasm formation and viral replication in non-host insect cells and vector insects. However, no reports are available to date to assign functions to the proteins encoded by the other ORFs. The putative functions of these proteins can only be postulated based on their RBSDV homologs.
P1, P2, P3 and P4 are the putative RNA-dependent RNA polymerase (RdRp), core protein, capping enzyme and outer-shell B-spike protein, respectively. P8 and P10 are the putative core and major outer capsid proteins, respectively. SRBSDV mRNAs were considered to lack poly(A) tails at the 3'-ends. However, in previous experiments, all 13 ORFs of the 10 RNA segments could be amplified via RT-PCR using oligo(dT)18-primed cDNA as templates, suggesting that each SRBSDV mRNA might bear a potential poly(A) tail at the 3' terminus. In this paper, we confirmed that some SRBSDV mRNAs are indeed polyadenylated at the 3' terminus in plant hosts.

Virus and RNA Extraction

The SRBSDV isolate used in the experiment was obtained from rice and maize plants showing typical dwarf symptoms with white waxy galls in 2014 in 8 counties of 4 provinces in China, including Yunnan, Guizhou, Hunan, and Jiangxi provinces. Total RNA from infected rice and maize leaf and stem tissue was extracted following the standard protocol of the TRIzol reagent (Invitrogen, Carlsbad, CA, USA). The isolate was identified as SRBSDV, excluding RBSDV, by reverse transcription-PCR (RT-PCR) using primers specific for distinguishing the two viruses.

Rapid Amplification of cDNA Ends (RACE) PCR

To characterize the polyadenylate tails associated with the viral mRNAs, 3' Rapid Amplification of cDNA Ends (RACE) PCR was performed using the BD SMART™ RACE cDNA Amplification Kit (TaKaRa, Dalian, Liaoning, China). In this case, reverse transcription reactions were performed using total RNA (respectively from infected rice and maize) as templates and an adapter-oligo(dT) primer (P1) (Table 1) to prime first-strand cDNA synthesis. Ten specific upstream primers and 10 nested primers, respectively corresponding to each SRBSDV mRNA, were designed according to the sequence information of the China isolate HuNyy (GenBank No. JQ034348–JQ034357) (Table 1). Each upstream primer was paired with adapter primer P2 (as the downstream primer) for the first PCR amplification using PrimeSTAR HS DNA polymerase (TaKaRa) and cDNA as template. The products from the first PCR reaction were subjected to a second PCR run with the nested primers and adapter primer P3 (Figure 1A). The amplified products were analyzed by 1.5% agarose gel electrophoresis, and the resulting bands, in agreement with the predicted sizes, were individually cloned into the pGEM-T Easy vector (Promega, Madison, USA) and subjected to sequence analysis. Approximately 5–10 clones from each isolate were randomly selected and sequenced.

Results and Discussion

After 3' RACE, the 3'-terminal sequences of the viral mRNAs were obtained, and the results indicated that SRBSDV mRNAs indeed possessed poly(A) or poly(A)-rich tails in plant hosts. Taking S10-mRNA as an example to analyze the nature of the poly(A) and poly(A)-rich tails, a total of 42 polyadenylated viral mRNA molecules were cloned from rice and maize plants. In addition to 10 mRNAs bearing poly(A) tails exclusively comprised of adenosines, a large number of mRNAs possessed poly(A)-rich tails (Figure 1B). Notably, the heterogeneity of these poly(A)-rich tails was confined to their 5' ends, and they all terminated in homogeneous adenosines (17–23 nt) (Figure 1B), which was possibly due to the 3' bias of oligo(dT)-dependent reverse transcription. Most poly(A)-rich tails were not appended downstream of the entire 3' untranslated region (UTR) of S10-mRNA, but instead replaced part of the 3' UTR sequence.
For example, the tail of isolate LX-1 replaced the 3' UTR sequence of S10-mRNA from nucleotide 1753 (Figure 1B). In some poly(A)-rich tails (isolates JH-1, LX-1, PT-1, PT-5, YJ-1 and YJ-4), additional non-viral nucleotides (35–208 nt) preceded the polyadenylates; these were considered to originate from the host plants. To further confirm the presence of poly(A) tails and exclude non-specificity of the reverse transcription reaction, these non-viral nucleotides were used to design downstream primers (e.g., S10-NVP) for PCR with an upstream primer from S10 (Figure 1A), and the result of the amplification was positive (data not shown), sufficiently indicating the existence of mRNAs bearing polyadenylate tails. Moreover, poly(A) or poly(A)-rich tails were also discovered at the 3'-ends of the viral S1–S9 mRNAs (Figure 2). All amplified products based on 3' RACE were weak (data not shown), implying that only a small fraction of SRBSDV mRNAs was polyadenylated.

[Figure 1 legend fragment, displaced here in the source: primers are listed in Table 1; the gray box, black box and red box indicate, respectively, the partial ORF, 3' UTR and non-viral nucleotides in S10-mRNA.]

To our knowledge, dsRNA viruses lack poly(A) tails at the 3'-ends of their genome segments and mRNAs. Interestingly, in this paper, we demonstrated that some viral mRNA molecules were polyadenylated at their 3'-termini in plant cells infected with SRBSDV (a dsRNA virus). Besides their crucial roles in mRNA stability and translation efficiency, polyadenylate tails were recently described as involved in viral RNA degradation. Poly(A)-stimulated RNA degradation occurs throughout prokaryotic and eukaryotic cells. Generally, the degradation process comprises three sequential steps: endonucleolytic cleavage, addition of polyadenylate tails to the cleavage products, and exonucleolytic degradation. The transient poly(A) or poly(A)-rich stretches can act as landing sites to recruit 3'-5' exoribonucleases for further degradation, which might be one of the ancestral roles of polyadenylation. This evolutionarily conserved mechanism has been confirmed to play critical roles in rapidly removing redundant RNAs in cells, thereby maintaining the stability of gene expression. In this study, the non-abundant presence of SRBSDV mRNAs bearing polyadenylate tails was considered to represent degradation intermediates of an RNA decay pathway, rather than to convey protection to the mRNAs. Recently, a dsDNA virus, Vaccinia virus, was linked with this conserved RNA degradation mechanism, and non-abundant, fragmented viral mRNAs bearing poly(A) or poly(A)-rich tails were detected in human cells infected with this virus. Such polyadenylation-stimulated RNA degradation was also found in seven positive-strand RNA viruses from distinct virus families and genera known to lack poly(A) tails. The discovery of poly(A) tails in three different types of viruses (positive-strand RNA virus, dsDNA and dsRNA virus) implies a potentially wide occurrence of polyadenylation-assisted RNA degradation in viruses, which might represent a yet-unknown interaction between virus and host.
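As an illustration of how the tail classification described above could be screened programmatically, a minimal sketch follows; the 17-nt threshold mirrors the observed 17–23-nt terminal adenosine runs, and the example sequences are hypothetical, not cloned tails.

import re

def classify_tail(tail_seq, min_a_run=17):
    """Classify the non-viral tail of a 3' RACE clone.

    Returns 'poly(A)' if the tail is adenosines only, 'poly(A)-rich' if a
    heterogeneous stretch precedes a terminal run of >= min_a_run
    adenosines, and None otherwise."""
    m = re.search('A{%d,}$' % min_a_run, tail_seq)
    if m is None:
        return None
    return 'poly(A)' if m.start() == 0 else 'poly(A)-rich'

print(classify_tail('A' * 20))             # -> poly(A)
print(classify_tail('GCTTAG' + 'A' * 18))  # -> poly(A)-rich (hypothetical tail)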
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { NgbModule } from '@ng-bootstrap/ng-bootstrap'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { WINDOW_PROVIDERS } from './services/window/window.service'; import { AboutComponent } from './about/about.component'; import { ContactComponent } from './contact/contact.component'; import { DashboardComponent } from './dashboard/dashboard.component'; import { TwoWayBindingExampleDirective } from './twowaybindingexample.directive'; import { BlogServicesModule } from './modules/blog-services/blog-services.module'; @NgModule({ declarations: [ AppComponent, AboutComponent, ContactComponent, DashboardComponent, TwoWayBindingExampleDirective ], imports: [ BrowserModule, AppRoutingModule, NgbModule, BlogServicesModule.forRoot({ mock: true }) ], providers: [WINDOW_PROVIDERS], bootstrap: [AppComponent] }) export class AppModule {}
def init(self): self.acknowledgements = ''.join(( 'The original DINEOF model code may be found at ', 'http://modb.oce.ulg.ac.be/mediawiki/index.php/DINEOF. ', 'pyDINEOFs is stored online in a private repository at ', 'https://github.com/PhotonAudioLab/pyDINEOF')) self.references = ''.join(( 'J.-M. Beckers and M. Rixen. EOF calculations and data filling from ', 'incomplete oceanographic data sets. Journal of Atmospheric and ', 'Oceanic Technology, 20(12):1839-1856, 2003')) logger.info(self.acknowledgements) return
The H1 First Level Fast Track Trigger The HERA collider has recently been upgraded to deliver a factor of approximately 5 more luminosity. In parallel, upgrades to the H1 detector and trigger have been necessary. One of these projects was the design of an improved Fast Track Trigger (FTT), which is to replace the old track trigger and is based on the central drift chambers of the H1 experiment. This work describes the design and expected performance of the FTT. Special emphasis is put on the design of a fast pipelined track segment finding algorithm, which is the heart of the first level of the trigger. With the FTT designed to perform online particle identification for the first time at H1, triggering on exclusive final state particles becomes possible. The expected performance for exclusive vector meson production is investigated here. It is shown that the FTT is able to reconstruct diffractive J/ψ events in photoproduction with an efficiency of 94% at acceptable trigger rates. Diffractive electroproduction events with a four-momentum transfer squared of t = −15 GeV² can be triggered with an efficiency of 72% at acceptable trigger rates. Finally, the expected trigger performance for charged current events is investigated. It is shown that the use of additional FTT tracking information allows a rate reduction by a factor of 5 for the present charged current triggers if a minor, acceptable loss in efficiency with respect to the HERA I efficiency is taken into account. It is further shown that a re-designed trigger built specifically to select charged current events with a low missing transverse momentum between 12 and 15 GeV is possible at an acceptable trigger rate.
package test import ( "net/http" "strings" "github.com/onsi/gomega" gock "gopkg.in/h2non/gock.v1" ) // EnsureGockRequestsHaveBeenMatched checks if all requests have been matched in the test func EnsureGockRequestsHaveBeenMatched() { gomega.Expect(gock.GetUnmatchedRequests()).To(gomega.BeEmpty(), "Have no unmatched requests") } // NonExistingRawGitHubFiles mocks any matching path suffix when calling "https://raw.githubusercontent.com" with 404 response func NonExistingRawGitHubFiles(pathSuffixes ...string) { for _, pathSuffix := range pathSuffixes { gock.New("https://raw.githubusercontent.com"). SetMatcher(fileRequested(pathSuffix)). Reply(404) } } func fileRequested(pathSuffix string) gock.Matcher { matcher := gock.NewBasicMatcher() matcher.Add(func(req *http.Request, _ *gock.Request) (bool, error) { // nolint:unparam return strings.HasSuffix(req.URL.Path, pathSuffix), nil }) return matcher } // SpyOnCalls checks the number of calls func SpyOnCalls(counter *int) gock.Matcher { matcher := gock.NewBasicMatcher() matcher.Add(func(_ *http.Request, _ *gock.Request) (bool, error) { // nolint:unparam *counter++ return true, nil }) return matcher }
from app.player.player_repository import AbstractPlayerRepository, PlayerRepository from app.player.player_service import PlayerService def get_player_repository() -> AbstractPlayerRepository: return PlayerRepository() def get_player_service() -> PlayerService: player_repository = get_player_repository() player_service = PlayerService(player_repository=player_repository) return player_service
//! This internal module consists of helper types and functions for dealing
//! with setting the file times (mainly in `path_filestat_set_times` syscall for now).
//!
//! The vast majority of the code contained within and in platform-specific implementations
//! (`super::linux::filetime` and `super::bsd::filetime`) is based on the [filetime] crate.
//! Kudos @alexcrichton!
//!
//! [filetime]: https://github.com/alexcrichton/filetime
use std::fs::{self, File};
use std::io;

cfg_if::cfg_if! {
    if #[cfg(target_os = "linux")] {
        pub(crate) use super::linux::filetime::*;
    } else if #[cfg(any(
        target_os = "macos",
        target_os = "netbsd",
        target_os = "freebsd",
        target_os = "openbsd",
        target_os = "ios",
        target_os = "dragonfly"
    ))] {
        pub(crate) use super::bsd::filetime::*;
    }
}

/// A wrapper `enum` around `filetime::FileTime` struct, but unlike the original, this
/// type allows the possibility of specifying `FileTime::Now` as a valid enumeration which,
/// in turn, if `utimensat` is available on the host, will use a special const setting
/// `UTIME_NOW`.
#[derive(Debug, Copy, Clone)]
pub(crate) enum FileTime {
    Now,
    Omit,
    FileTime(filetime::FileTime),
}

/// For a provided pair of access and modified `FileTime`s, converts the input to
/// `filetime::FileTime` used later in `utimensat` function. For variants `FileTime::Now`
/// and `FileTime::Omit`, this function will make two syscalls: either accessing current
/// system time, or accessing the file's metadata.
///
/// The original implementation can be found here: [filetime::unix::get_times].
///
/// [filetime::unix::get_times]: https://github.com/alexcrichton/filetime/blob/master/src/unix/utimes.rs#L42
fn get_times(
    atime: FileTime,
    mtime: FileTime,
    current: impl Fn() -> io::Result<fs::Metadata>,
) -> io::Result<(filetime::FileTime, filetime::FileTime)> {
    use std::time::SystemTime;

    let atime = match atime {
        FileTime::Now => {
            let time = SystemTime::now();
            filetime::FileTime::from_system_time(time)
        }
        FileTime::Omit => {
            let meta = current()?;
            filetime::FileTime::from_last_access_time(&meta)
        }
        FileTime::FileTime(ft) => ft,
    };

    let mtime = match mtime {
        FileTime::Now => {
            let time = SystemTime::now();
            filetime::FileTime::from_system_time(time)
        }
        FileTime::Omit => {
            let meta = current()?;
            filetime::FileTime::from_last_modification_time(&meta)
        }
        FileTime::FileTime(ft) => ft,
    };

    Ok((atime, mtime))
}

/// Combines `openat` with `utimes` to emulate `utimensat` on platforms where it is
/// not available. The logic for setting file times is based on [filetime::unix::set_file_handles_times].
///
/// [filetime::unix::set_file_handles_times]: https://github.com/alexcrichton/filetime/blob/master/src/unix/utimes.rs#L24
pub(crate) fn utimesat(
    dirfd: &File,
    path: &str,
    atime: FileTime,
    mtime: FileTime,
    symlink_nofollow: bool,
) -> io::Result<()> {
    use std::ffi::CString;
    use std::os::unix::prelude::*;
    // emulate *at syscall by reading the path from a combination of
    // (fd, path)
    let p = CString::new(path.as_bytes())?;
    let mut flags = libc::O_RDWR;
    if symlink_nofollow {
        flags |= libc::O_NOFOLLOW;
    }
    let fd = unsafe { libc::openat(dirfd.as_raw_fd(), p.as_ptr(), flags) };
    if fd < 0 {
        // surface openat failures instead of wrapping an invalid descriptor
        return Err(io::Error::last_os_error());
    }
    let f = unsafe { File::from_raw_fd(fd) };
    let (atime, mtime) = get_times(atime, mtime, || f.metadata())?;
    let times = [to_timeval(atime), to_timeval(mtime)];
    let rc = unsafe { libc::futimes(f.as_raw_fd(), times.as_ptr()) };
    return if rc == 0 {
        Ok(())
    } else {
        Err(io::Error::last_os_error())
    };
}

/// Converts `filetime::FileTime` to `libc::timeval`. This function was taken directly from
/// [filetime] crate.
/// /// [filetime]: https://github.com/alexcrichton/filetime/blob/master/src/unix/utimes.rs#L93 fn to_timeval(ft: filetime::FileTime) -> libc::timeval { libc::timeval { tv_sec: ft.seconds(), tv_usec: (ft.nanoseconds() / 1000) as libc::suseconds_t, } } /// Converts `FileTime` to `libc::timespec`. If `FileTime::Now` variant is specified, this /// resolves to `UTIME_NOW` special const, `FileTime::Omit` variant resolves to `UTIME_OMIT`, and /// `FileTime::FileTime(ft)` where `ft := filetime::FileTime` uses [filetime] crate's original /// implementation which can be found here: [filetime::unix::to_timespec]. /// /// [filetime]: https://github.com/alexcrichton/filetime /// [filetime::unix::to_timespec]: https://github.com/alexcrichton/filetime/blob/master/src/unix/mod.rs#L30 pub(crate) fn to_timespec(ft: &FileTime) -> libc::timespec { match ft { FileTime::Now => libc::timespec { tv_sec: 0, tv_nsec: UTIME_NOW, }, FileTime::Omit => libc::timespec { tv_sec: 0, tv_nsec: UTIME_OMIT, }, // `filetime::FileTime`'s fields are normalised by definition. `ft.seconds()` return the number // of whole seconds, while `ft.nanoseconds()` returns only fractional part expressed in // nanoseconds, as underneath it uses `std::time::Duration::subsec_nanos` to populate the // `filetime::FileTime::nanoseconds` field. It is, therefore, OK to do an `as` cast here. FileTime::FileTime(ft) => libc::timespec { tv_sec: ft.seconds(), tv_nsec: ft.nanoseconds() as _, }, } }
package dwarfbuilder

import (
	"bytes"

	"github.com/derekparker/delve/pkg/dwarf/op"
	"github.com/derekparker/delve/pkg/dwarf/util"
)

// LocEntry represents one entry of debug_loc.
type LocEntry struct {
	Lowpc  uint64
	Highpc uint64
	Loc    []byte
}

// LocationBlock returns a DWARF expression corresponding to the list of
// arguments.
func LocationBlock(args ...interface{}) []byte {
	var buf bytes.Buffer
	for _, arg := range args {
		switch x := arg.(type) {
		case op.Opcode:
			buf.WriteByte(byte(x))
		case int:
			util.EncodeSLEB128(&buf, int64(x))
		case uint:
			util.EncodeULEB128(&buf, uint64(x))
		default:
			panic("unsupported value type")
		}
	}
	return buf.Bytes()
}
package com.schoolcloud.schoolshop.bean.result; import com.schoolcloud.schoolshop.bean.product.Product; import com.schoolcloud.schoolshop.bean.user.Comment; public class ProductComment { private Product product; private Comment comment; public Product getProduct() { return product; } public void setProduct(Product product) { this.product = product; } public Comment getComment() { return comment; } public void setComment(Comment comment) { this.comment = comment; } }
How Can We Help Our K-12 Teachers?: Using a Recommender to Make Personalized Book Suggestions K-12 teachers, especially the ones who teach reading and literacy, are expected to guide their students to read in order to learn. Teachers can promote good reading habits among K-12 readers by offering books that match their interests. Unfortunately, finding the right book for each individual or group of students is not an easy task due to the huge volume of books available these days that cover a diversity of topics at varied reading levels. To address this problem, we have developed BReT, a book recommender for K-12 teachers. BReT adopts a multi-dimensional strategy to suggest books that simultaneously match the interests, preferences, and reading abilities of K-12 students based on the content, topics, literary elements, and grade levels specified by a teacher. BReT is novel, since it recommends books to K-12 teachers tailored to their individual students or groups of students, either for pleasure reading or fulfilling their current instructional activities. Unlike existing book-searching tools currently being used by teachers, which adopt a "one-size-fits-all" strategy, BReT offers personalized suggestions. Conducted empirical studies using Mechanical Turk have verified the effectiveness of BReT in making book recommendations.
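As a rough illustration of what such a multi-dimensional matching strategy might look like, consider the Python sketch below; the dimensions, weights, and similarity functions are assumptions made for exposition and are not BReT's published algorithm.

def jaccard(a, b):
    """Set-overlap similarity in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(book, query, weights=(0.4, 0.3, 0.3)):
    """Weighted blend of topical fit, literary-element fit, and grade-level fit."""
    w_topic, w_lit, w_grade = weights
    topic_sim = jaccard(book["topics"], query["topics"])
    lit_sim = jaccard(book["literary_elements"], query["literary_elements"])
    # Penalize distance from the requested grade level, clamped to [0, 1].
    grade_fit = max(0.0, 1.0 - abs(book["grade"] - query["grade"]) / 4.0)
    return w_topic * topic_sim + w_lit * lit_sim + w_grade * grade_fit

book = {"topics": {"space", "adventure"}, "literary_elements": {"plot"}, "grade": 4}
query = {"topics": {"space"}, "literary_elements": {"plot", "setting"}, "grade": 5}
print(round(match_score(book, query), 3))  # 0.575

Books scoring above some cutoff would then be ranked and presented to the teacher, either per student or aggregated for a group.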
Ocean acidification alters skeletogenesis and gene expression in larval sea urchins Ocean acidification, the reduction of ocean pH due to the absorption of anthropogenic atmospheric CO2, is expected to influence marine ecosystems through effects on marine calcifying organisms. These effects are not well understood at the community and ecosystem levels, although the consequences are likely to involve range shifts and population declines. A current focus in ocean acidification research is to understand the resilience that organisms possess to withstand such changes, and to extend these investigations beyond calcification, addressing impacts on other vulnerable physiological processes. Using morphometric methods and gene expression profiling with a DNA microarray, we explore the effects of elevated CO2 conditions on Lytechinus pictus echinoplutei, which form a calcium carbonate endoskeleton during pelagic development. Larvae were raised from fertilization to pluteus stage in seawater with elevated CO2. Morphometric analysis showed significant effects of enhanced CO2 on both size and shape of larvae; those grown in a high CO2 environment were smaller and had a more triangular body than those raised in normal CO2 conditions. Gene expression profiling showed that genes central to energy metabolism and biomineralization were down-regulated in the larvae in response to elevated CO2, whereas only a few genes involved in ion regulation and acid-base balance pathways were up-regulated. Taken together, these results suggest that, although larvae are able to form an endoskeleton, development at elevated CO2 levels has consequences for larval physiology as shown by changes in the larval transcriptome.
import { Component, OnInit, Input } from '@angular/core'; @Component({ selector: 'b4os-ro-card', templateUrl: './ro-card.component.html', styleUrls: ['./ro-card.component.css'] }) export class RoCardComponent implements OnInit { @Input() ro: any; @Input() source: string; constructor() { } ngOnInit() { } }
# Assumes the Google App Engine `cloudstorage` client (imported as `gcs`)
# and a module-level BUCKET_NAME constant defined elsewhere in the app.
import cloudstorage as gcs

def store_to_dfbucket(filename, data):
    if not isinstance(data, str):  # Python 3; use (str, unicode) on Python 2
        raise TypeError(
            "Only strings can be saved in file, received %s." % str(type(data)))
    filename = '/' + BUCKET_NAME + '/' + filename
    gcs_file = gcs.open(filename, 'w', content_type='text/plain')
    gcs_file.write(data)
    gcs_file.close()
Originally published under the title "Leading Muslim Cleric Says Persecuted Coptic Christians Are 'Criminal, Aggressive, and Oppressive.'" Yassir al-Burhami: Coptic Christians are "infidels who are on their way to hell." Dr. Yassir al-Burhami, Egypt's premier Salafi, was once again videotaped inciting hatred of, and violence against, the nation's Christians, the Copts, including by decrying the notion of giving them their full human and civil rights: "When you cooperate with a criminal, aggressive, oppressive, infidel minority, you attack the rights of the majority [Muslims]." We call them "our Christian brothers, the infidels." And I use the word "brother" in its common usage: we say, yes, they are our brothers in the nation, but they are infidels who are on their way to hell ... and we advise them to break away from the authority of the taghut [meaning "idolatrous," "tyrannical" or "oppressive"] Church. Discussing this video, Egyptian talk show host Yusif al-Hussayni made several important points, including that "among these Islamic groups, use of the word taghut is synonymous with 'go and kill!' That's how Anwar Sadat was killed: he was first described as a taghut. So when Burhami describes the leaders of the church as taghut, this is the same as if he were to incite against and kill them." Al Azhar does not accuse Burhami of inciting hatred and violence against minorities. Indeed, not a single pious Muslim, zealous for his religion and unwilling to see it equated with hatred, violence, and incitement against other religions, has bothered to file a case against Yassir Burhami. Not one! Indeed, while riots and violence regularly erupt whenever a Christian is merely accused of liking a Facebook page critical of Islam, followed by the arrest and prosecution of the accused, no Muslim seems to care when Christians are mocked or attacked in the name of Islam, even though Egypt's "defamation of religions" law is supposed to protect Christianity and Christians in Egypt as well. Hussayni pointed out that by allowing people such as Burhami to continue to preach and disseminate their message, the Egyptian government, by default, supports this message of hate and violence against Christians. As others have indicated, it is not enough for Egyptian president Sisi merely to speak out against radicalism or make PR visits to Coptic churches. So long as people like Burhami are permitted to disseminate such hate-filled messages, so long will Christians suffer.
/*******************************************************************************
 * Copyright (c) 2016, Qingdao AIP Intelligent Instrument Co., Ltd.
 * All rights reserved.
 *
 * version: 1.0
 * author:  link
 * date:    2016.01.20
 * brief:   Color selection dialog
 *******************************************************************************/
#include "w_colr.h"
#include "ui_w_colr.h"

/******************************************************************************
 * version: 1.0
 * author:  link
 * date:    2016.01.20
 * brief:   Initialize the color selection dialog
 ******************************************************************************/
const QColor colorset[2][7] = {
    {
        QColor(255, 255, 255, 255), QColor(240, 78, 153, 255), QColor(254, 69, 52, 255),
        QColor(106, 231, 71, 255), QColor(49, 134, 17, 255), QColor(246, 254, 0, 255),
        QColor(255, 0, 0, 255),
    },
    {
        QColor(120, 120, 120, 255), QColor(249, 121, 0, 255), QColor(0, 0, 0, 255),
        QColor(128, 64, 0, 255), QColor(105, 0, 128, 255), QColor(64, 0, 124, 255),
        QColor(46, 48, 146, 255),
    },
};

w_Colr::w_Colr(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::w_Colr)
{
    this->setWindowFlags(Qt::Dialog | Qt::FramelessWindowHint); // remove the title bar
    ui->setupUi(this);

    int i = 0, j = 0;
    QString str;
    Color_Group = new QButtonGroup;
    for (i = 0; i < 2; i++) {
        for (j = 0; j < 7; j++) {
            btnList.append(new QToolButton(this)); // create the button
            str = QString("border:3px groove white;border-radius:10px;"\
                          "height:90px;width:90px;background-color:%1").arg(colorset[i][j].name());
            btnList[i*7+j]->setStyleSheet(str);
            Color_Group->addButton(btnList[i*7 + j], i*7 + j); // add the button to the group
            ui->colorLayout->addWidget(btnList[i*7 + j], i, j); // add the button to the layout
        }
    }
    connect(Color_Group, SIGNAL(buttonClicked(int)),
            this, SLOT(ButtonJudge(int)));
}

/******************************************************************************
 * version: 1.0
 * author:  link
 * date:    2016.01.20
 * brief:   Destroy the color selection dialog
 ******************************************************************************/
w_Colr::~w_Colr()
{
    delete Color_Group;
    delete ui;
}

/******************************************************************************
 * version: 1.0
 * author:  link
 * date:    2016.01.20
 * brief:   On a button click, emit the selected color
 ******************************************************************************/
void w_Colr::ButtonJudge(int id)
{
    Signal_color_to_Main(colorset[id/7][id%7].name(), 2, 2);
    Signal_color_to_Main(QString(""), wConf_Surface, 1);
}

/******************************************************************************
 * version: 1.0
 * author:  sl
 * date:    2016.05.20
 * brief:   Return to the settings screen
 ******************************************************************************/
void w_Colr::on_pushButton_clicked()
{
    Signal_color_to_Main(QString(""), wConf_Surface, 1);
}

/******************************************************************************
                                      END
 ******************************************************************************/
/*
 * Checks if a given year is a leap year or a common year,
 * and computes the number of days in a given month and a given year.
 */
public class Calendar0 {

    // Gets a year (command-line argument), and tests the functions isLeapYear and nDaysInMonth.
    public static void main(String args[]) {
        int year = Integer.parseInt(args[0]);
        isLeapYearTest(year);
        nDaysInMonthTest(year);
    }

    // Tests the isLeapYear function.
    private static void isLeapYearTest(int year) {
        String commonOrLeap = "common";
        if (isLeapYear(year)) {
            commonOrLeap = "leap";
        }
        System.out.println(year + " is a " + commonOrLeap + " year");
    }

    // Tests the nDaysInMonth function.
    private static void nDaysInMonthTest(int year) {
        int month = 1;
        while (month <= 12) {
            int monthLength = nDaysInMonth(month, year);
            System.out.println("Month " + month + " has " + monthLength + " days");
            month++;
        }
    }

    // Returns true if the given year is a leap year, false otherwise,
    // using the standard Gregorian rule: divisible by 4, except century
    // years, which must be divisible by 400.
    public static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // Returns the number of days in the given month and year.
    // April, June, September, and November have 30 days each.
    // February has 28 days in a common year, and 29 days in a leap year.
    // All the other months have 31 days.
    public static int nDaysInMonth(int month, int year) {
        if (month == 4 || month == 6 || month == 9 || month == 11) {
            return 30;
        }
        if (month == 2) {
            return isLeapYear(year) ? 29 : 28;
        }
        return 31;
    }
}
# Copyright 2021 ZTE corporation. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 """Setup""" import setuptools INSTALL_REQUIRES = [ 'numpy', 'torch >= 1.8.1', 'torchvision >= 0.9.1', 'ptflops', 'tensorboard >= 1.15', 'horovod >= 0.22.1', 'apex' ] TEST_REQUIRES = [ 'bandit', 'flake8', 'mypy==0.812', 'pylint', 'pytest-cov', 'pytest-flake8', 'pytest-mypy', 'pytest-pylint', 'pytest-xdist' ] setuptools.setup( name='zen_nas', packages=setuptools.find_packages('src'), package_dir={'': 'src'}, install_requires=INSTALL_REQUIRES, extras_require={ 'test': TEST_REQUIRES }, python_requires='>= 3.7' )
Ovarian-endocrine-behavioural function in the domestic cat treated with exogenous gonadotrophins during mid-gestation. Treatment of pregnant cats with FSH on Days 33-37 and hCG on Days 38 and 39 induced development of vesicular follicles (mean 9.3 follicles/cat), ovulation (mean 3.4 corpora lutea/cat) and behavioural oestrus (5/7 cats). In the gonadotrophin-treated females, oestradiol-17 beta concentrations gradually increased but serum progesterone levels remained constant, whereas in saline-treated females mean serum oestradiol-17 beta concentrations remained basal and progesterone concentrations gradually declined. The results indicated that the feline ovary and related mechanisms for inducing sexual receptivity were not refractory to exogenous gonadotrophic stimulation during mid-gestation, and that hCG administered after serial injections of FSH during pregnancy may potentiate ovarian oestradiol-17 beta secretion.
We’re all exhausted. After all, we’re college students, right? We’re supposed to be in a never-ending cycle of tired, caffeine-filled days and long, hardworking nights. If you type “tired college kid” into Google, you get dozens of results with titles such as “10 Struggles of the Exhausted College Student” as well as GIFs and memes that “perfectly describe the college experience.” So why are we supposed to be exhausted? And, more importantly, why do we glamorize being exhausted? Coming into college, you are told that you will be doing a lot more work and a lot more studying, which means that you are going to be tired. Coupled with extracurricular activities and parties, 70 percent of students report being unable to get sufficient sleep. But even before entering college, sleep was not stressed as important. In an op-ed for the New York Times, Frank Bruni writes about how sleep actually acts as an impediment to the dreams of high school students who are supposed to be succeeding academically and socially. Some joke that “sleep is for the weak,” but in reality, it may be the opposite. But it’s not only sleep. College students, in huge numbers, report feeling overwhelmed and exhausted. Why do we glamorize being exhausted? Why is it that every time someone replies that they are busy, another person tries to one-up them? We’re all busy, constantly busy. We joke about our dependency on caffeine to get us through the day. We glamorize all-nighters and shame those who want to sleep. We joke about the unhealthy habits we form in order to get by. I have heard many of my peers “brag” about how they haven’t eaten all day or how they forgot to eat because work and activities come first. I often joke about how I go to bed extremely early because I need to sleep. If I don’t sleep, I can’t function. If I am up past midnight, I know that I am in for a terrible day the next day (if I don’t get the chance to sleep in). But just because I practice what I consider to be healthy habits, doesn’t mean that I’m immune to the culture of exhaustion. I spent the whole day doing work one day, and I was sucked into this behavior. I found myself bragging and complaining about doing homework for ten hours, looking for the validation and pity that we as college students so desire. I also pushed my mental and physical abilities to the limit during a day like that, which led to me needing multiple days of relaxation and breaks to get back in the swing of things. This may not be the case for all students, but it demonstrates what a toll school work, in huge amounts, can take on a person. The bottom line is we have been conditioned to desire for our exhaustion to be recognized. But it’s important for us, as college students, to change the conversation around exhaustion in order to develop healthy habits. Exhaustion is not a joke. It has real-life effects on one’s mental and physical health. Exhaustion can result even when one attains adequate sleep. Burnout is a real issue that occurs among college students, especially with students who are very involved on campus. Burnout can lead to physical and mental symptoms far beyond yawning and headaches from lack of sleep. Of course, your immune system will suffer due to exhaustion and burnout, but your productivity and grades will also suffer. The exhaustion that you are taught you must have in order to succeed, actually hurts your chances of success. So why don’t we teach self-care to college students? 
No one ever taught us how important it is to take care of ourselves, despite the fact that a person who has slept more than eight hours and is able to take the time to practice healthier strategies is far better equipped to handle assignments than a sleep-deprived student. Our campus has been preaching self-care through Residential Assistants and free programs on campus such as the Sleep Fair and therapy dogs, but this issue must be tackled even before freshmen step on campus. These are only short-term solutions to the problem of exhaustion. The Student Life website for the Paws Program says what it has done to “improve [students’] overall well-being.” But this program is not a long-term solution. Having dogs come in twice a semester (an event I do thoroughly enjoy) is not going to cure the stress and lack of self-care plaguing UMass students. So what needs to change? The culture surrounding college exhaustion. A culture of working hard and pushing and pushing until you have nothing left is not what we should be striving for. We should not be comparing and contrasting how overworked and exhausted we are. Exhaustion is not a joke and should not be treated as such. One can preach self-care over and over, but self-care will not be useful until this harmful cycle of exhaustion and stress is overcome and students are properly prepared to handle their sleep, stress and work in a college environment. Emilia Beuger is a Collegian columnist and can be reached at [email protected] or on Twitter @ebeuger.
Recent advances in identifying and theorizing the role of immigrant entrepreneurs, ethnicity, and culture in industrial marketing This special issue of Industrial Marketing Management (IMM) features four articles that cover topics related to immigrant entrepreneurs, ethnicity, and culture in industrial marketing. This introductory paper summarizes the contributions of these articles and points out future research directions. Introduction Immigrant populations are growing fast all over the world. According to the 2010 Census, 37.9% of the American population consisted of non-European ethnic groups; this proportion is expected to reach 48% by 2030 (US Census Bureau, 2011). By contrast, the non-immigrant population is expected to have a lower growth rate of 4% to 12% over the same period. With the steady rise in the immigrant population and the subsequent diversity in the marketplace, particularly in North America, across Europe, and in Australia, immigrant entrepreneurship has risen dramatically and has had a tremendous impact on the global economy. The immigrant entrepreneurial context is full of culturally diverse, sometimes even contradictory, expectations and norms that influence how ethnic entrepreneurs are linked to mainstream markets. The topic of cultural influences on immigrants' industrial marketing, such as ethnic marketing entrepreneurship, supply chain management, network capability development, buyer-seller relationships, and the ethnic financial cushion, is attracting increasing attention (Jamal, Peñaloza, & Laroche, 2015; Lindgreen & Hingley, 2010; McGrath & O'Toole, 2014). More recently, Dabić et al. provided a systematic review of 514 articles and identified six major themes in the extant literature in this domain: motives and entrepreneurial intentions, competencies and identity building, ethnic networks, strategies and internationalization, resources, and intercultural relations. This synthesis of the literature points out the need for a holistic and contextualized approach to studying immigrant entrepreneurship. The purpose of this special issue is to use such a holistic and contextualized approach to gain a deeper understanding of the role that immigrant entrepreneurs, ethnicity, and culture independently or jointly play in industrial marketing. The call for papers generated interesting inquiries and submissions. Following the IMM guidelines and a rigorous review process, four articles were finally selected for this special issue. They are described next based on the topics they cover. Social networks, embeddedness and opportunity creation Lassalle, Johanson, Nicholson, and Ratajczak-Mrozek (Migrant entrepreneurship and markets: The dynamic role of embeddedness in networks on the creation of opportunities) develop a framework of migrant entrepreneurship, using the effectuation principles of "bird in hand" (using available resources) and "crazy quilt" (selective use of networks). The authors interviewed Polish migrant entrepreneurs in Scotland and the UK, and identified three types of networks that are useful and relevant to creating opportunities: networks back in Poland, Polish migrant networks in the UK, and UK indigenous networks. Findings indicate that immigrant entrepreneurs first become relationally, socially, and structurally embedded, often relying on bridging agents to access UK indigenous networks, and then leverage resources to create opportunities within and beyond the migrant community.
Some later purposefully move closer again to the Polish migrant networks in the UK with a view to strengthening their competitive position. Getting socially and relationally embedded tends to be quicker than getting structurally embedded in the Polish migrant networks in the UK. The process of opportunity creation appears to be incremental and iterative, and relies on resources accessed through embeddedness in different networks. Their study reveals the role played by multidimensional and evolving embeddedness in different networks in the process of opportunity creation. Social networking ties, resources and innovation Chung, Yen, and Wang (The contingent effect of social networking ties on Asian immigrant enterprises' innovation) examine the moderating role of immigrant enterprises' social network resources (business ties, political ties, and immigrant entrepreneurs' ethnic ties) in the relationship between entrepreneurial orientation and innovation. Their findings show that immigrant entrepreneurs' business ties and ethnic ties strengthen the positive effect of entrepreneurial orientation on innovation, whereas political ties exert little influence on this link. This article advances the understanding of the contingent effect of different social networking ties in the context of immigrant enterprises' innovation, both of which remain at the heart of immigrant entrepreneurial marketing practices. Social structure and exchange strategies The authors of the third article (The Effects of Boundary Permeability and Hierarchy Legitimacy on Immigrant Entrepreneurs' Affective States, Exchange Strategies, and Intentions toward Suppliers) investigate the effect of social structure on immigrant entrepreneurs' affective states and exchange strategies toward supplier firms. Their findings show that boundary permeability and hierarchy legitimacy are the two key structural determinants of immigrant entrepreneurs' affective states (hope, stress, and anger) and exchange strategies (status quo acceptance, upward mobility, opportunism, rationalism, preference for ethnic suppliers, commitment to mainstream suppliers). Specifically, perceptions of boundary permeability embolden immigrant entrepreneurs to look beyond the status quo and engage in adaptive behaviors that can facilitate upward mobility, whereas perceptions of hierarchy legitimacy inspire immigrant entrepreneurs to adhere to relationship norms and refrain from opportunistic behaviors. Transnational entrepreneurship Guru, Dana, and Volovelsky (Spanning transnational boundaries in industrial markets: A study of Israeli entrepreneurs in China) address Israeli transnational entrepreneurs who provide B2B intermediation services in China. The study is unique in the sense that it focuses on immigrant entrepreneurs from a developed economy (Israel) who pursue professional careers and business opportunities in a transitional economy (China). Such immigrant entrepreneurs are often embedded in their home and host countries both economically and socially, which means they are better placed in their ability and capacity to integrate and enact transnational resources for exploiting transnational business opportunities. Their findings show a gradual evolution of the personal and professional profiles of these entrepreneurs, determined by a dynamic interdependence among various forms of capital, entrepreneurial habitus, and circumstantial factors.
Building upon these findings, the authors develop a comprehensive model, showing the various components and stages that lead to the development of transnational profiles and activities to enhance business success. In doing so, they point toward a gradual process of transnational development whereby Israeli entrepreneurs address organizational, country, cultural and stage boundaries and act as boundary spanners for individuals and organization to create and exploit transnational business opportunities. The research findings are quite useful for transnational entrepreneurs and organizations who are looking to achieve success in China and for scholars interested in investigating similar immigrant entrepreneurial phenomena in other countries. Conclusion and future research directions By adding to the body of knowledge in the ethnic B2B marketing, these articles advance our understanding of the role that immigration, ethnicity, and culture play in entrepreneurs' success. Collectively and in many different ways, these articles highlight the importance of networks for resource access and opportunity recognition for immigrant entrepreneurship. Moreover, the notions of resilience, competency and capabilities of entrepreneurs that underpin opportunity identification, along with social networking and resource access, become obvious to readers. Furthermore, implicit in these articles included in this special issue is the use of spatial metaphors (e.g., as reflected via the use of terms like social networks back home and in host country) that can sometimes undermine our ability to fully understand the dynamic role played by the notion of space in understanding immigrant and transnational entrepreneurship. Recent scholarly work (Jamal, Kizgin, Rana, Laroche, & Dwivedi, 2019;Visconti, 2015) points to the direction that space can not only be physical but also cultural, social, geopolitical, ideological and even virtual in nature. This is further supported by a significant trend toward globalization, global consumption culture, and the use of new digital technologies both by the immigrant and majority business enterprises and their stakeholders like suppliers, government agencies, and customers. Accordingly, one critical issue that has not been addressed by these articles is how technology advancement may facilitate or hinder ethnic B2B marketing. On top of immigration, the advancement of technologies and job outsourcing create another layer of opportunities and challenges for industrial marketing. On one hand, social networks and the prevalence of Internet lower the barriers for ethnic entrepreneurship and facilitate marketing decisions and strategy in global industrial and business-to-business markets. On the other hand, the contemporary business environment demands more market intelligence to take new lenses in understanding about the impact of migration, ethnicity and culture on industrial marketing, including the issues related to global outsourcing, distribution and promotion. 
We hope that future researchers can continue this stream of research, with a focus on such topics as effects of technology use, migration, and ethnicity on business-to-business relationship management, role of ethnicity and cultural diversity in value creation in industrial markets, sustainability and business-to-business marketing in ethnicity contexts, effects of new media and novel technology on industrial marketing across ethnic groups, and role of immigrant entrepreneurs in highly anticipated changes in the global supply chain of networks due to the impact of COVID-19 crisis. In light of increasing uncertainties in globalization and B2B markets, research on immigrant entrepreneurs is of greater importance. From a methodological perspective, we encourage future researchers to use more longitudinal, multilevel, and/or multi-method research designs in ethnic B2B marketing contexts.
# -*- coding: utf-8 -*- """ Created on Sun Mar 10 18:05:33 2019 @author: RGB """ import numpy as np import keras from keras.layers import Dense, Dropout, Flatten from keras.applications import VGG16 from keras.preprocessing.image import ImageDataGenerator from keras.models import Model, load_model from keras.models import Sequential from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau import keras.optimizers from model_utils import save_training_graph, plot_confusion_matrix, vis_activation from sklearn.metrics import classification_report, confusion_matrix from keras.applications.vgg16 import preprocess_input base_model=VGG16(weights=None,include_top=False, input_shape=(224,224,3)) top_model = Sequential() top_model.add(Flatten(input_shape=base_model.output_shape[1:])) top_model.add(Dense(1024,activation='relu')) top_model.add(Dropout(.50)) top_model.add(Dense(512,activation='relu')) top_model.add(Dropout(.25)) top_model.add(Dense(7,activation='softmax')) #final layer with softmax activation model=Model(inputs=base_model.input,outputs=top_model(base_model.output)) print(model.summary()) train_datagen=ImageDataGenerator( preprocessing_function=preprocess_input, shear_range=0.2, zoom_range=0.2, rotation_range=20, horizontal_flip=True, fill_mode='nearest') valid_datagen=ImageDataGenerator( preprocessing_function=preprocess_input) train_path = 'E:/Master Project/data_split/train_dir' valid_path = 'E:/Master Project/data_split/val_dir' test_path = 'E:/Master Project/data_split/test_dir' train_generator = train_datagen.flow_from_directory(train_path, target_size=(224,224), color_mode='rgb', batch_size=10, class_mode='categorical', shuffle=True) valid_gen = valid_datagen.flow_from_directory(valid_path, target_size=(224,224), color_mode = "rgb", class_mode="categorical", batch_size=10) # loss function will be categorical cross entropy # evaluation metric will be accuracy sgd = keras.optimizers.SGD() adam = keras.optimizers.Adam() model.compile(optimizer=sgd,loss='categorical_crossentropy',metrics=['accuracy']) learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=0.00001) early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto') checkpointer = ModelCheckpoint(filepath="best_weights_model.hdf5", monitor = 'val_acc', verbose=1, save_best_only=True) step_size_train=train_generator.n//train_generator.batch_size step_size_val=valid_gen.n//valid_gen.batch_size history = model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, callbacks=[learning_rate_reduction,early_stop], validation_data=valid_gen, validation_steps=step_size_val, epochs=50) base_model=VGG16(weights='imagenet',include_top=False, input_shape=(224,224,3)) top_model = Sequential() top_model.add(Flatten(input_shape=base_model.output_shape[1:])) top_model.add(Dense(1024,activation='relu')) top_model.add(Dropout(.50)) top_model.add(Dense(512,activation='relu')) top_model.add(Dropout(.25)) top_model.add(Dense(7,activation='softmax')) #final layer with softmax activation model=Model(inputs=base_model.input,outputs=top_model(base_model.output)) print(model.summary()) sgd = keras.optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True) model.compile(optimizer=sgd,loss='categorical_crossentropy',metrics=['accuracy']) history2 = model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, callbacks=[learning_rate_reduction,early_stop], validation_data=valid_gen, 
validation_steps=step_size_val, epochs=50) import matplotlib.pyplot as plt # Validation accuracy: start a fresh figure so the two comparison plots # do not draw onto the same axes. plt.figure() plt.plot(history.history['val_acc'], color='b', linestyle='solid', label="Without ImageNet Weights") plt.plot(history2.history['val_acc'], color='r', linestyle='dashed', label="With ImageNet Weights") legend = plt.legend(loc='best', shadow=True) plt.savefig('accuracy_with_and_without.png') # Training accuracy, plotted on its own figure for the same reason. plt.figure() plt.plot(history.history['acc'], color='b',linestyle='solid', label="Without ImageNet Weights") plt.plot(history2.history['acc'], color='r',linestyle='dashed',label="With ImageNet Weights") legend = plt.legend(loc='lower right', shadow=True) plt.savefig('accuracy_with_and_without2.png')
Down regulation of MiR-93 contributes to endometriosis through targeting MMP3 and VEGFA. OBJECTIVE This study aimed to explore the role of miRNAs in the pathogenesis of endometriosis. METHODOLOGY Endometrial samples from 57 females with endometriosis and 44 non-endometriotic controls were compared for the expression of a selected group of miRNAs. The regulatory function on downstream targets was also explored. RESULTS The expression of miR-93 and miR-106a was significantly reduced in endometriotic samples compared to that in non-endometriotic samples. High levels of MMP3 and VEGFA were detected in more than 50% of ectopic endometrium tissues. A negative association was found between the expression of miR-93 and the protein levels of MMP3 (Pearson correlation, r=-0.39, P=0.0025) or VEGFA (Pearson correlation, r=-0.37, P=0.0047) in samples from endometriosis patients. Mechanistically, miR-93 targeted MMP3 and VEGFA by directly binding to the 3'UTRs of MMP3 and VEGFA mRNAs, and thereby inhibited the proliferation, migration, and invasive capability of endometrial stromal cells (ESCs). CONCLUSION The findings of this study suggest that deregulation of miR-93 contributes to endometriosis through up-regulation of MMP3 and VEGFA, and thus provides potential therapeutic targets for the treatment of endometriosis.
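For readers unfamiliar with the statistic, the reported associations are ordinary Pearson correlations; the short Python sketch below computes r for invented illustrative values, not the study's data.

from statistics import mean

# Plain Pearson correlation coefficient; the paired values below are invented.
def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

mir93 = [2.1, 1.8, 1.2, 0.9, 0.7, 0.4]   # hypothetical miR-93 expression
mmp3 = [0.5, 0.8, 1.1, 1.3, 1.6, 1.9]    # hypothetical MMP3 protein level
print(round(pearson_r(mir93, mmp3), 2))  # close to -1: strong negative association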
Randomized discontinuation design: application to cytostatic antineoplastic agents. PURPOSE Propose a phase II study design to evaluate the activity of a putative cytostatic agent, acknowledging heterogeneity of tumor growth rates in the population of patients. METHODS In the setting of renal cell carcinoma, some patients' tumors will grow slowly naturally. An appropriate design has to distinguish antiproliferative activity attributable to the novel agent from indolent disease. We propose a randomized discontinuation design that initially treats all patients with the study agent (stage 1) and then randomizes in a double-blind fashion to continuing therapy or placebo only those patients whose disease is stable (stage 2). This design allows the investigators to determine if apparent slow tumor growth is attributable to the drug or to selection of patients with naturally slow-growing tumors. RESULTS By selecting a more homogeneous population, the randomized portion of the study requires fewer patients than would a study randomizing all patients at entry. The design also avoids potential confounding because of heterogeneous tumor growth. Because the two randomly assigned treatment groups each comprise patients with apparently slow growing tumors, any difference between the groups in disease progression after randomization is more likely a result of the study drug and less likely a result of imbalance with respect to tumor growth rates. Stopping rules during the initial open-label stage and the subsequent randomized trial stage allow one to reduce the overall sample size. Expected average tumor growth rate is an important consideration when deciding the duration of follow-up for the two stages. CONCLUSION The randomized discontinuation design is a feasible alternative phase II study design for determining activity of possibly cytostatic anticancer agents.
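A small simulation helps convey the two-stage logic; the following Python sketch uses invented stability and progression rates purely for illustration, not values from any trial.

import random

# Toy two-stage randomized discontinuation design: all rates are assumed.
def simulate_rdd(n=200, p_stable_on_drug=0.4,
                 p_progress_drug=0.4, p_progress_placebo=0.7):
    # Stage 1: every patient receives the drug; keep those with stable disease.
    n_stable = sum(random.random() < p_stable_on_drug for _ in range(n))
    # Stage 2: randomize the stable subset to continue drug or switch to placebo.
    n_drug = n_stable // 2
    n_placebo = n_stable - n_drug
    prog_drug = sum(random.random() < p_progress_drug for _ in range(n_drug))
    prog_placebo = sum(random.random() < p_progress_placebo for _ in range(n_placebo))
    return n_stable, prog_drug / max(n_drug, 1), prog_placebo / max(n_placebo, 1)

random.seed(1)
n_stable, rate_drug, rate_placebo = simulate_rdd()
print(n_stable, round(rate_drug, 2), round(rate_placebo, 2))
# A truly active drug shows a lower progression rate on the continue-drug arm
# than on the placebo arm within this enriched, slow-growth subset.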
from importlib import import_module
import sys

from triou import read_file_data, write_file_data, write_file_python, verify_all


def test_cas(file0):
    file = file0 + ".data"
    print("test_cas", file)
    listclass1 = read_file_data(file)
    file1 = "test_" + file
    file2 = "test2_" + file
    file3 = "test3_" + file
    # Derive a valid Python module name from the case name.
    namepy = "p_" + file0.replace("-", "moins").replace(".", "dot")
    filepy = namepy + ".py"
    filepy2 = namepy + ".py2"
    write_file_data(file1, listclass1)
    write_file_python(filepy, listclass1)
    verify_all(listclass1)
    ze_mod = import_module(namepy)
    write_file_data(file2, ze_mod.listclass)
    toto = read_file_data(file2)
    verify_all(toto)
    write_file_python(filepy2, toto)
    write_file_data(file3, toto)
    with open('lutil.py', 'w') as ti:
        ti.write("lutil=" + str(ze_mod.lutil) + '\n')
    with open(file1) as s1, open(file2) as s2, open(file3) as s3:
        c1 = s1.read()
        c2 = s2.read()
        c3 = s3.read()
    if c1 != c2:
        raise Exception("case " + file + " failed")
    print("case " + file + " OK at level 2")
    if c1 != c3:
        raise Exception("case " + file + " failed")
    print("case " + file + " OK at level 3")


if __name__ == '__main__':
    test_cas(sys.argv[1])
def search_frequency_limit(is_search, index, nfreq_limit, spectra, water_threshold):
    """Track the frequency-bin index at which the spectrum crosses the water threshold.

    While searching (`is_search` is True), a bin whose magnitude falls below
    `water_threshold` marks a new limit; once found, a rebound above ten times
    the threshold re-arms the search. Returns the updated (is_search, nfreq_limit)
    pair so the caller can carry the search state from one bin to the next
    (the original single-value return silently discarded the toggled state).
    """
    if abs(spectra[index]) < water_threshold and is_search:
        is_search = False
        nfreq_limit = index
    if abs(spectra[index]) > 10 * water_threshold and not is_search:
        is_search = True
        nfreq_limit = index
    return is_search, nfreq_limit
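A minimal usage sketch of the function above, with the two-value return; the spectrum values and threshold are made up for illustration.

# Hypothetical driver loop: scan frequency bins while carrying the search
# state and the last crossing index between calls.
spectra = [5.0, 4.0, 0.05, 0.04, 1.2, 0.03]
water_threshold = 0.1

is_search, nfreq_limit = True, 0
for index in range(len(spectra)):
    is_search, nfreq_limit = search_frequency_limit(
        is_search, index, nfreq_limit, spectra, water_threshold)
print(nfreq_limit)  # index of the last threshold crossing (5 here)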
#include "array.h"

/* associate a ring buffer with a physical array */
void associate(ring_buffer* overlay, TYPE* buffer, uint64_t size)
{
    overlay->head = buffer;
    overlay->tail = buffer;
    overlay->start = buffer;
    overlay->end = buffer + (size - 1);
    overlay->fp = 0;
    overlay->length = 0;
    overlay->isFull = false;
}

/* shift out a value from the beginning of a ring buffer */
TYPE shift(ring_buffer* array)
{
    TYPE val = (TYPE)0; /* default when the buffer is empty */

    if(array->length != 0)
    {
        val = *(array->head);
        /* advance the head, wrapping around at the physical end of the buffer */
        array->head = (array->head == array->end) ? array->start : array->head + 1;
        (array->length)--;
        array->isFull = false; /* removing an element always leaves room */
    }
    return val;
}

/* push a value onto the end of a ring buffer */
void push(ring_buffer* array, TYPE val)
{
    uint64_t capacity = (uint64_t)(array->end - array->start) + 1;

    if(array->isFull)
    {
        shift(array); /* drop the oldest element to make room */
    }
    *(array->tail) = val;
    /* advance the tail (the next free slot), wrapping around at the physical end */
    array->tail = (array->tail == array->end) ? array->start : array->tail + 1;
    (array->length)++;
    if(array->length == capacity)
    {
        array->isFull = true;
    }
}

/* Sort array in either increasing or decreasing order and store into another array */
void copy_sort(TYPE to_sort[], TYPE array[], uint8_t size, bool decreasingOrder)
{
    uint8_t i;
    int j; /* signed, so the j >= 0 guard can terminate */
    TYPE val;

    for (i = 0; i < size; i++)
    {
        to_sort[i] = array[i];
    }

    for(i = 1; i < size; i++)
    {
        val = to_sort[i];
        j = i - 1;
        if(decreasingOrder)
        {
            /* shift smaller elements right so larger values come first */
            while( j >= 0 && to_sort[j] < val)
            {
                to_sort[j+1] = to_sort[j];
                j = j - 1;
            }
        }
        else
        {
            /* shift larger elements right so smaller values come first */
            while( j >= 0 && to_sort[j] > val)
            {
                to_sort[j+1] = to_sort[j];
                j = j - 1;
            }
        }
        to_sort[j+1] = val;
    }
}

/* Get extrema from array pointed at by ring_buffer (assuming full/initialized array) */
TYPE get_extreme(ring_buffer *queue, bool look_for_min)
{
    TYPE extreme_val = *(queue->start);
    if(look_for_min)
    {
        for (TYPE *start = queue->start; start <= queue->end; start++)
        {
            extreme_val = (extreme_val < *start) ? extreme_val : *start;
        }
    }
    else
    {
        for (TYPE *start = queue->start; start <= queue->end; start++)
        {
            extreme_val = (extreme_val > *start) ? extreme_val : *start;
        }
    }
    return extreme_val;
}

/* Get index of a given value from array pointed at by ring_buffer. Returns first matching
value, either from the beginning or end. UINT64_MAX is returned as a false value */
uint64_t get_index(ring_buffer *queue, TYPE value, bool reverse, bool *found)
{
    uint64_t index;

    if(queue->length == 0)
    {
        *found = false;
        return UINT64_MAX;
    }

    if(reverse)
    {
        /* start at the last stored element (tail is one past it) and walk backwards */
        TYPE *p = (queue->tail == queue->start) ? queue->end : queue->tail - 1;
        for(index = queue->length; index-- > 0; )
        {
            if(*p == value)
            {
                *found = true;
                return index;
            }
            p = (p == queue->start) ? queue->end : p - 1;
        }
    }
    else
    {
        /* start at the oldest element and walk forwards */
        TYPE *p = queue->head;
        for(index = 0; index < queue->length; index++)
        {
            if(*p == value)
            {
                *found = true;
                return index;
            }
            p = (p == queue->end) ? queue->start : p + 1;
        }
    }
    *found = false;
    return UINT64_MAX;
}

/* get a value from array pointed at by ring_buffer by indexing, similar to an actual array.
Assumes values out of bounds refer to last element */
TYPE get(ring_buffer *queue, uint64_t index)
{
    if(index >= queue->length)
    {
        return *((queue->tail) - 1);
    }
    else if ( ((queue->head) + index) > queue->end )
    {
        return *((queue->tail) - ((queue->length) - index));
    }
    else
    {
        return *((queue->head) + index);
    }
}

/* set a value from array pointed at by ring_buffer by indexing, similar to an actual array.
Assumes values out of bounds refer to last element */ void set(ring_buffer *queue, uint64_t index, TYPE value) { if(index >= queue->length) { *((queue->tail) - 1) = value; } else if ( ((queue->head) + index) > queue->end ) { *((queue->tail) - ((queue->length) - index)) = value; } else { *((queue->head) + index) = value; } } ring_buffer slice(ring_buffer *overlay, uint64_t start, uint64_t end) { ring_buffer new_slice = {0,0,0,0,0,0,false}; if(start >= overlay->length || end > overlay->length || start == end) { return new_slice; } new_slice.start = overlay->start; new_slice.end = overlay->end; new_slice.fp = 0; new_slice.length = end - start; new_slice.head = ((overlay->head + start) > overlay->end) ? overlay->tail - (overlay->length - start) : (overlay->head + start); new_slice.tail = ((overlay->tail - (overlay->length - end)) < overlay->start) ? overlay->head + end : (overlay->tail - (overlay->length - end)); return new_slice; } void reset_array(ring_buffer *overlay) { for(overlay->fp = overlay->start; overlay->fp <= overlay->end; (overlay->fp)++ ) { *(overlay->fp) = 0; } overlay->head = overlay->start; overlay->tail = overlay->start; overlay->length = 0; overlay->isFull = false; }
package com.heartmove.cglib.protoc;

import com.google.protobuf.*;
import net.sf.cglib.core.*;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.Label;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.Type;

import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.security.ProtectionDomain;
import java.util.HashMap;
import java.util.Map;

/**
 * @Classname ProtocBeanCopier
 * @Description Property copying between Protoc Messages and Java entities
 * @Date 2021/3/3 11:55
 * @Created by yasheng.cai
 */
public abstract class ProtocBeanCopier{
    private static final ProtocBeanCopierKey KEY_FACTORY =
            (ProtocBeanCopierKey)KeyFactory.create(ProtocBeanCopierKey.class);

    private static final Type CONVERTER =
            TypeUtils.parseType("net.sf.cglib.core.Converter");
    // Must match this class's actual package; otherwise the generated subclass
    // would extend a non-existent type.
    private static final Type BEAN_COPIER =
            TypeUtils.parseType("com.heartmove.cglib.protoc.ProtocBeanCopier");
    private static final Signature COPY =
            new Signature("copy", Type.VOID_TYPE, new Type[]{ Constants.TYPE_OBJECT, Constants.TYPE_OBJECT, CONVERTER });
    private static final Signature CONVERT =
            TypeUtils.parseSignature("Object convert(Object, Class, Object)");

    interface ProtocBeanCopierKey {
        public Object newInstance(String source, String target, boolean useConverter);
    }

    public static ProtocBeanCopier create(Class source, Class target, boolean useConverter) {
        ProtocGenerator gen = new ProtocGenerator();
        gen.setSource(source);
        gen.setTarget(target);
        gen.setUseConverter(useConverter);
        return gen.create();
    }

    abstract public void copy(Object from, Object to, Converter converter);

    public static class ProtocGenerator extends AbstractClassGenerator {
        private static final Source SOURCE = new Source(ProtocBeanCopier.class.getName());
        private Class source;
        private Class target;
        private boolean sourceIsProtoClass;
        private boolean targetIsProtoClass;
        private boolean useConverter;
        private MethodInfo protocMessageBuild;

        public ProtocGenerator() {
            super(SOURCE);
        }

        public void setSource(Class source) {
            if(!Modifier.isPublic(source.getModifiers())){
                setNamePrefix(source.getName());
            }
            this.sourceIsProtoClass = ProtocReflectUtils.isProtoBuilderClass(source);
            if(this.sourceIsProtoClass){
                try {
                    Method method = source.getMethod("build");
                    this.protocMessageBuild = ReflectUtils.getMethodInfo(method);
                } catch (NoSuchMethodException e) {
                    e.printStackTrace();
                }
            }
            this.source = source;
        }

        public void setTarget(Class target) {
            if(!Modifier.isPublic(target.getModifiers())){
                setNamePrefix(target.getName());
            }
            this.targetIsProtoClass = ProtocReflectUtils.isProtoBuilderClass(target);
            this.target = target;
        }

        public void setUseConverter(boolean useConverter) {
            this.useConverter = useConverter;
        }

        @Override
        protected ClassLoader getDefaultClassLoader() {
            return source.getClassLoader();
        }

        @Override
        protected ProtectionDomain getProtectionDomain() {
            return ReflectUtils.getProtectionDomain(source);
        }

        public ProtocBeanCopier create() {
            Object key = KEY_FACTORY.newInstance(source.getName(), target.getName(), useConverter);
            return (ProtocBeanCopier)super.create(key);
        }

        @Override
        public void generateClass(ClassVisitor v) {
            Type sourceType = Type.getType(source);
            Type targetType = Type.getType(target);
            ClassEmitter ce = new ClassEmitter(v);
            ce.begin_class(Constants.V1_8,
                    Constants.ACC_PUBLIC,
                    getClassName(),
                    BEAN_COPIER,
                    null,
                    Constants.SOURCE_FILE);

            EmitUtils.null_constructor(ce);
            CodeEmitter e = ce.begin_method(Constants.ACC_PUBLIC, COPY, null);

            PropertyDescriptor[] getters = sourceIsProtoClass ?
                    ProtocReflectUtils.findProtocPropertyDescriptor(source) : ReflectUtils.getBeanGetters(source);
            PropertyDescriptor[] setters = targetIsProtoClass ?
                    ProtocReflectUtils.findProtocPropertyDescriptor(target) : ReflectUtils.getBeanSetters(target);
            Map<String, MethodInfo> fieldHasValueMethodMap = null;
            if(sourceIsProtoClass){
                fieldHasValueMethodMap = ProtocReflectUtils.getProtoHasValueMethod(source);
            }

            Map names = new HashMap();
            for (int i = 0; i < getters.length; i++) {
                names.put(getters[i].getName(), getters[i]);
            }

            Local targetLocal = e.make_local();
            Local sourceLocal = e.make_local();
            if (useConverter) {
                e.load_arg(1);
                e.checkcast(targetType);
                e.store_local(targetLocal);
                e.load_arg(0);
                e.checkcast(sourceType);
                e.store_local(sourceLocal);
            } else {
                e.load_arg(1);
                e.checkcast(targetType);
                e.load_arg(0);
                e.checkcast(sourceType);
                if(sourceIsProtoClass){
                    // build the source message instance from the source builder object
                    e.invoke(protocMessageBuild);
                }
            }

            for (int i = 0; i < setters.length; i++) {
                PropertyDescriptor setter = setters[i];
                PropertyDescriptor getter = (PropertyDescriptor)names.get(setter.getName());
                if (getter != null) {
                    MethodInfo read = ReflectUtils.getMethodInfo(getter.getReadMethod());
                    MethodInfo write = ReflectUtils.getMethodInfo(setter.getWriteMethod());
                    Class setterParamType = setter.getWriteMethod().getParameterTypes()[0];
                    Class getterReturnType = getter.getReadMethod().getReturnType();
                    if (useConverter) {
                        Type setterType = write.getSignature().getArgumentTypes()[0];
                        e.load_local(targetLocal);
                        e.load_arg(2);
                        e.load_local(sourceLocal);
                        e.invoke(read);
                        e.box(read.getSignature().getReturnType());
                        EmitUtils.load_class(e, setterType);
                        e.push(write.getSignature().getName());
                        e.invoke_interface(CONVERTER, CONVERT);
                        e.unbox_or_zero(setterType);
                        e.invoke(write);
                    } else if (compatible(getter, setter)) {
                        Label label0 = null;
                        if(targetIsProtoClass && getterReturnType.equals(String.class)){
                            label0 = e.make_label();
                            e.dup();
                            e.invoke(read);
                            e.ifnull(label0);
                        }
                        e.dup2();
                        e.invoke(read);
                        e.invoke(write);
                        if(targetIsProtoClass){
                            e.pop();
                            if(getterReturnType.equals(String.class)){
                                e.visitLabel(label0);
                            }
                        }
                    }else if(targetIsProtoClass){
                        e.dup();
                        e.invoke(read);
                        Label label0 = e.make_label();
                        e.ifnull(label0);
                        e.dup2();
                        e.invoke(read);
                        if(PROTOC_WRAPPER_CLASS_MAP.containsKey(setterParamType)){
                            e.unbox_or_zero(TypeUtils.getUnboxedType(TypeUtils.getType(getterReturnType.getName())));
                            e.invoke(ReflectUtils.getMethodInfo(PROTOC_WRAPPER_CLASS_MAP.get(setterParamType)));
                        }
                        e.invoke(write);
                        e.pop();
                        e.visitLabel(label0);
                    }else if(sourceIsProtoClass){
                        assert fieldHasValueMethodMap != null;
                        MethodInfo hasValMethod = fieldHasValueMethodMap.get(setter.getName());
                        Label label0 = null;
                        if(null != hasValMethod){
                            e.dup();
                            e.invoke(hasValMethod);
                            label0 = e.make_label();
                            e.visitJumpInsn(Opcodes.IFEQ, label0);
                        }
                        e.dup2();
                        e.invoke(read);
                        if(PROTOC_WRAPPER_CLASS_GETTER_MAP.containsKey(getterReturnType)) {
                            e.invoke(ReflectUtils.getMethodInfo(PROTOC_WRAPPER_CLASS_GETTER_MAP.get(getterReturnType)));
                            if (!TypeUtils.isPrimitive(TypeUtils.getType(setterParamType.getName()))) {
                                e.box(TypeUtils.getUnboxedType(TypeUtils.getType(setterParamType.getName())));
                            }
                        }
                        e.invoke(write);
                        if(null != hasValMethod){
                            e.visitLabel(label0);
                        }
                    }
                }
            }
            e.return_value();
            e.end_method();
            ce.end_class();
        }

        private final static Map<Class, Method> PROTOC_WRAPPER_CLASS_MAP = new HashMap<>();
        private final static Map<Class, Method> PROTOC_WRAPPER_CLASS_GETTER_MAP = new HashMap<>();
        static {
            try {
                String ofMethod = "of";
                PROTOC_WRAPPER_CLASS_MAP.put(Int32Value.class,
                        Int32Value.class.getMethod(ofMethod, int.class));
                PROTOC_WRAPPER_CLASS_MAP.put(Int64Value.class, Int64Value.class.getMethod(ofMethod, long.class));
                PROTOC_WRAPPER_CLASS_MAP.put(StringValue.class, StringValue.class.getMethod(ofMethod, String.class));
                PROTOC_WRAPPER_CLASS_MAP.put(FloatValue.class, FloatValue.class.getMethod(ofMethod, float.class));
                PROTOC_WRAPPER_CLASS_MAP.put(DoubleValue.class, DoubleValue.class.getMethod(ofMethod, double.class));
                PROTOC_WRAPPER_CLASS_MAP.put(UInt32Value.class, UInt32Value.class.getMethod(ofMethod, int.class));
                PROTOC_WRAPPER_CLASS_MAP.put(UInt64Value.class, UInt64Value.class.getMethod(ofMethod, long.class));
                PROTOC_WRAPPER_CLASS_MAP.put(BoolValue.class, BoolValue.class.getMethod(ofMethod, boolean.class));

                String getValueMethod = "getValue";
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(Int32Value.class, Int32Value.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(Int64Value.class, Int64Value.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(StringValue.class, StringValue.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(FloatValue.class, FloatValue.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(DoubleValue.class, DoubleValue.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(UInt32Value.class, UInt32Value.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(UInt64Value.class, UInt64Value.class.getMethod(getValueMethod));
                PROTOC_WRAPPER_CLASS_GETTER_MAP.put(BoolValue.class, BoolValue.class.getMethod(getValueMethod));
                // BytesValue could be supported the same way:
                // PROTOC_WRAPPER_CLASS_MAP.put(BytesValue.class, BytesValue.class.getMethod(ofMethod, ByteString.class));
            } catch (NoSuchMethodException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private static boolean compatible(PropertyDescriptor getter, PropertyDescriptor setter) {
            // TODO: allow automatic widening conversions?
            return setter.getPropertyType().isAssignableFrom(getter.getPropertyType());
        }

        @Override
        protected Object firstInstance(Class type) {
            return ReflectUtils.newInstance(type);
        }

        @Override
        protected Object nextInstance(Object instance) {
            return instance;
        }
    }
}
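For orientation, a minimal usage sketch follows. UserProto.User and UserDto are hypothetical types invented for this illustration; the copier itself places no constraints on the concrete classes beyond matching property names:

// Hypothetical illustration only: UserProto.User is an imagined protobuf message
// and UserDto an imagined POJO with properties named like the message fields.
UserProto.User.Builder builder = UserProto.User.newBuilder().setName("alice");
UserDto dto = new UserDto();

// Source is a proto Builder class, target a plain bean; no Converter is used.
ProtocBeanCopier copier = ProtocBeanCopier.create(UserProto.User.Builder.class, UserDto.class, false);
copier.copy(builder, dto, null);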
WASHINGTON - A federal appeals court on Tuesday upheld the first-ever regulations aimed at reducing the gases blamed for global warming, handing down perhaps the most significant decision on the issue since a 2007 Supreme Court ruling that greenhouse gases could be controlled as air pollutants. A three-judge panel of the U.S. Court of Appeals in Washington said that the Environmental Protection Agency was "unambiguously correct" in using existing federal law to address global warming, denying two of the challenges to four separate regulations and dismissing the others. Michael Gerrard, director of the Center for Climate Change Law at Columbia University, said no one expected the "complete slam dunk" issued by the court Tuesday, and said the decision was exceeded in importance only by the Supreme Court ruling five years ago. President Barack Obama's administration has come under fierce criticism from Republicans, including Mitt Romney, for pushing ahead with the regulations after Congress failed to pass climate legislation, and after the Bush administration resisted such steps. In 2009, the EPA concluded that greenhouse gases endanger human health and welfare, triggering controls on automobiles and other large sources. But the administration has always said it preferred to address global warming through a new law. Carol Browner, Obama's former energy and climate adviser, said the decision "should put an end, once and for all, to any questions about the EPA's legal authority to protect us from dangerous industrial carbon pollution," adding that it was a "devastating blow" to those who challenge the scientific evidence of climate change. EPA Administrator Lisa Jackson called the ruling a "strong validation" of the approach the agency has taken. The court "found that EPA followed both the science and the law in taking common-sense, reasonable actions to address the very real threat of climate change by limiting greenhouse gas pollution from the largest sources," Jackson said in a statement. At a town hall meeting in New Hampshire last year Romney, the presumptive Republican presidential nominee, said it was a mistake for the EPA to be involved in reducing emissions of carbon dioxide, the chief greenhouse gas. "My view is that the EPA is getting into carbon and regulating carbon has gone beyond the original intent of the legislation, and I would not go there," he said. The court on Tuesday seemed to disagree with Romney's assessment when it denied two challenges to the administration's rules, including one arguing that the agency erred in concluding greenhouse gases endanger human health and welfare. Lawyers for the industry groups and states argued that the EPA should have considered the policy implications of regulating heat-trapping gases along with the science. They also questioned the agency's reliance on a body of scientific evidence that they said included significant uncertainties. The judges - Chief Judge David Sentelle, who was appointed by Republican President Ronald Reagan, and David Tatel and Judith Rogers, both appointed by Democrat Bill Clinton - flatly rejected those arguments. "This is how science works," the unsigned opinion said. "EPA is not required to re-prove the existence of the atom every time it approaches a scientific question." Industry groups vowed to fight on. "Today's ruling is a setback for businesses facing damaging regulations from the EPA," said Jay Timmons, president and CEO of the National Association of Manufacturers. 
"We will be considering all of our legal options when it comes to halting these devastating regulations. The debate to address climate change should take place in the U.S. Congress and should foster economic growth and job creation, not impose additional burdens on businesses." Environmentalists, meanwhile, called it a landmark decision for global warming policy, which has been repeatedly targeted by the Republican-controlled House. "Today's ruling by the court confirms that EPA's common-sense solutions to address climate pollution are firmly anchored in science and law," said Fred Krupp, president of the Environmental Defense Fund. The court also dismissed complaints against two other regulations dealing with pollution from new factories and other industrial facilities. The plaintiffs had argued that the EPA misused the Clean Air Act by only requiring controls on the largest sources, when the law explicitly states that much smaller sources should also be covered. The judges, when presented with these arguments in February, cautioned the industry groups and states to be careful what they wished for. If EPA chose to follow the letter of the law, they said, greenhouse gas regulations would place even more of a burden on industry and other businesses. Lawyers for the various states said that if that were to occur, Congress would pass a law to stop it. Citing a "Schoolhouse Rock" video, the judges in their opinion reminded petitioners that "It's not easy to become a law." They even provided a link to the popular video that explains how bills become laws. "We have serious doubts as to whether ... it is ever 'likely' that Congress will enact legislation at all," they said.
Construction of an Information Service System for the Elderly Based on Cloud Computing. With the advent of an aging society, the elderly account for a considerable proportion of the population. With the development and commercialization of Internet technology and mobile devices, sending and receiving information through handheld terminals, servers, personal computers and other platforms has become a social trend, and mobile information services have emerged. However, owing to their physical condition, the elderly, as a vulnerable group, cannot easily access information or participate fully in social communication. Obtaining mobile information in real time through mobile devices addresses exactly this problem and meets their needs to a large extent. Based on cloud computing and big data, this paper investigates the information services of a number of large communities and, by synthesizing the information needs of 500 elderly people, analyzes and summarizes the problems in community information services for the elderly. On the basis of this analysis, an information service model for the elderly is proposed. The calculation results show that a general information service architecture based on cloud computing can reduce the payment cost of information services by about 31.22%. Introduction The emergence of cloud computing has greatly improved the quality and efficiency of related services, strengthened network capacity, optimized a variety of functions, and substantially changed the mode of information services. With the support of cloud computing, users can make better use of online services, operate effectively on the network side, and reduce operating costs. At the same time, users can obtain more software applications with less expenditure, which enables direct communication between users and developers, reduces operating costs and improves the efficiency of software applications. Building an information service system for the elderly with the help of cloud computing and a mobile-network business operation platform will reveal the direction of technological development in system networks, hardware platforms, system software platforms and other value-added business service technologies, and will demonstrate the practicality and operational stability of the platform when it is put into commercial operation in the future. With componentization as the goal, common modules are designed uniformly and the remaining parts are designed as independent functional modules; the specific needs of the platform for the elderly are then met through flexible combinations of these modules. The functional modules adopt a multi-level structure, which provides a stable framework for the information platform and facilitates the implementation and updating of business processes. The system mainly carries daily consultation and emergency assistance services for the elderly, including call-traffic control, life-information retrieval and management, and information release. At the same time, a positioning platform is built to support remote-positioning services: children can remotely locate the elderly through the web, short messages and multimedia messages.
Demand analysis of information services for the elderly Information services collect, evaluate, select, organize and store scattered information, make it orderly and convenient for users, and study users and their information needs in order to provide them with valuable information. With the acceleration of global aging and the advent of the era of information explosion, the information needs of the elderly involve all areas of life. These needs span the stages of information consumption by the elderly: the acquisition and possession of information, its absorption and processing, and the creation of new information. They comprise requirements on the content and form of information, on information sources, and on the channels and methods by which information is acquired, and they touch on physiology, safety, social interaction, esteem and self-actualization. Many elderly people cannot fully take care of themselves because of declining physical function. At the same time, with the rapid pace of modern life, their children are often away for long periods, giving rise to the phenomenon of "empty-nest families". In order to provide better and more targeted information services to the elderly as a special group, the researchers conducted in-depth field studies of the information needs and information services of the elderly in 10 large communities in urban and rural areas by means of interviews and questionnaires. Through a comprehensive analysis of the questionnaires, face-to-face interviews and interview notes, we identify the problems in the existing information service mode of urban communities and put forward suggestions for addressing them (see Figure 1).
Figure 1. General community information service model
According to the survey data on the information demands of the elderly, their information needs are of many types and varying degrees. At the same time, their access to information is selective: they deliberately seek out the information they really need. As Figure 1 shows, the community information service system comprises two parts, namely the community information service platform and face-to-face consultation. When the services provided through the platform cannot meet the information needs of the elderly, they can go to the community service centre for face-to-face consultation, and they can give feedback and raise needs and suggestions through the Internet and the consultation centre. However, before information services are rolled out in a community, little research is done on the information needs of the elderly, and some content that the elderly need badly is overlooked. The types and sources of the information needs of the elderly are summarized in Table 1 and Table 2 below, which provide an applied basis for the construction of the information system. It can be seen from Table 1 that the urban elderly have a wide range of channels for accessing information and make some use of the network, whereas the elderly in rural areas still mainly obtain information through traditional media such as TV and radio, as well as from other people.
Most of the elderly in cities use mobile phones and TVs and make some use of smart products, while most of the elderly in rural areas rely on TV or village-run organizations. Most urban and rural elderly do not obtain information through the network, although the urban elderly are relatively more receptive to it. It can also be seen that the health awareness of the elderly is weak: the state, government and other institutions should publicize health information through various channels to strengthen it. The elderly have not kept pace with technological change, so their ability to use mobile devices to obtain information needs to be improved. The attention patterns of the cognitive processes of the elderly are shaped by their information needs: in most cases they intentionally and actively seek information according to those needs, or unconsciously shift their attention towards the information they require. In short, their access to information remains largely traditional, without use of new technologies or the Internet environment. In view of the above analysis, this paper uses cloud computing to construct a new information service system for the elderly. Cloud computing, an emerging technology, is developing at a remarkable speed thanks to its computing power, strong storage capacity and very low cost. Its principle is to connect large numbers of personal computers and servers through the network, so that users can perform supercomputing through network services and obtain very large storage capacity. Greater computing power can be obtained by making full use of existing resources through cloud computing. In the market-oriented design of cloud computing, the diverse needs of users must be taken into account, so that cloud computing can provide universal and efficient services for all, or most, user groups. General model of information services The chief feature of the general model of information services based on cloud computing is that its services are provided dynamically through virtual machines. According to the service load, the cloud system can dynamically increase or decrease the number of virtual machines providing services, so as to achieve a dynamic balance between internal supply and external load, optimize resource allocation and thereby reduce energy consumption. The architecture can effectively support resource allocation and service provision for multiple applications. The general model divides mainly into a knowledge-service mode and a user-experience information-service mode. As the allocation of Internet resources changes, the concept of the cloud computing centre is gradually taking shape. A cloud computing centre has the following characteristics: elastic scalability, dynamic allocation of resources, weakened dependence between resources at different levels, self-fault-tolerance and self-management. Cloud computing can provide personalized services according to users' actual usage time and needs, making service items increasingly refined and thus promoting their diversification. On-demand use makes cloud services more finely grained, and the self-service provided by cloud-supported networks turns routine human activities into self-service interaction through the network.
Low-cost clients make the information services provided by cloud computing more widely available, so that more people can enjoy high-performance computing applications. These characteristics of cloud services will expand the breadth of information services, deepen their depth and improve their penetration, thus speeding up the informatization of the whole society. Algorithm Analysis In establishing the cloud model, the communication and data-exchange mechanisms between components must be realized and the functional and service interfaces of each component connected, finally yielding a web data-mining system based on cloud computing. Suppose there is a training set A with n records belonging to m different classes; the Gini coefficient of this training set is then defined as
Gini(A) = 1 − Σ_{j=1}^{m} p_j²,   (1)
where p_j is the proportion of the records in A that belong to class j (a short code sketch of this computation appears at the end of this section). Cloud computing is used to store and manage data resources. Taking data as the centre, computing tasks are scheduled to run at the data-storage nodes, forming a model that runs from the web to the users. Users can access the various data-processing services of the cloud computing system through the Internet. Virtualization technology separates software from hardware: users simply run their software in the virtual layer without considering how the underlying hardware realizes it, and resources can be migrated between servers, so that when one server is overloaded its load moves to another. When determining the optimal partition point of a discrete attribute, repeated calculations and irrelevant operations should be eliminated. If the discrete attribute takes t distinct values, the number of Gini-coefficient evaluations required is determined by the number of distinct binary partitions of those values, at most 2^{t−1} − 1, with the counts for even and odd t treated separately. The calculation results show that the general information service architecture based on cloud computing can reduce the payment cost of information services by about 31.22%. Although costs can also be reduced by shortening the running time of locally deployed physical servers, shutting down and starting up large numbers of servers wastes time; more troublesomely, once a server is shut down the current running state of its applications is difficult to save and restore, so an operating mode based on a cloud computing platform removes many management and maintenance difficulties. Comparative analysis The "operating systems" of different service platforms may differ greatly. Socialized, intensive and professional cloud computing centres need corresponding "operating systems" to abstract away the multiple physical servers, but cloud computing does not require a network operating system serving thousands of individual machines. In view of the characteristics of the current information-service market and the wide range of potential cloud computing users, a general information service architecture based on cloud computing is proposed. On the one hand, this architecture exploits the performance and cost advantages of cloud computing; on the other, it achieves excellent adaptability to a variety of user groups through virtualization technology. With cloud computing, massive data storage is no longer the bottleneck of the system, greatly improving the speed and accuracy of data management and use.
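To make the Gini computation of equation (1) concrete, the following minimal Java sketch (our own illustration, not part of the surveyed system; the class name is invented) derives the coefficient from per-class record counts:

import java.util.Map;

final class GiniIndex {
    /** Gini(A) = 1 - sum_j p_j^2, where p_j is the share of records in class j. */
    static double of(Map<String, Long> classCounts) {
        long n = classCounts.values().stream().mapToLong(Long::longValue).sum();
        double sumOfSquares = 0.0;
        for (long count : classCounts.values()) {
            double p = (double) count / n;
            sumOfSquares += p * p;
        }
        return 1.0 - sumOfSquares;
    }
}

For example, GiniIndex.of(Map.of("urban", 300L, "rural", 200L)) returns 1 − (0.6² + 0.4²) = 0.48.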
Porting traditional data-mining algorithms onto a distributed algorithm architecture effectively improves their efficiency, shortens the time required for data mining and allows user requests to be answered quickly (see Figure 2).
Figure 2. Comparison of traditional data mining and cloud-model data mining
The hardware of the information service platform for the elderly should take full advantage of existing network resources while remaining economical, so the hardware configuration must be estimated scientifically according to business needs and expected growth. To avoid a single point of failure, two load balancers are configured and a web access portal is provided. According to laboratory tests, a single board supports 100 concurrent requests, which meets the business requirements; two ATAE boards are configured in active/standby mode. Following user-access trends, the active resources in the virtualization cluster are adjusted in advance to meet concurrent-access requirements. Cloud computing pools existing computer resources and creates a new mode of computing that effectively manages and controls parallel working nodes, intelligently assigning tasks to appropriate hosts (see Fig. 3). Since the traditional management mode requires large amounts of data to be transmitted over the network, the simplified data are submitted to the server by form submission over the protocol, then encapsulated by the server and forwarded to the virtual cluster, reducing the volume of network data transmission. A working interface similar to familiar application software and compatible with traditional user habits is designed. The protocol is used to collect physical-machine information, monitor the operation of the cluster and perform real-time resource scheduling. The subsystems of a cloud system share resources, but some subsystems bear more responsibility than others; compensating them can resolve this unfairness in cloud information services. Through improvements to the relevant technologies, a compensation system should be actively built to promote the harmonious development of all aspects of cloud-based information services. Cloud computing can indeed give users supercomputing services at low cost, but a hacker who breaks into the "cloud" can obtain information from any computer in it, so the cost of attack is reduced: once such "clouds" are used to break passwords or mount attacks, user data security is gravely endangered. These problems, however, are not technical defects of cloud computing itself, and the development of cloud computing will continue to be vigorously promoted. In the future application of public clouds, attention should be paid to strengthening network security management and to formulating systematic, complete secure-transmission protocols, so as to ensure the integrity of data transmitted over the Internet. Conclusions To ensure the smooth progress of information services for the elderly, certain mechanisms must be followed.
The market mechanism in the urban information-service guarantee system for the elderly can effectively mobilize the enthusiasm of all participants, make up for shortfalls in government information services for the elderly, and keep the supply of and demand for information resources in balance. The provision of multi-level services improves the degree to which information resources are optimally allocated and utilized. The group mechanism promotes the imitation of information behaviour among the elderly and the improvement of their information literacy through interpersonal information exchange and sharing within families and informal organizations. The emergence of cloud computing has changed the mode of information services for the elderly: its outstanding computing power, storage capacity and low cost of use have rapidly ushered in a "cloud era". Cloud computing integrates existing software and hardware resources and allocates them rationally; this is a breakthrough that changes the mode of information service. Cloud computing is not a new weapon for solving the problem of information security; its most important technical advances are the networked transformation of storage, computation and interaction, together with the idea of software as a service. As mobile network technology evolves, the coverage of 5G networks will continue to deepen. The information service platform for the elderly will gradually introduce new functions as data-service bandwidth and downlink rates improve, bringing real-time, smooth video-assistance displays and activity-instruction information into the system. The new generation of cloud-based mobile networks will further improve the system's positioning accuracy and real-time information display.
/** * Uncheck a function exception. * <p> * uncheck(Class::forName, "xxx"); * * @param function The {@link Function_WithExceptions}. * @param t The function argument. * @param <T> The type of the function argument. * @param <R> The return type of the function. * @param <E> The type of the exception. * @return The return value of the function. */ public static <T, R, E extends Exception> R uncheck(Function_WithExceptions<T, R, E> function, T t) { try { return function.apply(t); } catch (Exception exception) { throwAsUnchecked(exception); return null; } }
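The snippet relies on two members that are not shown. The definitions below are one plausible shape for them (the well-known "sneaky throws" idiom); they are an assumption, not the original code, and throwAsUnchecked would live in the same utility class as uncheck:

/** A Function whose apply may throw a checked exception. */
@FunctionalInterface
public interface Function_WithExceptions<T, R, E extends Exception> {
    R apply(T t) throws E;
}

/** Rethrows a checked exception as if it were unchecked, exploiting generic type erasure. */
@SuppressWarnings("unchecked")
private static <E extends Throwable> void throwAsUnchecked(Exception exception) throws E {
    throw (E) exception;
}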
/** * MIT License * * Copyright (c) 2017-2021 Julb * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in all * copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ package me.julb.applications.authorizationserver.entities.session; import javax.validation.constraints.Min; import javax.validation.constraints.NotBlank; import javax.validation.constraints.NotNull; import javax.validation.constraints.Size; import org.springframework.data.annotation.Id; import org.springframework.data.mongodb.core.mapping.DBRef; import org.springframework.data.mongodb.core.mapping.Document; import lombok.EqualsAndHashCode; import lombok.Getter; import lombok.NoArgsConstructor; import lombok.Setter; import lombok.ToString; import me.julb.applications.authorizationserver.entities.UserEntity; import me.julb.library.persistence.mongodb.entities.AbstractAuditedEntity; import me.julb.library.utility.interfaces.IIdentifiable; import me.julb.library.utility.validator.constraints.DateTimeISO8601; import me.julb.library.utility.validator.constraints.IPAddress; import me.julb.library.utility.validator.constraints.Identifier; import me.julb.library.utility.validator.constraints.Trademark; /** * The user session entity. * <br> * @author Julb. */ @Getter @Setter @ToString @EqualsAndHashCode(callSuper = false, of = "id") @NoArgsConstructor @Document("users-sessions") public class UserSessionEntity extends AbstractAuditedEntity implements IIdentifiable { //@formatter:off /** * The id attribute. * -- GETTER -- * Getter for {@link #id} property. * @return the value. * -- SETTER -- * Setter for {@link #id} property. * @param id the value to set. */ //@formatter:on @Id @Identifier private String id; //@formatter:off /** * The tm attribute. * -- GETTER -- * Getter for {@link #tm} property. * @return the value. * -- SETTER -- * Setter for {@link #tm} property. * @param tm the value to set. */ //@formatter:on @NotNull @NotBlank @Trademark private String tm; //@formatter:off /** * The user attribute. * -- GETTER -- * Getter for {@link #user} property. * @return the value. * -- SETTER -- * Setter for {@link #user} property. * @param user the value to set. */ //@formatter:on @NotNull @DBRef private UserEntity user; //@formatter:off /** * The expiryDateTime attribute. * -- GETTER -- * Getter for {@link #expiryDateTime} property. * @return the value. * -- SETTER -- * Setter for {@link #expiryDateTime} property. * @param expiryDateTime the value to set. */ //@formatter:on @DateTimeISO8601 private String expiryDateTime; //@formatter:off /** * The durationInSeconds attribute. 
* -- GETTER -- * Getter for {@link #durationInSeconds} property. * @return the value. * -- SETTER -- * Setter for {@link #durationInSeconds} property. * @param durationInSeconds the value to set. */ //@formatter:on @NotNull @Min(1) private Long durationInSeconds; //@formatter:off /** * The mfaVerified attribute. * -- GETTER -- * Getter for {@link #mfaVerified} property. * @return the value. * -- SETTER -- * Setter for {@link #mfaVerified} property. * @param mfaVerified the value to set. */ //@formatter:on @NotNull private Boolean mfaVerified; //@formatter:off /** * The securedIdToken attribute. * -- GETTER -- * Getter for {@link #securedIdToken} property. * @return the value. * -- SETTER -- * Setter for {@link #securedIdToken} property. * @param securedIdToken the value to set. */ //@formatter:on @NotNull @NotBlank @Size(max = 128) private String securedIdToken; //@formatter:off /** * The ipv4Address attribute. * -- GETTER -- * Getter for {@link #ipv4Address} property. * @return the value. * -- SETTER -- * Setter for {@link #ipv4Address} property. * @param ipv4Address the value to set. */ //@formatter:on @IPAddress private String ipv4Address; //@formatter:off /** * The lastUseDateTime attribute. * -- GETTER -- * Getter for {@link #lastUseDateTime} property. * @return the value. * -- SETTER -- * Setter for {@link #lastUseDateTime} property. * @param lastUseDateTime the value to set. */ //@formatter:on @NotNull @NotBlank @DateTimeISO8601 private String lastUseDateTime; //@formatter:off /** * The operatingSystem attribute. * -- GETTER -- * Getter for {@link #operatingSystem} property. * @return the value. * -- SETTER -- * Setter for {@link #operatingSystem} property. * @param operatingSystem the value to set. */ //@formatter:on @Size(max = 32) private String operatingSystem; //@formatter:off /** * The browser attribute. * -- GETTER -- * Getter for {@link #browser} property. * @return the value. * -- SETTER -- * Setter for {@link #browser} property. * @param browser the value to set. */ //@formatter:on @Size(max = 32) private String browser; }
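To make the declared constraints concrete, here is a minimal sketch of populating and validating the entity. It assumes a Bean Validation provider (such as Hibernate Validator) on the classpath; the field values are illustrative only:

import javax.validation.Validation;
import javax.validation.Validator;

UserSessionEntity session = new UserSessionEntity();
session.setDurationInSeconds(3600L);                 // must be >= 1
session.setMfaVerified(Boolean.FALSE);               // required
session.setSecuredIdToken("hashed-token");           // required, max 128 chars
session.setIpv4Address("192.0.2.1");                 // checked by @IPAddress
session.setLastUseDateTime("2021-01-01T00:00:00Z");  // ISO-8601 date-time

Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
boolean valid = validator.validate(session).isEmpty(); // false here: tm and user are still unset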
import { MiddlewareConsumer, Module, NestModule } from '@nestjs/common';
import { NextFunction } from 'express';
import { authenticate } from 'passport';
import { googleStrategyConfig, GOOGLE_STRATEGY_CONFIG } from '~/config/google';
import { MicroservicesModule } from '~/microservices/microservices.module';
import { AuthModule } from '../auth/auth.module';
import { AuthType } from '../auth/enums/auth-type.enum';
import { UriModule } from '../uri/uri.module';
import { GoogleController } from './google.controller';
import { GoogleService } from './google.service';
import { GoogleStrategy } from './google.strategy';

// Exposes the Google OAuth strategy configuration under an injection token.
const googleStrategyConfigFactory = { provide: GOOGLE_STRATEGY_CONFIG, useValue: googleStrategyConfig };

@Module({
  controllers: [GoogleController],
  imports: [AuthModule, UriModule, MicroservicesModule],
  providers: [GoogleService, GoogleStrategy, googleStrategyConfigFactory]
})
export class GoogleModule implements NestModule {
  public configure(consumer: MiddlewareConsumer) {
    const googleOptions = {
      scope: googleStrategyConfig.scope,
      session: false,
      state: null
    };

    consumer
      .apply((req: any, res: any, next: NextFunction) => {
        // Carry the incoming query string through the OAuth round-trip via the
        // `state` parameter. Note that `googleOptions` is shared across requests,
        // so concurrent logins may overwrite each other's state.
        googleOptions.state = JSON.stringify(req.query);
        next();
      }, authenticate(AuthType.GOOGLE, googleOptions))
      .forRoutes(GoogleController);
  }
}
// src/styles/theme.ts
import styled from "styled-components";

export const ThemeLight = {
  body: "var(--background)",
  textTitle: "var(--text-title)",
  shape: "var(--shape)",
  bgBtnHeader: "var(--blue-light)",
}

export const ThemeDark = {
  body: "var(--darkBackground)",
  textTitle: "var(--darkTitle)",
  shape: "var(--darkShape)",
  bgBtnHeader: "var(--darkBtnHeader)",
}

export const BtnTheme = styled.button`
  position: absolute;
  top: 4vh;
  right: 20px;
  background-color: transparent;
  border: none;
  cursor: pointer;
`;

export const BackgroundBody = styled.div`
  background-color: ${({ theme }) => theme.body};
  height: 100vh;
`;
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */
/*
 * This code was generated by https://github.com/googleapis/google-api-java-client-services/
 * Modify at your own risk.
 */

package com.google.api.services.dataproc.v1beta2.model;

/**
 * Specifies the cluster auto-delete schedule configuration.
 *
 * <p> This is the Java data model class that specifies how to parse/serialize into the JSON that is
 * transmitted over HTTP when working with the Cloud Dataproc API. For a detailed explanation see:
 * <a href="https://developers.google.com/api-client-library/java/google-http-java-client/json">https://developers.google.com/api-client-library/java/google-http-java-client/json</a>
 * </p>
 *
 * @author Google, Inc.
 */
@SuppressWarnings("javadoc")
public final class LifecycleConfig extends com.google.api.client.json.GenericJson {

  /**
   * Optional. The time when cluster will be auto-deleted.
   * The value may be {@code null}.
   */
  @com.google.api.client.util.Key
  private String autoDeleteTime;

  /**
   * Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this
   * period. Valid range: 10m, 14d. Example: "1d", to delete the cluster 1 day after its creation.
   * The value may be {@code null}.
   */
  @com.google.api.client.util.Key
  private String autoDeleteTtl;

  /**
   * Optional. The duration to keep the cluster alive while idling. Passing this threshold will
   * cause the cluster to be deleted. Valid range: 10m, 14d. Example: "10m", the minimum value, to
   * delete the cluster when it has had no jobs running for 10 minutes.
   * The value may be {@code null}.
   */
  @com.google.api.client.util.Key
  private String idleDeleteTtl;

  /**
   * Optional. The time when cluster will be auto-deleted.
   * @return value or {@code null} for none
   */
  public String getAutoDeleteTime() {
    return autoDeleteTime;
  }

  /**
   * Optional. The time when cluster will be auto-deleted.
   * @param autoDeleteTime autoDeleteTime or {@code null} for none
   */
  public LifecycleConfig setAutoDeleteTime(String autoDeleteTime) {
    this.autoDeleteTime = autoDeleteTime;
    return this;
  }

  /**
   * Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this
   * period. Valid range: 10m, 14d. Example: "1d", to delete the cluster 1 day after its creation.
   * @return value or {@code null} for none
   */
  public String getAutoDeleteTtl() {
    return autoDeleteTtl;
  }

  /**
   * Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this
   * period. Valid range: 10m, 14d. Example: "1d", to delete the cluster 1 day after its creation.
   * @param autoDeleteTtl autoDeleteTtl or {@code null} for none
   */
  public LifecycleConfig setAutoDeleteTtl(String autoDeleteTtl) {
    this.autoDeleteTtl = autoDeleteTtl;
    return this;
  }

  /**
   * Optional. The duration to keep the cluster alive while idling. Passing this threshold will
   * cause the cluster to be deleted. Valid range: 10m, 14d. Example: "10m", the minimum value, to
   * delete the cluster when it has had no jobs running for 10 minutes.
   * @return value or {@code null} for none
   */
  public String getIdleDeleteTtl() {
    return idleDeleteTtl;
  }

  /**
   * Optional. The duration to keep the cluster alive while idling. Passing this threshold will
   * cause the cluster to be deleted. Valid range: 10m, 14d. Example: "10m", the minimum value, to
   * delete the cluster when it has had no jobs running for 10 minutes.
   * @param idleDeleteTtl idleDeleteTtl or {@code null} for none
   */
  public LifecycleConfig setIdleDeleteTtl(String idleDeleteTtl) {
    this.idleDeleteTtl = idleDeleteTtl;
    return this;
  }

  @Override
  public LifecycleConfig set(String fieldName, Object value) {
    return (LifecycleConfig) super.set(fieldName, value);
  }

  @Override
  public LifecycleConfig clone() {
    return (LifecycleConfig) super.clone();
  }
}
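Because each setter returns the config itself, calls chain naturally. A small sketch follows; the duration strings mirror the field documentation's "10m"/"1d" examples, though the exact encoding accepted by the API is not specified here:

// Delete the cluster after 10 minutes of idleness, or unconditionally
// one day after creation, whichever comes first.
LifecycleConfig lifecycle = new LifecycleConfig()
    .setIdleDeleteTtl("10m")
    .setAutoDeleteTtl("1d");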
Texas Attorney General and Republican gubernatorial candidate Greg Abbott argued in the state’s Friday brief defending the gay marriage ban that it does not matter how the ban actually impacts children. Abbott wrote that as long as it’s rational to believe that a law will further the state’s interests, it does not matter if it actually does. He said that the same-sex marriage ban furthers the state’s interest in encouraging couples of the opposite sex to have children. “It is rational to believe that opposite-sex marriages will generate new offspring to a greater extent than same-sex marriages will,” he wrote. He said the state also wants to keep individuals from having kids out of wedlock, which he also believes a gay marriage ban will further. As ThinkProgress notes, Abbott then argues that it does not matter how the law actually impacts children in the state. “It does not matter under rational-basis review whether same-sex marriage will produce societal benefits,” he wrote in response to the argument that the children of gay couples would benefit from same-sex marriage. He claims that it only matters if the law furthers the specified state interests of encouraging people to produce offspring and preventing people from having children out of wedlock.
In situ characterization of thermally cleaned surfaces for preparing superior transmission-mode GaAs photocathodes. Because it is impractical to use in situ surface diagnostics to determine the surface cleanliness of transmission-mode GaAs photocathodes during vacuum-device manufacturing, the thermal desorption technique, monitored with a quadrupole mass spectrometer during the thermal cleaning process, is employed to characterize the thermally cleaned surface in situ. The desorption behaviour of various impurity gases during thermal cleaning is analyzed. The experimental results show that the amount of desorbed impurity gas varies with the heat-treatment temperature. Verification by Cs/O activation and quantum-efficiency measurements shows that the desorption behaviour of specific impurity gases, notably AsH3 and As2, is crucial to the surface cleanliness of transmission-mode GaAs photocathodes and relates to their final photoemission capability. This simple and reliable criterion provides an effective way to guide the thermal cleaning of transmission-mode GaAs photocathodes, and the desorption behaviour assists the in situ evaluation of surface cleanliness.
/*******************************************************************************
 * Copyright 2018, Tinvention - Ingegneria Informatica
 * This project includes software developed by Tinvention - Ingegneria Informatica.
 * http://tinvention.net/
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 *******************************************************************************/
package net.tinvention.training.web.servlet4;

import java.io.IOException;
import java.text.MessageFormat;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@SuppressWarnings("serial")
public class HelloWorldServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Generate the page contents from a template with indexed placeholders.
        String stringPageTemplate = " <!DOCTYPE HTML>"
                + " <html>"
                + " <head><title>Servlet 4.0 Demo</title></head>"
                + " <body>"
                + " <h1>This is a Servlet 4.0 Example</h1>"
                + " <br/>"
                + " <p>{0} {3}</p>"
                + " <br/>"
                + " <p>{1} {2} {3}</p>"
                + " </body>"
                + " </html>";
        // MessageFormat.format accepts the placeholder values as varargs.
        String outputBodycontent = MessageFormat.format(stringPageTemplate, "It works", "Hello", "World", "!!");

        // Send the contents to the browser.
        response.setContentType("text/html");
        response.setCharacterEncoding("UTF-8");
        response.getWriter().write(outputBodycontent);
        // Do not close the writer here; its lifecycle is managed by the application server.
    }
}
// indexCommand finds the index of the first non-global-option argument in a Git // argument list or len(args) if no such argument could be found. func indexCommand(args []string) int { scanArgs: for i := 0; i < len(args); i++ { a := args[i] if !strings.HasPrefix(a, "-") { return i } if !strings.HasPrefix(a, "--") { if strings.IndexByte(globalShortOptionsWithArgs, a[len(a)-1]) != -1 { i++ } continue scanArgs } for _, opt := range globalLongOptionsWithArgs { if a[2:] == opt { i++ continue scanArgs } } } return len(args) }
Children in Need 2016: Where can I buy merchandise? Children in Need is back for another year as people across the UK join forces with Pudsey bear to raise money for disadvantaged children. The annual fundraising and TV event will be jointly hosted by Graham Norton, Tess Daly, Greg James, Ade Adepitan and Rochelle and Marvin Humes on Friday, November 18. The show will air from 7.30pm until 10pm on BBC One, featuring the cast of Eastenders starring in a dance medley, and an exclusive look at the Doctor Who Christmas special. This year the people of Cambridgeshire are encouraged to support the fundraising drive by getting their hands on the new 2016 appeal t-shirt. Created by fashion designer and long-time BBC Children in Need advocate, Giles Deacon, the top features the charity mascot, Pudsey bear. The t-shirt, sporting a geometric design, is just one of the goodies you can get your hands on to help raise money for the charity. We've put together a list of official merchandise available for you to buy to get you into the charitable spirit. If you want to support the efforts of Children in Need, there are many places where you can buy Pudsey bear themed merchandise. The products are available online and in some major retailers. The BBC’s online Children in Need shop sells a host of Pudsey themed merchandise, such as a knitted Pudsey bear, Pudsey ear muffs and even light-up Pudsey ears. They’re also selling T-shirts with a revamped Pudsey bear logo designed by Giles Deacon. For the gadget inclined, there are also Pudsey VR viewers on sale for £4, allowing you to use your smartphone to experience virtual reality by visiting the BBC Children in Need YouTube channel. You can visit the BBC Children in Need Shop here. To donate £5 text TEAM to 70405. To donate £10 text TEAM to 70410. George at Asda is also selling Children in Need products online and in-store so you can pick something up while you’re shopping for groceries. At Asda you can buy a Pudsey hooded onesie for £12, or a pyjama set for £9. The Pudsey backpack costs £5. You can look at the full collection on the store's website. Lakeland are also selling a Pudsey-themed range, particularly focusing on baking utensils such as cupcake cases, cookie cutters, icing sets and bunting. The shop even has Pudsey-shaped pasta! You can host a bake-off party to raise money for the cause, while enjoying some Pudsey-shaped cookies. You can see the full range here. At Boots, you can buy a ‘Do Your Thing’ Pudsey Bear which has a little white t-shirt on that you can design with fabric pens. You can see the Boots range here. Children in Need has teamed up with the Post Office to provide small parcel packages with a Pudsey bear on the front. They cost £2.91 and you can buy them here.
Histopathological analysis of abnormal electrocardiographic findings in hypertrophic cardiomyopathy--with biopsied ventricular myocardium. To clarify the relationship between electrocardiographic abnormalities and the histopathology of hypertrophic cardiomyopathy (HCM), 43 cases of HCM were studied electrocardiographically, observing 1) abnormal Q waves, 2) QRS axis and voltages, 3) the depth of negative P waves in V1, 4) giant negative T waves and 5) the degree of ST-segment depression. Each biopsied specimen was examined microscopically for 1) hypertrophy, 2) disarray and 3) fibrosis. Hypertrophy was graded from (-) to (3+) by the mean of the shortest transverse diameters of at least 30 myocytes under hematoxylin-eosin staining. Disarray was graded from (-) to (3+) according to the degree of disarrangement of the myocytes under phosphotungstic acid-hematoxylin staining. Fibrosis was graded from (-) to (3+) by the %-area counting method under Mallory-azan staining. Multiple abnormal Q waves were associated with fibrosis in the presence of hypertrophy (p<0.01). HCM with SV1+RV5-6 < 35 mm showed significantly more advanced fibrosis (p<0.001) and disarray (p<0.01) than HCM with SV1+RV5-6 ≥ 35 mm. The depth of the negative P wave in V1 reflected the degree of hypertrophy, disarray and fibrosis in HCM. HCM with giant negative T waves and ST-segment depression (≥ 2 mm) had significantly milder fibrosis (p<0.01) and milder disarray (p<0.05) in the presence of hypertrophy.
Models of our Galaxy II
Stars near the Sun oscillate both horizontally and vertically. In Paper I the coupling between these motions was modelled by determining the horizontal motion without reference to the vertical motion, and recovering the coupling by assuming that the vertical action is adiabatically conserved as the star oscillates horizontally. Here we show that, although the assumption of adiabatic invariance works well, more accurate results can be obtained by taking the vertical action into account when calculating the horizontal motion. We use orbital tori to present a simple but fairly realistic model of the Galaxy's discs in which the motion of stars is handled rigorously, without decomposing it into horizontal and vertical components. We examine the ability of the adiabatic approximation to calculate the model's observables, and find that it performs perfectly in the plane, but errs slightly away from the plane. When the new correction to the adiabatic approximation is used, the density, streaming velocity and velocity dispersions are in error by less than 10 per cent for distances up to 2.5 kpc from the Sun. The torus-based model reveals that at locations above the plane the long axis of the velocity ellipsoid points almost to the Galactic centre, even though the model potential is significantly flattened. This result contradicts the widespread belief that the shape of the Galaxy's potential can be strongly constrained by the orientation of the velocity ellipsoid near the Sun. An analysis of orbits reveals that in a general potential the orientation of the velocity ellipsoid depends on the structure of the model's distribution function as much as on its gravitational potential, contrary to what is the case for Staeckel potentials. We argue that the adiabatic approximation will provide a valuable complement to torus-based models in the interpretation of current surveys of the Galaxy.
1 INTRODUCTION
Study of the structure of the Milky Way Galaxy is a major theme of contemporary astronomy. Our Galaxy is typical of the galaxies that dominate the current cosmic star-formation rate, so understanding it is of more than parochial interest. We believe that most of its mass is contributed by elementary particles that have yet to be directly detected, but we have only weak constraints on the spatial density and kinematics of these particles -- we urgently need stronger constraints on them. The Cold-Dark-Matter (CDM) cosmogony provides a very persuasive picture of how a galaxy such as ours formed, and we need to know how accurately this theory predicts the structure of the Galaxy. In view of these considerations, large resources have been invested over the last decade in massive surveys of the stellar content of the Galaxy. The rate at which data from this observational effort becomes available will increase at least through 2020. Models of the Galaxy will surely play a key role in extracting science from the data, because the Galaxy is a very complex object and every survey is subject to powerful observational biases. Consequently it is extremely hard to proceed in a model-independent way from observational data to physical understanding. We are likely to achieve the desired physical understanding by comparing observational data with the predictions of models. It is useful to distinguish between kinematic and dynamic models.
A kinematic model specifies the spatial density of stars and their kinematics at each point without asking whether a gravitational potential exists in which the given density distribution and kinematics constitute a steady state. Bahcall & Soneira (1980) pioneered kinematic models, and recent versions include Galaxia (Sharma et al. 2011). The science of constructing dynamical models is still in its infancy. The Besançon model (Robin et al. 2003) has a dynamical element in that the disc's density profile perpendicular to the plane is dynamically consistent with the corresponding run of the dispersion of velocities perpendicular to the plane. Schönrich & Binney (2009) and Binney (2010, hereafter Paper I) offer models that are more thoroughly dynamical. These models adopt a plausible model of the Galaxy's gravitational potential Φ(R, z), in which there are substantial contributions to the local acceleration from a disc, a bulge and a dark halo, all assumed to be axisymmetric. They assume that motion parallel to the plane is to some degree decoupled from motion perpendicular to the plane. Specifically, the vertical motion is governed by the time-dependent potential
Φ_z(z, t) = Φ(R(t), z) − Φ(R(t), 0),   (1)
where R(t) is the radius at time t that one obtains by assuming that the radial motion is governed by the one-dimensional effective potential
Φ_eff(R) = Φ(R, 0) + L_z²/(2R²).   (2)
Paper I assumed that the time-dependence of the potential is slow enough that the action Jz of vertical motion is constant. It justified this assumption by referring to Figure 3.34 in Binney & Tremaine (2008, hereafter BT08), which shows that the boundaries of one particular orbit are fairly well recovered by the adiabatic approximation (hereafter aa). In this paper we explore the validity of the aa much more extensively. Our other goal is to present a model of the Galactic disc that is not reliant on the aa. This model dispenses with the assumption that the R and z motions are decoupled by using numerically synthesised orbital tori. The paper is organised as follows. In Section 2 the validity of the aa is tested on typical orbits. In Section 3 we explain the general principles of torus modelling and why we believe this technique will prove a valuable tool for the interpretation of observational data. We define the torus-based model of the Galactic discs that we will use to test the accuracy of observables obtained from the aa, and we summarise the methods used to extract observables when the model is based on (i) tori, and (ii) the adiabatic approximation. In Section 4 we compare the model's observables with estimates of them obtained from the aa. In Section 5 we examine the tilt of the velocity ellipsoid near the Sun, which cannot be computed from the aa, and show that its long axis points towards the Galactic centre even though the potential is significantly flattened. Section 6 sums up and looks to the future. An appendix explains how some important Jacobians are calculated for a torus model.
2 VALIDITY OF THE ADIABATIC APPROXIMATION
Each panel of Fig. 1 shows an orbit in the meridional plane in the gravitational potential of a Miyamoto-Nagai Galaxy model (Miyamoto & Nagai 1975) with scale-length ratio b/a = 0.2:
Φ(R, z) = −GM / √( R² + [a + √(z² + b²)]² ).   (3)
Figure 2.7 of BT08 shows that this model has a prominent disc. From the Fourier transforms of the time series R(t), z(t) on these orbits (Binney & Spergel 1984) we find that in units of √(GMa) their actions are (Jr, Jz) = (0.109, 0.067) and (0.127, 0.022), respectively.
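As a numerical illustration of equations (1)-(3), the following sketch (our own, not the authors' code, with units chosen so that G = M = a = 1) evaluates the Miyamoto-Nagai potential, the effective radial potential, and the instantaneous vertical potential:

final class MiyamotoNagai {
    static final double A = 1.0, B = 0.2; // scale lengths, so b/a = 0.2

    /** Equation (3) with G = M = 1. */
    static double phi(double R, double z) {
        double s = A + Math.sqrt(z * z + B * B);
        return -1.0 / Math.sqrt(R * R + s * s);
    }

    /** Equation (2): effective potential governing the radial motion. */
    static double phiEff(double R, double Lz) {
        return phi(R, 0.0) + Lz * Lz / (2.0 * R * R);
    }

    /** Equation (1): vertical potential at the instantaneous radius R(t). */
    static double phiZ(double z, double Rt) {
        return phi(Rt, z) - phi(Rt, 0.0);
    }
}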
One can also estimate the vertical actions Jz of these orbits by dropping particles from points on their upper edges, and following their motion in the one-dimensional potential of equation (1) with R frozen at its current value. The numbers at the top of each panel show the values obtained for Jz in the same units when particles are dropped from the top left and top right corners of the orbit; the values are displaced from the true value by ≲ 7%. The corresponding vertical energies Ez are also shown at the top of each panel; they differ by more than a factor 2. Thus the aa does provide a fairly good guide to how the vertical motion is influenced by the radial motion. The nearly straight full lines in Fig. 1 show the outlines of the approximate orbits that we obtain from the aa by setting Jr to its true value and Jz to the average of the values given at the top of the panel. The shape of each approximate orbit is reasonable, although the left and right edges are straight rather than curved, but the orbit is clearly displaced to smaller radii with respect to the true orbit. This difference reflects the fact that the vertical motion contributes to the centrifugal potential alongside the azimuthal motion. By assuming that the radial motion is governed by the effective potential in which Lz occurs rather than the total angular momentum L, we have under-estimated the centrifugal potential. Consequently, we predict that the orbit lives at smaller radii than it really does. In a spherical potential, the total angular momentum is related to Lz and Jz by L = |Lz| + Jz (e.g. BT08 §3.5.2) and the radial motion is governed by an effective potential in which the centrifugal component is L²/2r², where r² = R² + z². Consequently, an obvious strategy for improving the predictions of the aa is to replace Lz in the effective potential by
L̃z ≡ |Lz| + γ Jz.   (4)
The dashed lines in Fig. 1 show the effect of replacing Lz by equation (4) with γ = 1. Both orbits are now quite closely modelled. If we calculate the radial action of a given phase-space point (x, v) using Lz rather than L̃z in the effective potential, the value we obtain is too large when (x, v) lies near apocentre, because the star moves in an effective potential that has its minimum at a radius that is too small. Conversely, when (x, v) lies near pericentre, our estimate of Jr is too small if we use Lz. Since the df decreases with increasing Jr, the use of the less accurate effective potential leads to phase-space points near pericentre being over-weighted relative to points near apocentre, and this in consequence shifts the predicted distribution in v to large values. Hence, replacing Lz in the effective potential with L̃z for suitably chosen γ can usefully improve the accuracy of results obtained with the aa. The points in Fig. 2 show the consequents of the upper orbit of Fig. 1 in three surfaces of section that are obtained by noting z and pz when the star crosses the line R = constant in the meridional plane with pR > 0. The curves in each panel show the dependence of pz on z along the one-dimensional orbit in the potential Φ(R, z) with R fixed at the appropriate value when the action Jz is set to the average of the values given above the top panel of Fig. 1. The agreement between the curves yielded by the aa and the numerical consequents is on the whole good. In the left and central panels we see that while the curves have reflection symmetry in pz = 0 the consequents do not.
This is because the surface of section is for p_R > 0, and when the star is moving outwards, it is likely to be moving upwards when z > 0 and downwards when z < 0. As we will discuss in Section 5, when a galaxy is formed out of such orbits, this z-dependent correlation between p_R and p_z causes the principal axes of the velocity ellipsoid to become inclined to the R, z axes at |z| > 0. The aa is unable to capture this aspect of the dynamics and will always yield a velocity ellipsoid that is aligned with the R, z axes. The panel on the extreme right shows that at large radii the aa under-estimates the maximum height z_max reached by a star, although at most values of z it predicts p_z with good accuracy. The analogue of Fig. 2 for the orbit shown in the lower panel of Fig. 1 shows smaller offsets between the numerical consequents and the predictions of the aa, because the latter works best for small vertical amplitudes.

With v_t denoting the tangential speed, the centrifugal potential is L²/(2r²) with L = rv_t. In a separable potential, the time average of the square of the ith component of velocity is related to the ith frequency and action by ⟨v_i²⟩ = Ω_i J_i, so when we replace v_t² by its time average, the centrifugal potential acquires a numerator proportional to Ω_φ|L_z| + Ω_z J_z and a denominator 2(R² + z²). The standard aa under-estimates the centrifugal potential by neglecting the term proportional to J_z in the numerator, and partially compensates by neglecting the z² in the denominator. This neglect of z² must be responsible for the fact that we find the optimum value of η to be unity rather than 2, which is a typical value of Ω_z/Ω_φ for disc stars at R_0 in plausible Galactic potentials. However, in an unrealistically flat potential, larger values of η prove optimal. For example, when the potential is that of a razor-thin exponential disc and there is no contribution from a bulge or a dark halo, we find η ≃ 1.9 is required. Even in this case η is smaller than the typical value of Ω_z/Ω_φ on account of the neglect of z² in the denominator.

A MODEL BASED ON ORBITAL TORI

The classical approach to modelling globular clusters starts by positing an analytic form for the distribution function (df) and then calculating the density distribution and kinematics that are implied by this df. Thus globular clusters have been successfully modelled with dfs of the King-Michie form (e.g. BT08 §4.3). This approach can be extended to disc galaxies, for example by Rowley (). Given that the df will depend on two of an orbit's three actions J_r, J_z and L_z, substantial advantages arise from employing J_r as the other argument of the df in place of E. For this reason Paper I studied Galaxy models in which each stellar component had a df that was an analytic function of the three actions. It used the aa to calculate observables from the df. The purpose of this section is to compare observables obtained in this way to those obtained without invoking the aa but instead using orbital tori.

General principles of torus modelling

Orbital tori are the three-dimensional surfaces in six-dimensional phase space on which individual orbits move. They are the building blocks from which Jeans' theorem assures us that any equilibrium model can be built. Each torus is characterised by a set of three actions J = (J_r, L_z, J_z) and therefore corresponds to a point in action space. We build a galaxy model by assigning a weight to each torus. We obtain tori as the images of analytic tori under a canonical transformation. The tori used here are defined by the angle-action coordinates of the isochrone potential (e.g. BT08 §3.5).
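A surface of section such as Fig. 2 can be produced by integrating the meridional-plane equations of motion and recording (z, p_z) at crossings of R = constant with p_R > 0. This is a generic sketch of that procedure (the integrator tolerances and finite-difference step are our choices, not the code used for the paper; `Phi` is the potential defined earlier):

```python
from scipy.integrate import solve_ivp

def meridional_rhs(t, w, Lz):
    """Equations of motion in the meridional plane; w = (R, z, pR, pz)."""
    R, z, pR, pz = w
    h = 1e-6  # step for numerical gradients of Phi
    dPhi_dR = (Phi(R + h, z) - Phi(R - h, z)) / (2.0 * h)
    dPhi_dz = (Phi(R, z + h) - Phi(R, z - h)) / (2.0 * h)
    return [pR, pz, Lz**2 / R**3 - dPhi_dR, -dPhi_dz]

def surface_of_section(w0, Lz, Rsec, T=500.0):
    """Record (z, pz) at crossings of R = Rsec with pR > 0."""
    cross = lambda t, w, Lz: w[0] - Rsec
    cross.direction = 1.0          # R increasing through Rsec, i.e. pR > 0
    sol = solve_ivp(meridional_rhs, (0.0, T), w0, args=(Lz,), events=cross,
                    rtol=1e-10, atol=1e-10)
    return [(w[1], w[3]) for w in sol.y_events[0]]
```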
Given actions J, the computer constructs a canonical transformation that maps the analytic torus with actions J into the required torus by adjusting the coefficients in a trial generating function so as to minimise the rms variation of the Galactic Hamiltonian on the image torus. Once this has been done, we have analytic expressions for the phase-space coordinates as functions of the angle variables θ_i, which control the orbital phase. On a given torus, the phase-space coordinates (x, v) are 2π-periodic functions of each θ_i. The torus-fitting program also returns the values of the torus's characteristic frequencies Ω_i, so we can determine the motion of a star using θ(t) = θ(0) + Ωt. For a fuller summary of how orbital tori are constructed, and references to the papers in which torus dynamics was developed, see McMillan & Binney.

Torus modelling is best understood as an extension of Schwarzschild modelling (Schwarzschild 1979), which has been successfully used in many studies of the dynamics of external galaxies (e.g. ;). A Schwarzschild model is constructed by assigning weights to each orbit in a "library" of orbits. The orbit library is assembled by integrating the equations of motion in the given potential for a sufficient time, and noting the fraction of its time that the orbiting particle spends in each bin in the space of observables. Then a non-negative weight w_i is chosen for each orbit such that the data are consistent with the model's predictions. In torus modelling, orbits are replaced by tori, which are essentially equivalence classes of orbits that differ from one another only in phase, and a Runge-Kutta integrator is replaced by the torus-fitting code. Whereas orbits are defined by their six-dimensional initial conditions, tori are defined by their actions J. Replacing numerically integrated orbits with orbital tori brings the following advantages:

(i) The phase-space density of orbits becomes known, because tori have prescribed actions and the six-dimensional phase-space volume occupied by orbits with actions in d³J is (2π)³ d³J. Knowledge of the phase-space density of orbits allows one to convert between orbital weights and the value taken by the df on an orbit.

(ii) On account of the above result, there is a clean and unambiguous procedure for sampling orbit space and relating the weights of individual tori to the value that the df takes on them - see below. The choice of initial conditions from which to integrate orbits for a library is less straightforward because the same orbit can be started from many initial conditions, and when the initial conditions are systematically advanced through six-dimensional phase space, the resulting orbits are likely at some point to cease exploring a new region of orbit space and start resampling a part of orbit space that is already represented in the library. On account of this effect, it is hard to relate the weight of an orbit to the value taken by the df on it (but see ;).

(iii) There is a simple relationship between the distribution of stars in action space and the observable structure and kinematics of the model; as explained in §4.6 of BT08, the observable properties of a model change in a readily understood way when stars are moved within action space. The simple relationship between the observables and the distribution of stars in action space enables us to infer from the observables the form of the df f(J), which is nothing but the density of stars in action space.
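Because a torus is an analytic map from angles to phase space, drawing stars from it is trivial. In the sketch below, `torus_map(theta, J)`, returning (x, v), is a hypothetical stand-in for the output of the torus-fitting code:

```python
rng = np.random.default_rng(42)

def sample_torus(torus_map, J, n):
    """Draw n phase-space points (x, v) from the torus with actions J by
    choosing each angle uniformly in (0, 2*pi); uniform angle density is
    what makes torus weights directly proportional to f(J) d^3J."""
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=(n, 3))
    return [torus_map(theta, J) for theta in thetas]
```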
(iv) From a torus one can readily find the velocities that a star on a given orbit can have when it reaches a given spatial point x. By contrast a finite time series of an orbit is unlikely to reach x exactly, and searching for the time at which the series comes closest to x is laborious. Moreover, several velocities are usually possible at a given location, and a representative point of closest approach must be found for each possible velocity.

(v) An orbital torus is represented by of order 100 numbers, while a numerically-integrated orbit is represented either by some thousands of six-dimensional phase-space locations, or by a similar number of occupation probabilities within a phase-space grid.

(vi) The numbers that characterise a torus are smooth functions of the actions J. Consequently tori for actions that lie between the points of any action-space grid can be constructed by interpolation on the grid. Interpolation between time series is not practicable.

(vii) Schwarzschild and torus models are zeroth-order, time-independent models which differ from real galaxies by suppressing time-dependent structure, such as ripples around early-type galaxies (Malin & Carter 1980; Quinn 1984; Schweizer & Seitzer 1992), and spiral structure or warps in disc galaxies. Since the starting point for perturbation theory is angle-action variables (e.g. Kalnajs 1977), in the case of a torus model one is well placed to add time-dependent structure as a perturbation. Kaasalainen showed that classical perturbation theory works extremely well when applied to torus models because the integrable Hamiltonian that one is perturbing is typically much closer to the true Hamiltonian than in classical applications of perturbation theory (Gerhard & Saha 1991; Dehnen & Gerhard 1993; Weinberg 1994), in which the unperturbed Hamiltonian arises from a potential that is separable (it is generally either spherical or plane-parallel).

Choice of the DF

For the comparison of results obtained with and without the aa it is appropriate to study a model that has a very simple df. Specifically we represent both the thin and the thick discs with a df that is quasi-isothermal in the sense of Paper I:

f(J_r, J_z, L_z) = f_σr(J_r, L_z) × f_σz(J_z, L_z),

where

f_σr(J_r, L_z) = [ΩΣ / (π σ_r² κ)]_{R_c} [1 + tanh(L_z/L_0)] exp(−κJ_r/σ_r²),
f_σz(J_z, L_z) = [ν / (2π σ_z²)] exp(−νJ_z/σ_z²).

Here Ω(L_z) is the circular frequency for angular momentum L_z, κ(L_z) is the radial epicycle frequency and ν(L_z) is its vertical counterpart. Σ(L_z) = Σ_0 e^{−(R_c−R_0)/R_d} is the (approximate) radial surface-density profile, where R_c(L_z) is the radius of the circular orbit with angular momentum L_z. The factor 1 + tanh(L_z/L_0) in the expression for f_σr is there to effectively eliminate stars on counter-rotating orbits, and the value of L_0 is unimportant provided it is small compared to the angular momentum of the Sun. In these expressions the functions σ_z(L_z) and σ_r(L_z) control the vertical and radial velocity dispersions. The observed insensitivity to radius of the scale-heights of extragalactic discs motivates the choices

σ_r(L_z) = σ_{r0} e^{q(R_0−R_c)/R_d},   σ_z(L_z) = σ_{z0} e^{q(R_0−R_c)/R_d},

where q = 0.45 and σ_{r0} and σ_{z0} are approximately equal to the radial and vertical velocity dispersions at the Sun. We take the df of the entire disc to be the sum of a df of this form for the thin disc, and a similar df for the thick disc, the normalisations being chosen so that at the Sun the surface density of thick-disc stars is 23 per cent of the total stellar surface density. Table 1 lists the parameters of each component of the df. There are two main differences between the df we use here and that used in Paper I: (i) Paper I used the actual vertical frequency Ω_z(J) in the expression for f_σz, while here we use the vertical epicycle frequency ν(L_z).
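For orientation, the quasi-isothermal df above might be coded as follows. The numerical parameters are illustrative placeholders (not the values of the paper's Table 1), and `freqs` and `Rc_of_Lz` are assumed helpers that encapsulate the adopted potential:

```python
# Illustrative df parameters (placeholders, not the paper's Table 1 values).
R0, Rd, q = 8.0, 2.4, 0.45             # kpc
sig_r0, sig_z0, L0 = 30.0, 20.0, 10.0  # km/s, km/s, kpc km/s

def quasi_isothermal_df(Jr, Jz, Lz, freqs, Rc_of_Lz):
    """Quasi-isothermal df f = f_sr * f_sz; `freqs(Rc) -> (Omega, kappa, nu)`
    and `Rc_of_Lz(Lz) -> Rc` are assumed potential-dependent helpers."""
    Rc = Rc_of_Lz(Lz)
    Omega, kappa, nu = freqs(Rc)
    sig_r = sig_r0 * np.exp(q * (R0 - Rc) / Rd)
    sig_z = sig_z0 * np.exp(q * (R0 - Rc) / Rd)
    Sigma = np.exp(-(Rc - R0) / Rd)    # surface density in units of Sigma_0
    f_sr = (Omega * Sigma / (np.pi * sig_r**2 * kappa)
            * (1.0 + np.tanh(Lz / L0)) * np.exp(-kappa * Jr / sig_r**2))
    f_sz = nu / (2.0 * np.pi * sig_z**2) * np.exp(-nu * Jz / sig_z**2)
    return f_sr * f_sz
```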
This substitution is necessary because for large J, Ω_z tends to zero so fast that the product Ω_z J_z can decrease as J_z → ∞, leading to unphysical results when Ω_z J_z appears in the df as the argument of an exponential. (ii) In the interests of simplicity the thin disc is here represented by a single quasi-isothermal component, whereas in Paper I it was represented by a sum of quasi-isothermals, one for stars of each age. Any serious attempt to fit a real stellar catalogue must distinguish between stars of different ages, and different metallicities, because the colours and luminosities of stars are very much functions of age and metallicity, so the chances of a star entering a catalogue depend on its age and metallicity. Consequently, by lumping together all thin-disc stars regardless of age we forgo the opportunity to fit a real stellar catalogue in a detailed way. Nonetheless, we shall require our df to reproduce an observational density profile to demonstrate that even our unrealistically simple df has sufficient flexibility to reproduce given data to reasonable precision.

The physical properties of the model are jointly determined by the df and the gravitational potential Φ(R, z). Ultimately it will be necessary to require that the Galaxy's df be consistent with Φ, in the sense that the density of matter that the df predicts generates Φ. However, before the question of dynamical self-consistency can be addressed, one must not only specify the df of dark matter (which is believed to contribute about half the gravitational force on the Sun) but also distinguish carefully between the masses of stars and their luminosities in the wavebands in which they are observed. In practice the latter can be done only if one has specified the Galaxy's star-formation and metal-enrichment history. This enterprise goes far beyond the scope of the present paper; it will be addressed in subsequent papers in this series, which will explain the importance of comparing models to data in the space of observables, such as apparent magnitudes, parallaxes and proper motions, rather than the space of physical variables such as (x, v) used here. The purpose of this paper is merely to lay the foundations for such an exercise, which we expect to give first insights into the df of dark matter. Here we take the view that our df weights stars by their luminosity rather than their mass, and assume that Φ is the potential of Model 2 of Dehnen & Binney, modified to have thin- and thick-disc scale-heights of 360 pc and 1 kpc (Table 2). In this model the disc contributes 60 per cent of the gravitational force on the Sun, with dark matter contributing most of the remaining force.

Modelling procedures

Together the df and the potential specify the probability density of stars in phase space. The simplest way to derive the model's physical characteristics from this probability density is to obtain a discrete realisation of the probability density by Monte-Carlo sampling. The model's physical characteristics are then obtained by binning the realisation's stars. The df specified by the equations above can be analytically integrated over J_z and J_r to obtain the marginal distribution in L_z, so we can obtain a discrete realisation of this df by successively sampling one-dimensional pdfs in L_z, J_r and J_z. The results presented below are typically obtained with ∼ 200 000 tori.
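Because at fixed L_z the df is exponential in J_r and J_z, the successive one-dimensional sampling is straightforward once the marginal in L_z is available. A possible implementation, with `draw_Lz` an assumed helper for that marginal (and reusing the parameters and `rng` of the earlier sketches):

```python
def sample_actions(n, draw_Lz, freqs, Rc_of_Lz):
    """Draw n action triples (Jr, Jz, Lz) from the df. `draw_Lz(n)` is an
    assumed helper sampling the analytic marginal in Lz; at fixed Lz the
    df is exponential in Jr and Jz, so those are sampled directly."""
    samples = []
    for Lz in draw_Lz(n):
        Rc = Rc_of_Lz(Lz)
        Omega, kappa, nu = freqs(Rc)
        sig_r = sig_r0 * np.exp(q * (R0 - Rc) / Rd)
        sig_z = sig_z0 * np.exp(q * (R0 - Rc) / Rd)
        Jr = rng.exponential(sig_r**2 / kappa)  # p(Jr|Lz) ~ exp(-kappa*Jr/sig_r^2)
        Jz = rng.exponential(sig_z**2 / nu)     # p(Jz|Lz) ~ exp(-nu*Jz/sig_z^2)
        samples.append((Jr, Jz, Lz))
    return samples
```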
Once we have a torus library, a discrete realisation of the Galaxy is obtained by repeatedly choosing a torus from the library at random, then choosing each angle variable uniformly within (0, 2π), and using the functions returned by the torus-generating software to determine (x, v) from the given values of J and θ. When the aa is used, model construction proceeds rather differently: given (x, v) one determines J_r and J_z by the following steps. Evaluate the vertical and radial energies

E_z = ½v_z² + Φ_z(z),   E_R = ½v_R² + Φ_eff(R),

where Φ_z and Φ_eff are the potentials defined above. Then

J_z = (2/π) ∫₀^{z_max} dz √{2[E_z − Φ_z(z)]},   J_r = (1/π) ∫_{R_p}^{R_a} dR √{2[E_R − Φ_eff(R)]},

where z_max is defined by Φ_z(z_max) = E_z and R_a and R_p are defined by Φ_eff(R_i) = E_R. These steps make it straightforward to evaluate the df at an arbitrary point (x, v), and thus derive the model's physical properties by numerically integrating the df, times any power of v_i, over velocity space. Quantities such as the stellar number density ρ(x) and velocity dispersion ⟨v_z²⟩^{1/2}(x) are then continuous functions of their arguments. In the absence of the aa, an iterative procedure such as that described by McMillan & Binney is required to determine J(x, v), and the torus-modelling procedure avoids this by choosing J, not (x, v). The price we pay for starting with J is discreteness, and the necessity of estimating ρ(x), etc., by binning stars.

COMPARISONS

The upper panel of Fig. 3 shows the model's density of stars as a function of distance from the plane, together with the star counts of Gilmore & Reid, which led to the identification of the thick disc. Since our df provides a reasonable fit to these points, it may be close to the actual df for turnoff stars. The aa recovers the density profile of the torus model to good accuracy for either value of η. The lower panel in Fig. 3 shows how the mean streaming speed v̄_φ is predicted to fall with |z|. The agreement between the torus model and the model based on the aa with η = 1 (dotted red) is excellent for |z| ≲ 1.6 kpc, but at larger heights the aa has a systematic tendency to under-estimate v̄_φ. On account of the problem discussed in Section 2 apropos Fig. 1, the aa with η = 0 over-estimates the mean-streaming speed by a few km s⁻¹ for |z| ≳ 0.7 kpc.

Figure 6. As Fig. 4, except for a volume that is 1.5 kpc from the plane. The dotted red curves are for the aa with η = 1 rather than η = 0, which is shown by the broken blue curves.

Fig. 4 shows the distributions of the radial (U), tangential (V) and vertical (W) components of velocity in a small volume akin to the solar neighbourhood. Since the full black curves from the torus model coincide with the dashed blue curves from the aa with η = 0, the aa reproduces the torus results to high precision. The results obtained on setting η = 1 are plotted as a dotted red curve but overlie the other curves too closely to be clearly distinguishable. Fig. 5 shows that the aa is equally successful in recovering the distributions of U, V and W at a smaller Galactocentric radius, 6.5 kpc. The insensitivity to η of the velocity distribution in the plane arises because these distributions are dominated by rather nearly circular orbits, and for these orbits J_z is so much smaller than L_z that adding J_z to L_z barely changes the numerical value. Fig. 6 shows the U, V and W distributions at R = 8 kpc and 1.5 kpc away from the midplane. Systematic differences between the predictions of the aa and the results of the full torus model are now evident. For either value of η, the aa yields a distribution of U that is slightly too broad.
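Under the aa these steps translate directly into code. Continuing the earlier sketches (`Phi_z`, `Phi_eff`, `Jz_adiabatic`, `peri_apo` and `quasi_isothermal_df` as defined there, with `freqs` and `Rc_of_Lz` still assumed helpers), a minimal version might read:

```python
def Jr_adiabatic(ER, Lz, Jz, eta=1.0):
    """Radial action Jr = (1/pi) * int_{Rp}^{Ra} sqrt(2*(ER - Phi_eff)) dR."""
    Rp, Ra = peri_apo(ER, Lz, Jz, eta)
    integrand = lambda R: np.sqrt(max(2.0 * (ER - Phi_eff(R, Lz, Jz, eta)), 0.0))
    return quad(integrand, Rp, Ra)[0] / np.pi

def df_at_point(R, z, vR, vz, vphi, freqs, Rc_of_Lz, eta=1.0):
    """Evaluate the df at (x, v) via the aa mapping (x, v) -> J; densities
    and velocity moments then follow by quadrature of this function over
    velocity space."""
    Lz = R * vphi
    Ez = 0.5 * vz**2 + Phi_z(z, R)
    Jz = Jz_adiabatic(R, Ez)
    ER = 0.5 * vR**2 + Phi_eff(R, Lz, Jz, eta)
    return quasi_isothermal_df(Jr_adiabatic(ER, Lz, Jz, eta), Jz, Lz,
                               freqs, Rc_of_Lz)
```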
When η = 0, the aa gives a distribution in W (dashed blue curve) that is too sharply peaked, but this fault is nicely corrected by setting η = 1 (dotted red curve), because increasing η moves orbits of given L_z outwards, and by virtue of the form of σ_z(L_z), the smaller an orbit's value of L_z, the faster it is likely to move vertically. As expected, with η = 0 the aa yields a distribution in V that is offset from that of the full torus model by ∼ 8 km s⁻¹ towards higher velocities (dashed blue curve). This offset is largely cured by setting η = 1 (dotted red curve).

Figure 8. The effective potential with L_z replaced by |L_z| + ηJ_z, with η = 0 in the dotted case and η = 1 in the dashed case. The full curve shows the effective potential derived from a time-average of the radial force.

Fig. 7 shows the variation with |z| of the radial, tangential and vertical velocity dispersions. The two planar velocity dispersions are accurately reproduced for |z| ≲ 1.3 kpc. At greater heights the aa over-estimates the dispersions, most strikingly so in the case of σ_φ. The excessive value of σ_φ is clearly associated with the tendency of the red curve in the middle panel of Fig. 6 to lie above the black one at V ∼ 50 km s⁻¹. As the top and middle panels of Fig. 6 suggest, ⟨v_R²⟩^{1/2} and ⟨(v_φ − v̄_φ)²⟩^{1/2} are at any height remarkably insensitive to the value of η. From the bottom panel of Fig. 6 we anticipate that increasing η will increase ⟨v_z²⟩ at large |z|, and indeed the bottom panel of Fig. 7 shows that increasing η from zero to unity increases σ_z at all z, but particularly at large z. This result arises because increasing η shifts orbits with large J_z outwards, and since f(J) is a strongly decreasing function of |J|, this outwards shift raises the density of stars with large J_z at a given location. At |z| ≳ 1 kpc setting η = 1 increases the accuracy of σ_z, while at smaller values of |z| a slight deterioration in accuracy results. The bottom panel of Fig. 6 hints that the full curve in Fig. 7 may lie slightly too low as a result of poor sampling by the torus model of orbits with large |v_z|, so the results from the aa with η = 1 may be more accurate than appears to be the case.

The dotted and dashed curves in Fig. 8 show the effective potential for a typical solar-neighbourhood orbit when η = 0 and η = 1, respectively. The full curve shows an effective potential for this orbit that was obtained by first evaluating the time-average of ∂Φ/∂R − L_z²/R³ at each value of R visited by the orbit, and then integrating the resulting function of R. We see that with L_z replaced by |L_z| + ηJ_z the simple effective potential provides a good fit to the effective potential obtained by time-averaging the radial force as a function of R.

Figure 9. The variation of the angle of tilt of the velocity ellipsoid with respect to the plane versus the angle arctan(|z|/R) between the plane and the line of sight from (R, z) to the Galactic centre. Since the full curve tracks the dotted line, the long axis of the velocity ellipsoid points almost straight at the Galactic centre.

Figure 10. The distribution of the ratio u_z/u_R that gives the direction of a principal axis of the contribution of each torus to the velocity ellipsoid at the point (R, z) = (R_0, 1.5 kpc). The full histogram weights each torus equally, while the dotted one weights them in proportion to their contributions to the density at the given point. Even the latter distribution is wide, so the orientation of the velocity ellipsoid depends quite sensitively on the weights assigned to orbits by the df.
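The full curve of Fig. 8 can be reproduced schematically by binning the time-averaged radial force along an integrated orbit and integrating it in R; the sketch below is our reading of that procedure (it assumes every R-bin is visited by the orbit):

```python
def empirical_Phi_eff(orbit_R, orbit_z, Lz, nbins=40, h=1e-6):
    """Time-averaged effective potential: bin <dPhi/dR - Lz^2/R^3> over an
    orbit's time series, then integrate in R (additive constant arbitrary)."""
    R, z = np.asarray(orbit_R), np.asarray(orbit_z)
    grad = (Phi(R + h, z) - Phi(R - h, z)) / (2.0 * h) - Lz**2 / R**3
    edges = np.linspace(R.min(), R.max(), nbins + 1)
    idx = np.clip(np.digitize(R, edges) - 1, 0, nbins - 1)
    mean_grad = np.array([grad[idx == i].mean() for i in range(nbins)])
    Rmid = 0.5 * (edges[:-1] + edges[1:])
    return Rmid, np.cumsum(mean_grad) * (edges[1] - edges[0])
```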
TILT OF THE VELOCITY ELLIPSOID

A popular diagnostic of the Galaxy's gravitational potential is the way in which the principal axes of the velocity ellipsoid tilt as one moves away from the plane (;). As stated above, this phenomenon lies beyond the scope of the aa, but it can be determined from the torus model. Fig. 9 shows that this angle is nearly equal to the angle arctan(|z|/R) between the plane and the line of sight from (R, z) to the Galactic centre. That is, in the vicinity of the Sun, the longest axis of the velocity ellipsoid is almost parallel to the radial vector r. This behaviour is that expected in a spherical potential, and Smith et al. argue that alignment of the velocity ellipsoid with spherical coordinates implies that the potential is spherically symmetric. Our assumed gravitational potential is far from spherical, because roughly half the radial force at the Sun is contributed by the disc, and the dark halo is itself flattened (axis ratio 0.8). The aspherical nature of the potential is reflected in the fact that the distribution of frequency ratios Ω_z/Ω_φ of the model's tori has a median close to 2, while in a spherical potential this ratio is inevitably unity. Although our model is not strictly speaking a counter-example to the assertion of Smith et al., because we have established only that the velocity ellipsoid is approximately aligned with spherical coordinates in the region around the Sun, it does suggest that one should examine more closely the reasoning in that paper. A key step in its argument is the assertion that if the df is an even function of v_r, then the isolating integrals that constrain individual orbits are also. If the isolating integrals have this property, then the velocity ellipsoid provided by any df will be aligned with spherical coordinates. In particular the df f(J) = δ³(J − J_0) corresponding to a single orbit will be radially aligned, so the matrix ⟨v_i v_j⟩_x will be diagonal in spherical coordinates, where ⟨·⟩_x implies the time average over instants when the star lies in some small volume around x.

As Fig. 1 illustrates, for stars on standard orbits in the meridional plane, four velocities are possible at a given point. Let these velocities be ±v₁ and ±v₂. By time-reversal symmetry, the probability that v₁ occurs is inevitably equal to the probability that −v₁ occurs, and similarly for ±v₂. However, it turns out that ±v₁ may occur more or less often than ±v₂. In fact the probability of occurrence of v_i is proportional to the Jacobian ∂(θ)/∂(x), which is a non-trivial function of θ, but one that can be determined for a numerically constructed torus (see Appendix A). The four possible velocities at x correspond to the four values of θ that bring the star to the given point x, and the values p₁ and p₂ taken by ∂(θ)/∂(x) at these values of θ depend on whether v is ±v₁ or ±v₂. Clearly we have

⟨v_i v_j⟩_x = (p₁ v_{1i} v_{1j} + p₂ v_{2i} v_{2j}) / (p₁ + p₂).

Let u be the eigenvector of this matrix that lies closest to the r direction. Fig. 10 shows, for the point (R, z) = (8, 1.5) kpc, the distribution of the ratio u_z/u_R. The full histogram shows the distribution when each torus is given equal weight, while the dotted histogram shows the distribution when tori are correctly weighted by the contributions that they make to the stellar density at the given point. (Although the density of sampling in action space ensures that all tori make equal contributions to the stellar mass of the entire Galaxy, every torus has its own way of spreading its mass in space.)
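Given velocities sampled in a small spatial bin, the tilt plotted in Fig. 9 follows from diagonalising the second-moment matrix, e.g.:

```python
def ellipsoid_tilt_deg(vR, vz):
    """Tilt of the velocity ellipsoid in the (vR, vz) plane: angle between
    the long axis of the 2x2 second-moment matrix and the R axis."""
    C = np.cov(np.vstack([vR, vz]))
    evals, evecs = np.linalg.eigh(C)
    u = evecs[:, np.argmax(evals)]   # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(u[1], u[0]))
```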
We see that the distribution of orientations of individual contributions to the velocity ellipsoid is quite broad, even when the tori are correctly weighted. An examination of the dependence of u_z/u_R on J_z reveals that orbits with larger values of J_z make contributions that are aligned with r, while it is orbits with small J_z that sometimes make contributions that are aligned nearly parallel to the plane. Since the df specifies the relative weight of these variously oriented contributions, it controls the orientation of the final velocity ellipsoid at least as much as does the gravitational potential. Consequently, only limited inferences about the nature of the potential can be drawn from observations of the velocity ellipsoid yielded by the df that the Galaxy happens to have.

The widespread belief that the shape of the potential can be inferred from the orientation of the velocity ellipsoid probably arises from studies of models that have Stäckel potentials. For these potentials the Hamilton-Jacobi equation separates in an appropriate coordinate system (u, v), and the canonically conjugate momenta are functions of one coordinate only: p_u(u), p_v(v) (e.g. BT08 §3.5.3). Consequently, the coordinate directions are bisectors of the angles between v₁ and v₂. Moreover, it turns out that for these potentials ∂(θ)/∂(x) is the same for all four values of θ, so the coordinate directions are the eigenvectors of ⟨v_i v_j⟩_x for every orbit that reaches x. Consequently, the velocity ellipsoid has to be oriented with the coordinate directions regardless of how orbits are weighted. In a general potential, there is no universal coordinate system that describes the alignment of the eigenvectors of ⟨v_i v_j⟩_x, and the orientation of the final velocity ellipsoid very much depends on how orbits are weighted.

CONCLUSIONS

Dynamical models of the Milky Way will play a key role in the scientific exploitation of data from large surveys that are currently being undertaken. Models that are based on Jeans' theorem should be the most powerful tools for extracting science from data, and amongst such models those that express the df as a function of the actions enjoy some very important advantages. The major obstacle to the use of Jeans' theorem in the context of the Galaxy is the lack of analytic expressions for three independent isolating integrals. Paper I presented models that use approximate expressions for the actions that rely on the adiabatic invariance of the vertical action J_z. In Section 2 we tested the validity of this adiabatic approximation (aa) by numerically integrating typical orbits. We found that the orbits' vertical dynamics is reproduced by the aa to remarkably good accuracy, but the motion in the plane is less accurately recovered because the naive aa under-estimates the strength of the centrifugal potential. This defect leads to the radial action derived for a given phase-space point (x, v) being over-estimated when the point lies near apocentre, and under-estimated when it lies near pericentre. Since v_φ is small at apocentre and large near pericentre, and the df is a declining function of J_r, the defect leads to the mean-streaming velocity v̄_φ being over-estimated. The problem can be largely resolved by replacing L_z in the centrifugal potential by |L_z| + ηJ_z, with η a number of order unity. The more strongly flattened the potential is, the more accurate the aa becomes and the larger the value of η needs to be. For example, in the extreme case of a vanishing dark halo, η = 1.9 works well.
In Section 3 we explained how to build a model Galaxy using orbital tori. Torus modelling is best considered an extension of Schwarzschild modelling, which has long been a standard tool for the interpretation of data for external galaxies, both in connection with searches for massive black holes and attempts to understand how early-type galaxies were assembled. Torus modelling is a more powerful technique principally because (i) it enables us to quantify orbits by the values taken on them of essentially unique and physically easily understood isolating integrals, and (ii) it makes it easy to determine at what velocities a star will pass through any spatial point. We presented a df of exceptional simplicity, which generates a reasonably realistic model of the Galaxy's discs. In Section 4 we examined in some detail observable quantities in this model when they are calculated from either the full torus machinery, or from the aa. We showed that in the plane, at both R_0 and R = 6.5 kpc, the distributions of all three components of velocity are reproduced to high accuracy by the aa, regardless of whether η is set to zero or unity. Away from the plane the velocity distribution is sensitive to the weights of orbits that have relatively large values of J_z, with the consequence that it matters whether the centrifugal potential contains L_z or |L_z| + ηJ_z, and we find materially better fits to the distributions of both v_φ and v_z when η = 1 rather than zero. Regardless of the value of η, the aa predicts a value for ⟨v_R²⟩^{1/2} that exceeds the true value by an amount that grows with |z|, being ∼ 3.4 per cent at |z| = 1.5 kpc. The aa yields a value of σ_φ that lies very close to the true value for |z| < 1.3 kpc, but exceeds the true value by ∼ 10 per cent at |z| = 2 kpc, because the true value declines with |z| at |z| > 1.3 kpc, whereas that obtained from the aa does not. With η = 1 the aa yields a value for ⟨v_z²⟩^{1/2} that lies within 3 per cent of the true value right up to |z| = 2.3 kpc.

The aa inevitably predicts that the velocity ellipsoid has two axes parallel to the plane, so we must turn to the full torus model to discover how the velocity ellipsoid tilts as one moves away from the plane. We find that its longest axis points quite close to the Galactic centre. This result emerges through averaging the quite disparate contributions of individual tori. Consequently it reflects the structure of the df as much as the gravitational potential.

From a computational perspective, the aa is extremely convenient, both because it does not require specialised torus-generating code, and because it yields J from (x, v) rather than (x, v) from J. Consequently, a model's observables can be obtained by integrating over velocity space, just as traditionally we have obtained the observables of models with dfs of the form f(E) and f(E, L). While McMillan & Binney showed that it is possible to determine J from (x, v), the procedure used is iterative and time-consuming, so for this paper observables were estimated by binning the particles of a discrete realisation obtained by Monte-Carlo sampling the df. Even this procedure is computationally expensive when enough samples are drawn to make Poisson noise negligible, so it is very useful to be able to obtain good approximations to J(x, v) from the aa, and we anticipate that the aa will be widely used in the interpretation of observations of the Galaxy.
Paper I and the present paper represent two small steps towards the kind of Galaxy-modelling apparatus that should be available before a preliminary Gaia Catalogue appears in the second half of this decade. The next big step is to carry the predictions of models into the space of observables, such as apparent magnitudes, parallaxes and proper motions, and then to explore how tightly the df of stars can be constrained by data of varying extent and precision. This step is crucial because distance uncertainties propagate from observables such as magnitudes and proper motions to correlated errors in physical quantities such as stellar masses and velocities. We hope to report on this work soon.

APPENDIX A

The amount of time a star spends in a region of angle space is proportional to its volume, d³θ, so the Jacobian ∂(θ)/∂(x) is the orbit's probability density in real space; it makes good physical sense that this Jacobian is the orbit's probability density. The torus machine delivers the numerical values of both the generating-function coefficients s_n and their derivatives ∂s_n/∂J, which give the true angles θ in terms of the toy angles θ^(T). We need

|∂(θ)/∂(x)| = |∂(θ)/∂(θ^(T))| |∂(θ^(T))/∂(x)|.

The first Jacobian on the right follows trivially from the relation between θ and θ^(T), and for the inverse of the second Jacobian we can write

dx = (∂x/∂θ^(T))_{J^(T)} dθ^(T) + (∂x/∂J^(T))_{θ^(T)} dJ^(T).

Dividing through by dθ^(T) and holding J constant, we find

(∂x/∂θ^(T))_J = (∂x/∂θ^(T))_{J^(T)} + (∂x/∂J^(T))_{θ^(T)} (∂J^(T)/∂θ^(T))_J.

The first two matrices on the right involve only toy variables, so they are available analytically, and the third matrix can be obtained from the generating-function coefficients.
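Numerically, the Jacobian needed above can also be estimated directly from the torus map by finite differences; in this sketch `torus_x(theta, J)`, returning the spatial position x, is a hypothetical stand-in for the fitted torus:

```python
def angle_space_density(torus_x, theta, J, h=1e-5):
    """|det d(theta)/d(x)| for a torus map x(theta; J), estimated by
    central differences; proportional to the orbit's probability density
    at x."""
    D = np.empty((3, 3))
    for k in range(3):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[k] += h
        tm[k] -= h
        D[:, k] = (np.asarray(torus_x(tp, J))
                   - np.asarray(torus_x(tm, J))) / (2.0 * h)
    return 1.0 / abs(np.linalg.det(D))
```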
Efficient Network Coding with Interference-Awareness and Neighbor States Updating in Wireless Networks

Network coding is emerging as a promising technique that can significantly improve the throughput of the Internet of Things (IoT). Previous network coding schemes focus on a handful of nodes and disregard the topology and communication ranges of the network as a whole; consequently, these schemes are greedy, in the sense that they exploit every opportunity to combine packets at those nodes. We demonstrate that these greedy design principles still leave room for improvement in whole-network throughput. In this paper we therefore propose a novel network coding scheme, ECS (Efficient Coding Scheme), designed to achieve higher throughput with lower computational complexity and buffer occupancy than current greedy schemes for wireless mesh networks. ECS utilizes knowledge of the topology to minimize interference and obtain more throughput. We also prove that the expected transmission count (ETX) metric widely used in opportunistic listening has an inherent error ratio that can lead to decoding failure. ECS therefore exploits a more reliable broadcast protocol to decrease the impact of this inherent error ratio in ETX. Simulation results show that ECS greatly improves the performance of network coding and decreases buffer occupancy.
Biosorption of Cu(II) by powdered anaerobic granular sludge from aqueous medium.

Copper(II) biosorption by two pre-treated powdered anaerobic granular sludges (PAGS) (the original sludges being methanogenic anaerobic granules and denitrifying sulfide removal (DSR) anaerobic granules) was investigated through batch tests. Factors affecting the biosorption process, such as pH, temperature and initial copper concentration, were examined. The physico-chemical characteristics of the anaerobic sludge were also analyzed by Fourier transform infrared spectroscopy, scanning electron microscopy, surface-area measurement and elemental analysis. A second-order kinetic model was applied to describe the biosorption process and fitted it well. The Freundlich model was used to describe the adsorption equilibrium data and fitted the equilibrium data well. The methanogenic PAGS was more effective in the copper(II) biosorption process than the DSR PAGS, whose maximum biosorption capacity was 39.6% lower. The mechanisms underlying the different biosorption capacities of the two PAGS are discussed; the results suggest that the environment and the biochemical reactions during the growth of the biomass may have affected the structure of the PAGS. The methanogenic PAGS had a larger specific surface area and a higher biosorption capacity than the DSR PAGS.
The Strategic Meaning of CBCA Criteria From the Perspective of Deceivers

In 2014, Volbert and Steller introduced a revised model of Criteria-Based Content Analysis (CBCA) that grouped a modified set of content criteria in closer reference to their assumed latent processes, resulting in three dimensions of memory-related, script-deviant and strategy-based criteria. In this model, it is assumed that deceivers try to integrate memory-related criteria (but will not be as good as truth tellers in achieving this), whereas out of strategic considerations they will avoid the expression of the other criteria. The aim of the current study was to test this assumption. A vignette was presented via an online questionnaire to inquire how participants (n = 135) rate the strategic value of CBCA criteria on a five-point scale. One-sample t-tests showed that participants attribute positive strategic value to most memory-related criteria and negative value to the remaining criteria, except for the criteria self-deprecation and pardoning the perpetrator. Overall, our results corroborated the model's suitability in distinguishing different groups of criteria (some which liars are inclined to integrate and others which liars intend to avoid) and in this way provide useful hints for forensic practitioners in appraising the criteria's diagnostic value.

INTRODUCTION

The Empirical Footing of CBCA

The underlying rationale of Criteria-Based Content Analysis (CBCA) holds that the content of experience-based accounts is qualitatively higher than the content of fabricated statements (the so-called Undeutsch hypothesis; Undeutsch, 1967). After identifying content characteristics that practitioners and scholars deemed suitable to substantiate truthfulness, Steller and Köhnken compiled a systematic set of 19 CBCA criteria (see Table 1). Since then, a multitude of both field and laboratory studies have confirmed that experience-based accounts indeed yield higher content quality than fabricated statements, in this way corroborating the overall validity of CBCA as a truth-detection tool (for recent meta-analyses see ;). Most of these studies summed up the individual criteria scores to one comprehensive (total) CBCA score, which subsequently served as the relevant variable for further analysis (e.g., ;). Such an approach, however, may be too simplistic and may underestimate the actual utility of CBCA, since it ignores the possibility that some criteria might be more sensitive to truthfulness and hence bear higher diagnostic value than others.

Table 1. The 19 CBCA criteria (Steller and Köhnken, 1989).

For instance, after having identified children who were able to achieve a higher total CBCA score in their fabricated than in their truthful accounts, Hommers showed that the criteria accurately reported details not comprehended, unexpected complications, and related external associations still discriminated between true and fabricated statements. This suggests that particular weights should be assigned to these criteria as predictors of the veracity of statements. More empirical knowledge is essential, however, if one intends to advance the prospect of weighting beyond the stage of mere suggestion. To date, CBCA still lacks a weighting system, despite the fact that numerous researchers have criticized its absence and stressed the need for implementation to increase the method's accuracy and sensitivity (e.g., Vrij, 2005; Porter and ten Brinke, 2010).
Theoretical Considerations About the Diagnostic Value of CBCA Criteria

The diagnostic value of a criterion refers to its validity in discriminating between self-experienced and fabricated statements. For identifying a criterion as diagnostically valuable, its occurrence in true statements is necessary, but by no means sufficient: the relevant question is not how likely a criterion is to occur in true statements, but how likely it is to appear in true relative to fabricated statements. As a first step toward inferring the diagnostic value of a criterion, theoretical considerations of what processes govern its emergence are necessary: why exactly is a criterion expected to occur in true accounts, but not in fabricated ones? From a psychological perspective, two universal aspects apply to a truth teller but not to a lying person (Volbert and Steller, 2014): truth tellers report from actual memory, and in doing so are convinced that the event in question happened as reported. Therefore, in the statement of an honestly reporting person content criteria are likely to occur naturally, as they reflect phenomena associated with genuine memories and feelings of sincerity. A lying person, on the other hand, needs to put (more) deliberate effort into inventing information that substitutes for the missing memory of the alleged experience (creative demands related to primary deception; Köhnken, 1990) and into masking the discrepancy between statement and belief by presenting the fabricated event in a credible manner (strategic demands related to secondary deception; Köhnken, 1990).

In accordance with the premise of primary vs. secondary deception, Köhnken distinguished between two forms of CBCA criteria by classifying them as being either cognitively- or motivationally-related. Both kinds of criteria indicate true statements, albeit for different reasons: the former relate to creative (or cognitive) demands; typically, they should be too difficult to produce when fabricating. Motivational criteria refer to how a witness presents a statement; typically, they should be avoided out of strategic considerations when lying. While such a categorization suggests that a criterion's diagnostic value is to be derived from either its cognitive or motivational aspects, Niehaus et al. pointed out that both components need to be taken into account. That is, considerations of the underlying motivational component should also be applied to criteria originally regarded as purely cognitively-related, and vice versa. In summary then, two considerations require clarification if the diagnostic value of a criterion is to be deduced: to what degree is the deceiver inclined to produce the criterion, and to what degree would the deceiver be capable of doing so. Insight about the motivational component should hence provide a first hint toward the criterion's diagnostic value: if the deceiver considers the criterion to be strategically detrimental to his or her self-presentation efforts, the likelihood of its emergence in fabricated accounts is generally lower. In turn, however, if the deceiver ascribes positive strategic value to a criterion and thus is inclined to produce it, a higher likelihood for its occurrence does not necessarily follow: whether or not the criterion will emerge should then crucially depend on the cognitive component, that is, how difficult it is for the deceiver to integrate the criterion in his or her statement (differential controllability, Köhnken, 1990).
Previous Research About the Motivational Component

The concept of strategic self-presentation is firmly established within the literature, stating that liars are typically more concerned with appearing credible than truth tellers (DePaulo, 1992; ). Inquiring about suspects' strategies to appear credible during police interrogations, Hartwig et al. correspondingly found that lying participants seemed to be more prone to adopting verbal and non-verbal strategies than truthfully reporting participants. Further scientific efforts to examine deceivers' verbal strategies are hardly existent, however. While there are several articles that did elaborate on beliefs of lay people about verbal cues to deception, these studies primarily investigated which kinds of content are believed to be indicative for detecting lies in somebody else's statement (e.g., Granhag and Strömwall, 2000; ;). Consequently, insights derived from their findings do not necessarily elucidate the actual content-related strategies of deceivers, since the results were gained from the perspective of the to-be-deceived rather than from the perspective of the deceiver (). That is, while people's beliefs of how lies can be spotted in others are likely to affect their strategies in detecting them (), these beliefs must not necessarily govern their own way of acting when being deceptive. To our knowledge, then, only two studies (; Niehaus, 2008) exist which quantitatively examined how laypersons assess the strategic value of CBCA or other content-related criteria in the context of deception. This is certainly surprising, considering that the CBCA criteria classified as motivationally-related are only valid if the forensic assumptions (predicting that laypersons would ascribe negative strategic meaning to these criteria and try to avoid their production when deceiving) are correct. We therefore aim to elaborate further on the motivational component of CBCA criteria by building on the findings of Niehaus et al. and Niehaus about content-related deception strategies. Because the two previous studies are available in German only, they are first introduced in more detail.

Studies About Content-Related Deception Strategies

Both studies asked subjects to take the perspective of a fictitious protagonist who, for personal reasons, needed to lie convincingly about a certain type of event. The first investigation (; n = 120; M age = 29.6; all females) presented a scenario in which the protagonist's friend felt unable to press charges against her neighbor, who had previously raped her. Therefore, the protagonist decided to claim that she herself had been raped by the neighbor so that the perpetrator would still receive punishment. The story outline in the second study (Niehaus, 2008; n = 50; M age = 28.6; 16 men) entailed less severe ramifications: the protagonist needed to find a convincing explanation for why he had shown up late at work, as otherwise he would be fired by his boss. In need of an excuse, he wrongly accused his neighbor of having trapped him in a cellar. After the presentation of the story outline, a standardized questionnaire was handed out on which each CBCA criterion was described in a discrete way as well as illustrated by means of an example embedded into the story outline. Subjects then had to indicate on a 5-point or 7-point () scale whether they would rather integrate or avoid the criterion if they aimed to deliver the false statement as convincingly as possible.
[Footnote 1: ... "saying things that were not true" being the categories most frequently mentioned (each being mentioned by 22% of the relevant sample). As pointed out by Hartwig et al., these findings were limited to qualitative analyses and thus are best interpreted as a starting point for future research.]

Negative ratings signified that the criterion was considered to weaken one's credibility, while a positive value indicated that the criterion was believed to promote deception efforts. If a criterion was considered to be strategically irrelevant, the neutral value "0" was to be assigned. For analysis purposes, one-sample t-tests were conducted to reveal whether the averaged value ratings differed significantly from "0", indicating that subjects ascribed strategic meaning to the criterion. Table 2 summarizes the obtained results from both studies. Note that some CBCA criteria, as well as their classification, differ from the original compilation (Steller and Köhnken, 1989), since the authors deduced five higher-level strategic goals that deceivers would pursue in the context of sexual-rape allegations and structured the criteria accordingly (for more detail see Niehaus, 2001; ).

As depicted in Table 2, the results give rise to two major implications: first, most motivational criteria were rated as strategically negative, and second, cognitive criteria were found to carry strategic meaning as well. The pattern showing that subjects generally ascribed negative strategic meaning to motivational criteria is paramount, given that the discriminatory value of these criteria rests largely on the presumption that laypersons would tend to avoid them in deception contexts. The findings also corroborated the postulation of Niehaus et al., stating that for each criterion (regardless of its original classification) both motivational and cognitive aspects needed to be considered if its diagnostic value is to be deduced. In regards to the modified structure of the CBCA model (Table 2), however, group I (competency), IV (content-related inconspicuousness) and V (formal inconspicuousness) contained criteria of both positive and negative valences. Consequently, the model's structure does not allow for clear group-based distinctions on the motivational level. Considering further that purely motivational aspects governed the classification of the criteria into the five different groups, no information about the cognitive component can be deduced from its criteria groups.

The Revised Model of CBCA Criteria

In 2014, Volbert and Steller introduced a revised CBCA model, which is based on theoretical considerations of what processes govern the emergence of criteria in statements (Volbert and Steller, 2014). The model still distinguishes criteria pointing to the differences in the cognitive processes of liars and truth tellers (cognitive criteria) from criteria referring to aspects of strategic self-presentation (motivational criteria). Other than before, cognitive criteria are distinguished even further in the model, resulting in two main groups of cognitively-related criteria: the first group entails details characterizing episodic autobiographical memory, such as specific spatiotemporal and self-related information. When deceivers fabricate statements, these characteristics are likely to occur as well, since they provide essential information (i.e., temporal or spatial details) without which any delivered account would appear incomplete.
Because liars cannot draw on actual episodic memory, however, the criteria are assumed to be expressed in a less elaborate way than in experience-based statements. The second main group of cognitive criteria comprises script-deviant details, such as unusual details or unexpected complications during the incident.

Table 2 notes (; Niehaus, 2008). Columns: strategic goal / CBCA criterion / classification; group I: competency (of the deceiver). Criteria in green font color received significant positive value ratings in at least one of the two studies, while criteria in red font color symbolize significant negative ratings. Additional bold font was applied if these criteria received significant ratings in both studies. The third row shows whether a criterion was originally classified as cognitively ("C")- or motivationally ("M")-related, based on the work of Köhnken. Furthermore, Niehaus et al. referred to the impression-management account to predict in advance whether laypersons would ascribe positive ("+") or negative ("−") strategic meaning to a criterion. (c) To allow meaningful comparisons, criteria that were investigated in only one of the two studies are not presented (self-related/victim-related/neutral associations, attribution of negative traits, clichés, repetitions).

Criteria from this group should rarely occur in fabricated accounts, considering that a lying person must construct his or her statement from cognitive scripts (e.g., Schank and Abelson, 1977) to substitute for the lack of experience-based memories. Cognitive scripts reflect the liar's subjective assumptions of what the event in question typically looks like (Köhnken, 1990). Script-deviant criteria, on the other hand, refer to characteristics that go beyond the very limitations of such simplified, script-guided knowledge. If the statement giver cannot draw on actual memories providing a potential source for script-deviant elements, he or she should face great difficulties in deliberately producing them, as he or she would need to overcome the limited scope of his or her own imagination (Köhnken, 1990). Finally, the third main group refers to motivational criteria, thereby addressing efforts of positive strategic self-presentation (see section Theoretical Considerations About the Diagnostic Value of CBCA Criteria for an explanation of why motivational criteria are expected to appear in true rather than fabricated statements). Table 3 depicts the revised model, with the original binary classification of cognitively- vs. motivationally-related criteria presented in brackets behind each criterion.

Aim of the Present Study

The present study inquired about the content-related deception strategies of laypersons, as done by Niehaus et al. and Niehaus before. However, their categorization of criteria into five groups resulted in value ratings that were of opposite valence within some of the groups (i.e., positive and negative value ratings across criteria of the same group), thereby rendering group-based distinctions impractical. Against this background, the main goal of the current study was to examine whether the theoretically driven structure of the revised model would correspond better to the observed pattern of strategic value ratings. Put differently, the relevant question was to what degree the composition of each of the three criteria groups would contain criteria that on the motivational level are compatible with each other (i.e., criteria that consistently carry either negative or positive strategic meaning).
Derived from the findings of Niehaus et al. and Niehaus, we expected participants to rate the memory-related (group 1) criteria as predominantly positive. In contrast, we expected participants to generally attribute negative strategic value to script-deviant (group 2) and strategy-based (group 3) criteria. Overall, for each of the three criteria groups, the predicted pattern of strategic ratings should result in a degree of compatibility higher than observed for any previous models, meaning that within each group the strategic value ratings should be either consistently negative or consistently positive.

METHODS

Participants

A total of 135 participants (M age = 28.6, SD = 9.8, range 19-67; 32 men) filled out a questionnaire inquiring about content-related deception strategies. The sample consisted mostly of students (n = 66) or working professionals (n = 55). Participants were recruited via an online participation system of the University of Potsdam (Germany) or through advertisement in public Facebook groups related to psychological topics. Prior to participation, all participants were assured that their data would be treated confidentially, and all participants gave written informed consent. Upon request, credit points were awarded for participation.

[Footnote 2: Volbert and Steller understand their allocation of specific criteria to be exemplary rather than irrevocable. For illustration purposes, the structure presented in this article thus differs slightly from the version originally presented by the authors: in the original version, a separate category "statement as a whole" addresses criteria that can only be evaluated if the statement is analyzed in its entirety, as opposed to scoring the same criterion multiple times at different parts of the statement ("single characteristics"). For reasons of clarity, we exclusively focused on single characteristics and rejected all "statement as a whole" criteria (namely reconstructability of the event, vividness of the event, quantity of details, unstructured production and spontaneous supplementing).]

Procedure and Material

The survey was administered online by using the platform www.soscisurvey.de. The first items of the questionnaire asked subjects to provide information about age, gender, and occupation. Next, the story outline was presented, with the instruction to assume the perspective of the protagonist. The story closely resembled the one devised by Niehaus, with only minor modifications to preclude potential misunderstandings. In brief, the story described a protagonist who is at risk of losing his highly valued job, unless he is able to deliver a convincing explanation to his boss for his (repeatedly) belated arrival at work. [Footnote 3: See Appendix 1 for a display of the entire story as presented to participants.] After having read the story outline, 27 CBCA criteria derived from the model of Volbert and Steller were presented on the questionnaire, with the sequence of criteria presentation being adjusted to the model's structure. For each criterion, we gave a short abstract description illustrated by an example embedded into the story outline. For instance, we first described the criterion spontaneous corrections by phrasing the question in the following way: "Without being asked, would you refute parts of the information you had already provided at an earlier stage and revise them?" Subsequently, the criterion was illustrated by means of the following story-related example: "Oh no, that was wrong what I had said earlier. In fact, I was holding the folder already in my hands at this time point."
Before the first criterion was presented, we further instructed participants not to pay too much attention to the upcoming examples, but to indicate whether in principle they would rather integrate or avoid the criterion in question. Thereby, we asked participants to also consider to what degree they would feel confident to integrate the criterion in their fabricated accounts. With this instruction, we intended to strengthen participants' motivation to assess the strategic meaning of the criteria in a thorough and attentive way. For each criterion, participants indicated their strategic assessment on a five-point scale (−2: "No, this would strongly weaken my credibility"; −1: "No, this would weaken my credibility"; 0: "It does not matter. My credibility would remain unchanged"; +1: "Yes, this would strengthen my credibility"; +2: "Yes, this would strongly strengthen my credibility").

RESULTS

Identical to the investigations of Niehaus et al. and Niehaus, the goal of our analysis was to compare participants' strategic value ratings for each criterion to the neutral value "0" (= no relevant strategic meaning). Significant differences between any such pair of values would indicate that participants were inclined to either avoid or integrate the respective criterion, depending on the valence of the criterion's rating (negative vs. positive). We therefore conducted one-sample t-tests to assess for each criterion whether its average strategic rating differed significantly from the value "0" at the p < 0.05 level. To account for the inflated Type I error rate due to multiple t-tests, the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995) was performed, which controls the expected proportion of falsely rejected null hypotheses (false positives). In consideration of the rather exploratory nature of our study, we preferred this method over Bonferroni corrections, since the latter greatly increase the probability of a Type II error. With the false discovery rate set at 5%, we can safely conclude that at least 95% of the criteria ratings that were found to be significant were correctly identified as such.

The results are presented in Table 4, with the criteria structured according to the revised model of Volbert and Steller. Viewed irrespective of criteria dimension or classification, the strategic value ratings for 18 of the 24 criteria differed significantly from "0", indicating that participants considered most criteria to be strategically relevant when deceiving. The effect sizes of the significantly rated criteria ranged from d = 0.19 to d = 1.28. For 12 of these 18 criteria the value ratings were negative, while the remaining 6 criteria received positive value ratings. In sum, then, the criteria can be broadly categorized as strategically relevant vs. strategically not relevant from the perspective of deceivers. As further hypothesized, the strategically relevant criteria were found to carry strategic meaning of either positive or negative valence. Regarding cognitive criteria, the results showed that participants tended to ascribe strategic meaning of different valence to them.
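For readers who wish to replicate this style of analysis, the following sketch implements one-sample t-tests with Benjamini-Hochberg correction in Python (SciPy); the two-sided p-values and the data layout are our simplifications, not the authors' exact procedure:

```python
import numpy as np
from scipy import stats

def rating_tests(ratings, q=0.05):
    """One-sample t-tests of mean strategic ratings against 0, with
    Benjamini-Hochberg control of the false discovery rate at level q.
    `ratings` maps criterion name -> array of ratings on the -2..+2 scale.
    (Two-sided p-values for simplicity; the paper used directional tests.)"""
    names = list(ratings)
    p = np.array([stats.ttest_1samp(ratings[n], 0.0).pvalue for n in names])
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= q * np.arange(1, m + 1) / m   # BH step-up criterion
    k = (np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    significant = {names[i] for i in order[:k]}
    return {n: dict(mean=float(np.mean(ratings[n])), pvalue=float(pv),
                    significant=(n in significant))
            for n, pv in zip(names, p)}
```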
This pattern of mixed valence among the cognitive criteria, in fact, supports our predictions, since on closer inspection a clear difference between the two dimensions of cognitive criteria was indeed visible: taking only statistically significant value ratings into account, all ratings for memory-related (group 1) criteria were of positive valence. Among these criteria the effect sizes were of medium strength (d > 0.5; Cohen, 1988), except for temporal information (d = 0.19). The value ratings for 6 of the 10 memory-related criteria were statistically non-significant, among which the strategic ratings of 3 criteria (own thoughts, sensory impressions, attribution of perpetrator's mental state) were tentatively negative. Put another way, if participants attributed strategic meaning to a criterion from group 1 (in terms of value ratings that differed significantly from "0"), the valence was exclusively positive. In sharp contrast, for script-deviant (group 2) criteria all ratings assumed significant negative values, with the strength of their effect sizes ranging from small (d > 0.2; Cohen, 1988) to large (d > 0.8; Cohen, 1988).

DISCUSSION

Our findings indicate that on the motivational level, most CBCA criteria are clearly distinguishable into criteria that liars are inclined to integrate vs. criteria that liars are inclined to avoid in their statements. For half of the 18 strategically relevant criteria (as defined by value ratings differing significantly from "0") we obtained effect sizes of at least medium strength, which may be interpreted as evidence for the practical significance of these findings. Furthermore, even though the three-group structure of the revised model goes beyond simply dichotomizing the criteria into those bearing positive vs. negative valence, it nonetheless yielded a largely homogeneous pattern of strategic value ratings. In comparison to the 5-group classification used in the two previous studies (Niehaus et al.; Niehaus, 2008), a considerably higher degree of compatibility within each criteria group was observable. For illustration purposes, Table 5 applies the results of all three studies to the structure of the revised model, enabling readers to examine graphically to what extent the studies' results tally with each other. Inspecting the valence of the strategic ratings at the group level, the degree of compatibility appears to be lowest for memory-related criteria of group 1. That is, in our study the strategic ratings of more than half of the criteria in this group failed to reach significance and in part showed a directional tendency opposite to the direction of the strategically relevant criteria. If collectively examined, however, attribution of perpetrator's mental state remains the only criterion to which no (positive) strategic meaning was attributed in any of the three studies⁴ (see Table 5). Nonetheless, the overall picture for criteria of group 1 remains less consistent than for the criteria of the other two groups, as positive ratings that were significant in each of the three studies (2 out of 7 criteria) were the exception rather than the rule. Viewed the other way around, however, it is crucial to note that none of the three investigations yielded significantly negative ratings for memory-related criteria, a pattern which sharply contrasts with the results found for script-deviant and strategy-based criteria. Motivational considerations are therefore of little avail in ascribing diagnostic value to memory-related criteria, as the underlying rationale requires that liars would typically avoid such contents.
Instead, only considerations on the cognitive level may explain why these specific criteria are more likely to occur in true than in fabricated statements, implying that when lying certain contents are more difficult to produce than when telling the truth. Regarding script-deviant criteria of group 2, our results showed that they consistently carry negative strategic meaning for laypersons. These results correspond well with the findings previously reported by Niehaus et al. and Niehaus, as illustrated by Table 5 (significant negative ratings across all three studies⁵ for 2 out of 3 criteria). In contrast to memory-related criteria, motivational considerations thus appear relevant when assessing the diagnostic value of script-deviant criteria, considering that deceivers typically intend to avoid their production.

Note to Table 4: Criteria in green font color received significant positive value ratings, while criteria in red font color indicate significant negative ratings. (A) For results contradictory to our hypotheses, the p-values from two-tailed tests are reported. Otherwise, the p-values were derived from one-tailed tests, since the testing hypotheses were directional.

As pointed out before, both memory-related (group 1) and script-deviant (group 2) criteria were originally classified as being cognitively related. Interestingly, even though the newly made distinction of memory-related vs. script-deviant criteria was originally deduced from considerations on the cognitive level (assuming that different cognitive processes underlie their production), the hypothesized differences seem to translate to the motivational level as well. In this way, the obtained pattern of strategic value ratings clearly corroborated the utility of distinguishing between two groups of cognitive criteria. Other than group 1 and group 2 criteria, strategy-based (group 3) criteria are classified as motivational criteria, which implies that their validity depends largely on the assumption that deceivers, for strategic reasons, avoid producing them. While our results indeed showed consistent negative ratings for 7 of the 9 criteria (with 4 of them being rated significantly negative across all three studies; see Table 5), participants in our study attributed positive strategic meaning to the criteria self-deprecation and pardoning the perpetrator. Participants in the investigation of Niehaus et al., on the other hand, had rated the same criteria significantly negative. Possibly, the discrepant findings between the two investigations might be attributable to their variations in context: while the story outline of the current study revolved around a rather ordinary everyday work situation, the context in Niehaus et al.'s study bore graver ramifications and entailed false allegations of rape. Such a relationship between context and valence of the rating would, in fact, correspond well with the proposition of Niehaus et al., predicting that the negative strategic meaning of self-deprecation and pardoning the perpetrator depends on sufficient contextual gravity to render elements related to self-criticism inconceivable. Within less severe contexts, on the other hand (i.e., scenarios typically used in laboratory studies, such as accusations of minor theft or insurance fraud), laypersons would ascribe positive strategic value to these elements, believing that they make them appear more amiable and trustworthy. It is not clear, however, whether the negative strategic ratings reported by Niehaus et al.
primarily reflect the sexual connotation of the context or rather the severe ramifications associated with it, leaving open the question under which specific circumstances the criteria could be valid indicators of truthfulness. Furthermore, empirical findings from several laboratory studies appear to dispute their validity in indicating true testimony. Amado et al., for instance, identified in their meta-analysis self-deprecation and pardoning the perpetrator as the only criteria that failed to discriminate between true and fabricated statements, while Vrij even found that in two out of ten studies, self-deprecation appeared significantly more often in fabricated than in true statements. Crucially though, the design and nature of laboratory studies typically vary in important aspects from forensic interrogation settings (Volbert and Steller, 2014), including but not limited to the gravity of the context in which the interview takes place. Inferring from these findings that self-deprecation and pardoning the perpetrator are in and of themselves unsuitable for indicating true testimony may therefore be premature. Instead, further investigation seems warranted to explicate the exact contingencies under which laypersons are inclined to avoid rather than promote their production.

Note to Table 5: p < 0.05: (a) study of Niehaus et al.; (b) study of Niehaus; (c) current study. Criteria in green font color received significant positive ratings in at least one of the three studies (and no significant negative ratings in any of the other studies). Vice versa, criteria in red font color received significant negative ratings (and no significant positive ratings in any of the other studies). If a criterion received both significant positive and negative value ratings across different studies, a blue font color was applied. Additional bold font highlights that the criterion was rated significantly across all three studies. (d) For reasons of clarity, the structure presented in this article differs slightly from the version originally presented by Volbert and Steller; see footnote 2 for details. Also, Table 5 excludes criteria that were previously investigated by Niehaus et al. and Niehaus, but that the authors of the revised model had allocated to the "statement as a whole" category (quantity of details, unstructured production) or had not adopted at all (justifying memory gaps/uncertainties, spontaneous clarifications).

Viewed collectively, the studies' results may also hint, if to a weaker degree, at additional criteria whose strategic meaning is context-dependent. Concerning memory-related (group 1) criteria, for instance, context-dependency may explain why participants rated the first five criteria uniformly within the two studies that introduced a nearly identical story outline, but differently in the investigation that referred to a scenario with considerably graver ramifications. From a purely theoretical perspective, it seems further possible that the strategic meaning of strategy-based (group 3) criteria may depend on the underlying context as well. For instance, some of these criteria pertain to contents that in true accounts reflect the expression of genuine mnemonic processes, such as admitting lack of memory or efforts to remember. From their own experience liars may be well aware that memories fade with time, and thus may ascribe rather positive strategic meaning to these contents when the event in question dates back far enough in time.
Since all three studies introduced scenarios in which only brief time periods lay between the statement and the event to be imagined, their paradigms would be unsuitable for detecting such forms of context-dependency. Future studies could examine this issue by implementing scenarios that differ with regard to the length of time that has passed between the event and statement delivery.

Limitations

Some important limitations of our study deserve attention. As we made use of a questionnaire, we cannot be sure whether participants correctly understood every example that we provided for illustrating the criteria. Furthermore, our conclusions about content-related deception strategies are based on averaged findings that may not apply equally well across individual cases. For instance, Niehaus found that specific content-related deception strategies vary between different age groups, suggesting that the developmental stage of a person may mediate the strategic meaning he or she ascribes to a criterion.⁶ Most importantly, we only investigated how laypeople rate the strategic meaning of the criteria in theory but did not test in which ways their content-related deception strategies translate to the practical level. Considering that the potential outcome for the liar can strongly affect his or her behavior associated with deception (Porter and ten Brinke, 2010), it seems reasonable to assume that participants' hypothetical use of content criteria in fictitious scenarios may differ from their actual verbal performance in real-life forensic settings. Future research would first need to explore or even manipulate the deception strategies of participants (i.e., pointing out to them the strategic meaning of criteria from the perspective of forensic practitioners), and subsequently motivate participants to successfully deceive within an ecologically valid, high-stakes interrogation setting.⁷ Such an approach would allow examining the relationship between the strategic meaning that statement providers ascribe to a criterion and the criterion's subsequent occurrence in their fabricated statements.

Conclusions

The current study demonstrated that CBCA criteria differ in their strategic meaning and that the three-dimensional structure of the revised model of Volbert and Steller is suitable for representing these differences. A few exceptions (such as self-deprecation and pardoning the perpetrator carrying strategic value opposite to the valence of their respective group) notwithstanding, our group-based predictions regarding the strategic meaning of the criteria were largely confirmed. That is, laypersons tended to ascribe positive strategic meaning to criteria related to episodic autobiographical memory (group 1) but tended to ascribe negative strategic meaning to criteria related to script-deviant information (group 2) and efforts of strategic self-presentation (group 3). In practical terms, our results thus provide valuable input for forensic practitioners in appraising the diagnostic value of the criteria: the fact that deceivers typically intend to refrain from simulating script-deviant (group 2) or strategy-based (group 3) criteria strengthens their validity in indicating true statements.
In contrast, no such avoidance inclinations are to be expected for memory-related (group 1) criteria, implying that the mere presence of these criteria does not automatically support a statement's truthfulness.⁷ Such generalized guidelines can only be of heuristic value, however, since positive strategic meaning is a prerequisite rather than an actual indication of a criterion's emergence: whether the criterion occurs in the fabricated statement then depends on the statement giver's ability to produce such content (primary vs. secondary deception; Köhnken, 1990). More elaborate assessments of a criterion's diagnostic value thus necessitate additional insight into the cognitive component; above all, knowledge about the cognitive difficulty associated with the criterion's production is required. Such insight, combined with our established knowledge about the strategic meaning of the criteria, would then constitute a solid foundation for optimally assessing their diagnostic value.

⁷ For definitions of high-stakes, experimentally realistic lie detection scenarios, as well as their effects on lie detection accuracy, see O'Sullivan et al.

DATA AVAILABILITY

The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.

ETHICS STATEMENT

This study was carried out in accordance with the Ethical Guidelines of the German Psychological Society (DGPs). All subjects gave informed consent in accordance with these guidelines. Ethical approval was not deemed necessary as there was no foreseeable risk of harm or discomfort for participants.

AUTHOR CONTRIBUTIONS

RV, SN, and SW contributed to the conception and design of the study; SW carried out the experiment and organized data acquisition; BM and SW performed the statistical analysis; BM wrote the first draft of the manuscript; BM and RV wrote sections of the manuscript; RV, BM, and SN contributed to manuscript revision. All authors read and approved the submitted version.

FUNDING

The work of BM was individually supported by the Elsa Neumann Scholarship of Berlin. Otherwise, this research received no specific grant from any funding agency in the public, commercial, or non-profit sectors.

APPENDIX 1: DISPLAY OF THE ENTIRE STORY AS PRESENTED TO PARTICIPANTS (TRANSLATED FROM GERMAN)

Dear participants, we are interested in your strategies for making a fabricated story appear credible. There are no "wrong" answers; we will not assess your answers on moral grounds, nor do we want to know whether you would pursue such strategies in real life. Please read the instructions carefully and answer all questions in their order of appearance. For your responses, please put yourself into the perspective outlined in the following story:

For 6 weeks you have been holding a job position you had long desired. However, within these first few weeks some things went wrong: twice already you have shown up to work late. Your boss is quite irritated about this, and since you are still on probation you should by no means show up late a third time. Your boss made clear that if this should happen again, you will lose your position. Yet, it happens again: you oversleep in the morning! You hurry to the tram, where you happen to meet your neighbor Mr Schneider, who is quite an irritable fellow. Once he had even punctured the tires of your bike. Running into this guy was the last thing you needed now!
During the tram ride you contemplate how best to explain yourself to your boss, and looking at your maliciously smirking neighbor you come up with the following story: "This morning, right before I was about to leave for work, I went down to the cellar to fetch a folder with work-related documents. They contain my personal notes that I intended to use for work. So, here is what happened: I was unlocking the door to the cellar, and as always, I left the key in the lock. Upon entering the cellar, I heard some noise behind me. As I turned around, Mr Schneider (my neighbor, who constantly causes trouble) was standing outside. He quickly moved forward and locked the cellar door. I called and pounded on the door for quite a while until finally an unfamiliar person showed up and unlocked it. Luckily, the key had been left in the lock on the outside. The unfamiliar person moved on, and I quickly made my way to work." After having heard this story, your boss remarks: "Oh, Mr Schneider, I do know this guy! He works as a doorman for my wife's company! Maybe I should talk to him; what he just did is hardly acceptable." Then your boss instructs you to tell him again what had happened, as precisely as possible. He lets you know that he will not talk to Mr Schneider if he finds your story convincing. Otherwise, should he have doubts about your story, he will talk to Mr Schneider for clarification. Clearly, it is crucial now that your boss believes your story. You certainly do not want to lose your job! You have even wrongly accused another person: Mr Schneider would, of course, deny the allegations, so your only hope is to convince your boss with your story. Otherwise, you will lose your job for sure.
/**
 * @file message_digest.cpp
 * @author Jichan (<EMAIL> / http://ablog.jc-lab.net/ )
 * @date 2019/07/19
 * @copyright Copyright (C) 2019 jichan.
 *            This software may be modified and distributed under the terms
 *            of the Apache License 2.0. See the LICENSE file for details.
 */

#include <jcp/message_digest.hpp>
#include <jcp/provider.hpp>
#include <jcp/security.hpp>

namespace jcp {

    // Looks up a MessageDigest implementation by algorithm name. If a
    // provider is given, only that provider is searched; otherwise the
    // global Security registry is consulted.
    std::unique_ptr<MessageDigest> MessageDigest::getInstance(const char* name, std::shared_ptr<Provider> provider) {
        MessageDigestFactory* factory = provider ? provider->getMessageDigest(name) : Security::findMessageDigest(name);
        if (factory)
            return factory->create();
        return nullptr;
    }

    // Same lookup, keyed by numeric algorithm identifier instead of name.
    std::unique_ptr<MessageDigest> MessageDigest::getInstance(uint32_t algo_id, std::shared_ptr<Provider> provider) {
        MessageDigestFactory* factory = provider ? provider->getMessageDigest(algo_id) : Security::findMessageDigest(algo_id);
        if (factory)
            return factory->create();
        return nullptr;
    }

}
// AndEquals creates the conjunction of the QueryBuilder's current query and
// the condition "tag = operand".
func (qb *QueryBuilder) AndEquals(tag string, operand interface{}) *QueryBuilder {
	qb.condition.Tag = tag
	qb.condition.Op = equalString
	qb.condition.Operand = qb.operand(operand)
	return NewQueryBuilder(qb.and(stringIterator(qb.conditionString())))
}
1. Field

The present invention generally relates to the field of self-adaptive circuit design. More specifically, the present invention relates to scaling a load device with frequency in a phase interpolator.

2. Background

Phase interpolators are important components in high-speed timing circuits, where clock and data recovery (CDR) must be performed before data can be decoded. In general, in a clock recovery system, a reference clock signal of a particular clock frequency is generated together with a number of different clock signals with the same frequency but with different phases. A conventional method to generate a number of clocks with the same frequency but with different phases is to use a voltage-controlled delay loop (VCDL). Phase interpolators can be implemented to interpolate between delay stages in the VCDL to generate finer phase spacing, thus creating more clock signals. These clock signals are compared to the phase and frequency of an incoming data stream, where one or more clock signals are selected for data recovery.

FIG. 1(a) illustrates a conventional phase interpolator 100 with a0- and a1-weighted current sources. Parameters a0 and a1 refer to a number of pair transistors 111 and 121 (e.g., current tail sources), respectively, connected to each other in parallel, where the amount of current generated by differential pairs 110 and 120 varies according to the number of pair transistors 111 and 121. For instance, phase interpolator 100 can have input signals Φ0 and Φ1 that are 45° apart. The phase output of phase interpolator 100 can be varied using a current summation of differential pairs 110 and 120. That is, for a given ratio a0:a1, the phase output from phase interpolator 100 varies accordingly. For example, as illustrated in FIG. 1(b), a ratio of a0:a1 = 4:4 can result in a 22.5° phase output, a ratio of a0:a1 = 7:1 can result in a 5.7° phase output, and a ratio of a0:a1 = 1:7 can result in a 39.4° phase output.

A design consideration of phase interpolator 100 is the slew rate of its phase output. For instance, as illustrated in FIG. 2, if the slew rate of input signal Φ0 is too fast, the weighted sum of the ratio a0:a1 becomes highly non-linear at output 210. In order to ensure proper phase interpolation with a relatively linear phase output (with respect to the current weighting ratio a0:a1), the time constant of the phase output (τPI) should be at least twice the time separation (ΔT) between the two input phases Φ0 and Φ1 (τPI > 2·ΔT). This relation sets a lower bound on the phase output time constant. Conversely, the time constant should not be too large because this reduces the frequency bandwidth and output swing of phase interpolator 100. Since the time constant is inversely proportional to frequency (τPI ∝ 1/f), the operating frequency of phase interpolator 100 sets an upper bound on the phase output time constant. These lower and upper bound time constant constraints dictate the frequency range of phase interpolator 100. What is needed is a method or apparatus for scaling a phase interpolator output's slew rate with operating frequency such that a wide range of operating frequencies can be achieved.
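For intuition about the weighted summation described above, the following hypothetical sketch models the interpolator as an ideal phasor-weighted sum of the two inputs. It is a numerical illustration only, not part of the disclosed circuit, and it ignores the slew-rate and time-constant effects the disclosure is concerned with; even so, it lands close to the example outputs quoted for the 4:4, 7:1, and 1:7 weightings.

import math

PHI0_DEG, PHI1_DEG = 0.0, 45.0  # input phases, 45 degrees apart

def interpolated_phase(a0: int, a1: int) -> float:
    # Phase (in degrees) of the current-weighted vector sum of the inputs.
    phi0, phi1 = math.radians(PHI0_DEG), math.radians(PHI1_DEG)
    x = a0 * math.cos(phi0) + a1 * math.cos(phi1)
    y = a0 * math.sin(phi0) + a1 * math.sin(phi1)
    return math.degrees(math.atan2(y, x))

for a0, a1 in [(4, 4), (7, 1), (1, 7)]:
    print(f"a0:a1 = {a0}:{a1} -> {interpolated_phase(a0, a1):.1f} degrees")

# Prints 22.5, 5.2, and 39.8 degrees, close to the 22.5, 5.7, and 39.4
# degree examples given in the text for an actual circuit.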
Carbon Fiber Reinforced Composite Toughness and Structural Integrity Enhancement by Integrating Surface-Modified Steel Fibers

Carbon fiber reinforced plastic (CFRP) was integrated with steel fibers in order to improve its toughness and to enhance its structural integrity during a crash. An epoxy system with an internal mold release was chosen as the matrix system. The surface modification of the steel fibers was done by sandblasting and twisting in order to improve fiber-matrix adhesion through a mechanical interlocking mechanism. In pull-out tests, the surface modification doubled the adhesive strength of the steel fibers. The steel fiber integration increased the maximum bending stress of the composites by up to 20%, whereas the elongation at break was reduced to 2.3%. The energy dissipation factor of the steel-fiber-integrated CFRPs was also reduced compared to CFRPs without steel fibers. An increase in fracture toughness, amounting to 17 J, was observed for the CFRPs with steel fibers.
// AddFormReader adds a new reader as a form part to the MultipartReader.
func (mr *MultipartReader) AddFormReader(r io.Reader, name, filename string, length int64) {
	form := fmt.Sprintf("--%s\r\nContent-Disposition: form-data; name=\"%s\"; filename=\"%s\"\r\n\r\n", mr.Boundary, name, filename)
	// Emit the part header first, then the part body of the given length.
	mr.AddReader(strings.NewReader(form), int64(len(form)))
	mr.AddReader(r, length)
}
import json
import logging
from typing import Optional

from google.protobuf import json_format

# The helpers used below (install_requirements, stage_file, prepare_cmd,
# Process, extract_job_id_and_location) and gcp_resources_pb2 are provided
# elsewhere in this package.


def create_python_job(python_module_path: str,
                      project: str,
                      gcp_resources: str,
                      location: str,
                      temp_location: str,
                      requirements_file_path: str = '',
                      args: Optional[str] = '[]'):
  """Launches an Apache Beam python file as a Dataflow job and records it.

  The launched job's resource URI is written as JSON to the file named by
  `gcp_resources`.
  """
  job_id = None
  if requirements_file_path:
    install_requirements(requirements_file_path)
  args_list = []
  if args:
    args_list = json.loads(args)
  python_file_path = stage_file(python_module_path)
  # Stage any file-valued arguments locally before launching.
  for idx, param in enumerate(args_list):
    if param in ('--requirements_file', '--setup_file'):
      args_list[idx + 1] = stage_file(args_list[idx + 1])
      logging.info('Staging %s at %s locally.', param, args_list[idx + 1])
  cmd = prepare_cmd(project, location, python_file_path, args_list,
                    temp_location)
  sub_process = Process(cmd)
  # Scan the runner's output for the job id and location it reports.
  for line in sub_process.read_lines():
    logging.info('DataflowRunner output: %s', line)
    job_id, location = extract_job_id_and_location(line)
    if job_id:
      logging.info('Found job id %s and location %s.', job_id, location)
      # Record the launched job so downstream steps can track or cancel it.
      job_resources = gcp_resources_pb2.GcpResources()
      job_resource = job_resources.resources.add()
      job_resource.resource_type = 'DataflowJob'
      job_resource.resource_uri = f'https://dataflow.googleapis.com/v1b3/projects/{project}/locations/{location}/jobs/{job_id}'
      with open(gcp_resources, 'w') as f:
        f.write(json_format.MessageToJson(job_resources))
      break
  if not job_id:
    raise RuntimeError(
        'No dataflow job was found when running the python file.')
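A minimal invocation sketch follows. The project ID, bucket paths, and output file below are placeholder values for illustration; they are not defined anywhere in this module.

# Hypothetical invocation with placeholder values.
create_python_job(
    python_module_path='gs://example-bucket/src/beam_job.py',  # placeholder
    project='example-project',                # placeholder project ID
    gcp_resources='/tmp/gcp_resources.json',  # where the job URI gets written
    location='us-central1',
    temp_location='gs://example-bucket/tmp',
    args='["--setup_file", "gs://example-bucket/src/setup.py"]',
)

After the call returns, the file named by gcp_resources contains the Dataflow job URI serialized as JSON, which downstream steps can use to poll or cancel the job.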
/* Operating system specific defines to be used when targeting GCC for
   hosting on Windows32, using GNU tools and the Windows32 API Library.
   Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003
   Free Software Foundation, Inc.

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */

/* Most of this is the same as for Cygwin, except for changing some
   specs.  */

/* Mingw GCC, unlike Cygwin's, must be relocatable. This macro must
   be defined before any other files are included.  */
#ifndef WIN32_NO_ABSOLUTE_INST_DIRS
#define WIN32_NO_ABSOLUTE_INST_DIRS 1
#endif

#include "i386/cygwin.h"

#define TARGET_EXECUTABLE_SUFFIX ".exe"

/* See i386/crtdll.h for an alternative definition.  */
#define EXTRA_OS_CPP_BUILTINS() \
  do \
    { \
      builtin_define ("__MSVCRT__"); \
      builtin_define ("__MINGW32__"); \
    } \
  while (0)

#undef TARGET_OS_CPP_BUILTINS /* From cygwin.h.  */
#define TARGET_OS_CPP_BUILTINS() \
  do \
    { \
      builtin_define ("_WIN32"); \
      builtin_define_std ("WIN32"); \
      builtin_define_std ("WINNT"); \
      builtin_define ("_X86_=1"); \
      builtin_define ("__stdcall=__attribute__((__stdcall__))"); \
      builtin_define ("__cdecl=__attribute__((__cdecl__))"); \
      builtin_define ("__declspec(x)=__attribute__((x))"); \
      if (!flag_iso) \
	{ \
	  builtin_define ("_stdcall=__attribute__((__stdcall__))"); \
	  builtin_define ("_cdecl=__attribute__((__cdecl__))"); \
	} \
      EXTRA_OS_CPP_BUILTINS (); \
      builtin_assert ("system=winnt"); \
    } \
  while (0)

/* Specify a different directory for the standard include files.  */
#undef STANDARD_INCLUDE_DIR
#define STANDARD_INCLUDE_DIR "/usr/local/mingw32/include"

#undef STANDARD_INCLUDE_COMPONENT
#define STANDARD_INCLUDE_COMPONENT "MINGW"

#undef CPP_SPEC
#define CPP_SPEC "%{posix:-D_POSIX_SOURCE} %{mthreads:-D_MT}"

/* For Windows applications, include more libraries, but always include
   kernel32.  */
#undef LIB_SPEC
#define LIB_SPEC "%{pg:-lgmon} %{mwindows:-lgdi32 -lcomdlg32} \
                  -luser32 -lkernel32 -ladvapi32 -lshell32"

/* Include in the mingw32 libraries with libgcc.  */
#undef LINK_SPEC
#define LINK_SPEC "%{mwindows:--subsystem windows} \
  %{mconsole:--subsystem console} \
  %{shared: %{mdll: %eshared and mdll are not compatible}} \
  %{shared: --shared} %{mdll:--dll} \
  %{static:-Bstatic} %{!static:-Bdynamic} \
  %{shared|mdll: -e _DllMainCRTStartup@12}"

#undef LIBGCC_SPEC
#define LIBGCC_SPEC \
  "%{mthreads:-lmingwthrd} -lmingw32 -lgcc -lmoldname -lmingwex -lmsvcrt"

#undef STARTFILE_SPEC
#define STARTFILE_SPEC "%{shared|mdll:dllcrt2%O%s} \
  %{!shared:%{!mdll:crt2%O%s}} %{pg:gcrt2%O%s}"

/* The MS runtime does not need a separate math library.  */
#undef MATH_LIBRARY
#define MATH_LIBRARY ""

/* Output STRING, a string representing a filename, to FILE.
   We canonicalize it to Unix format (backslashes are replaced by
   forward slashes).
*/
#undef OUTPUT_QUOTED_STRING
#define OUTPUT_QUOTED_STRING(FILE, STRING) \
  do { \
    const char *_string = (STRING); \
    char c; \
 \
    putc ('\"', (FILE)); \
 \
    while ((c = *_string++) != 0) \
      { \
	if (c == '\\') \
	  c = '/'; \
 \
	if (ISPRINT (c)) \
	  { \
	    if (c == '\"') \
	      putc ('\\', (FILE)); \
	    putc (c, (FILE)); \
	  } \
	else \
	  fprintf ((FILE), "\\%03o", (unsigned char) c); \
      } \
 \
    putc ('\"', (FILE)); \
  } while (0)

/* Define as short unsigned for compatibility with the MS runtime.  */
#undef WINT_TYPE
#define WINT_TYPE "short unsigned int"
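As a quick sanity check on what the macro emits, here is a hypothetical Python model of the same quoting rules (backslash-to-slash canonicalization, escaping of embedded quotes, octal escapes for non-printable bytes). It is an illustration only, not part of GCC, and its printability test only approximates C's ISPRINT.

# Hypothetical behavioral model of OUTPUT_QUOTED_STRING.
def output_quoted_string(s: bytes) -> str:
    out = ['"']
    for b in s:
        c = ord('/') if b == ord('\\') else b  # canonicalize \ to /
        if 0x20 <= c < 0x7f:                   # rough stand-in for ISPRINT
            if c == ord('"'):
                out.append('\\')               # escape embedded quotes
            out.append(chr(c))
        else:
            out.append('\\%03o' % c)           # octal escape, as fprintf does
    out.append('"')
    return ''.join(out)

print(output_quoted_string(rb'C:\mingw\include'))  # -> "C:/mingw/include"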
Former Minnesota governor Jesse Ventura in 2014. (Hannah Foslien/Getty Images)

Jesse Ventura, the former Navy SEAL turned professional wrestler who eventually became Minnesota's governor, has never been fond of decorum or the media. As "The Body," Ventura provoked ire inside the ring as a blond bully and went on to become "the most controversial announcer in WWE, calling it like he saw it on a weekly basis, no matter what political correctness might dictate," according to his WWE profile. He has written a book advocating for marijuana legalization, taught about wrestling and politics at Harvard and found himself in a legal battle with the estate of "American Sniper" Chris Kyle that earned him sharp criticism from some in the military community.

Now, Ventura will star in his own reality show, "The World According to Jesse," on RT America, the Washington-based branch of Russian state television. Known previously as Russia Today, RT is funded by the Russian government and describes itself as a TV channel "for viewers who want to Question More." Critics call RT a propaganda tool with poor journalism standards, as The Washington Post has reported.

"RT covers stories overlooked by the mainstream media, provides alternative perspectives on current affairs, and acquaints international audiences with a Russian viewpoint on major global events," according to the channel's website. But, as The Washington Post's Adam Taylor and Paul Farhi reported, RT initially focused on reporting events in Russia and combating negative news about Russia, then shifted to offering an alternative view of international events. Its coverage of the 2016 U.S. presidential election appeared to favor Donald Trump, whose interview with Larry King during last year's campaign was aired on RT America.

Ventura told the Minneapolis Star Tribune that he had been reassured by RT that he will not be censored. "I have total artistic control and I can talk about anything I want," he told the Star Tribune. "We're more interested in talking about our country. I didn't join RT to report on Russia."

Ventura's show premieres this month and will air on Fridays, with weekend reruns, according to RT. The episodes will capture "the feeling of freedom," Ventura said in a promotional video for the program, which showed him zooming down an isolated road atop a motorcycle, wearing a sleeveless leather jacket. "Everyone in the world should experience the feeling of freedom, and you get it on the open road," he says. "Welcome to my world. Come along for the ride."

According to a news release from February, Ventura's show will tackle "both the current news agenda and deeper issues such as government hypocrisy and corporate deception, with Jesse's distinctive take on stories sidelined by the mainstream media." "I am working for the enemy of mainstream media now," Ventura told the Star Tribune.

Then-Minnesota Gov. Jesse Ventura gestures during an address at the National Press Club in Washington in 1999. (Susan Walsh/AP)

The RT news release said Ventura will conduct in-depth interviews and "reporting from the field" with his "uncensored, bold and bare-knuckled approach." "What you will hear from me is opinions, not agendas," Ventura said in a statement. "I look forward to holding our government accountable.
I will be exercising my First Amendment rights with no filters." Ventura previously hosted a different talk show, "Off the Grid," for Ora TV, an online channel. But last year his new bosses demanded he take a pay cut, the Star Tribune reported, so instead he signed a contract with RT. He'll do 16 shows in 2017 and 16 more in 2018, according to the Star Tribune.
Balloon mitral valvotomy in the youngest documented rheumatic mitral stenosis patient

Juvenile rheumatic mitral stenosis (MS) is common in the Indian subcontinent. Early recognition and management are essential. Rarely, rheumatic MS may occur in children under 5 years of age, in whom rapid hemodynamic progression and cardiac morbidity and mortality occur. Severe or symptomatic MS at preschool age requires urgent and meticulous decision making. The condition of the valve and the wishes of the parents may complicate management decisions. Percutaneous transmitral commissurotomy (PTMC) may, therefore, become the only lifesaving intervention in these cases unless contraindicated, although the procedure entails considerable technical issues in this age group. Herein, we report a successful balloon mitral valvotomy in a 4-year-old child with severe rheumatic MS (documented since 2 years 6 months of age) presenting with repeated pulmonary edema. To the best of our knowledge, this child is the youngest documented case of established rheumatic heart disease and also one of the youngest to undergo the PTMC procedure. This report supports the clinical usefulness of PTMC in childhood MS; however, it raises pertinent technical issues that need a proper consensus. © 2015 Wiley Periodicals, Inc.
Subjects include Len Cariou in Sweeney Todd, Glenn Close's role in Barnum, and her Tony nomination for Barnum. Includes an article from the Flat Hat. Glenn Close Papers, Special Collections Research Center, Swem Library, College of William and Mary.
1. Field of the Invention

The present invention relates to a polyorganosiloxane graft copolymer that is excellent in impact resistance, low-temperature characteristics and weather resistance, and that further gives molded products excellent in bond strength to paint film applied thereon.

2. Description of the Related Art

As a thermoplastic resin excellent in impact resistance, low-temperature characteristics and weather resistance, a polyorganosiloxane graft copolymer obtained by graft-polymerizing a monomer having an ethylenically unsaturated bond (e.g. acrylonitrile, styrene) onto polyorganosiloxane rubber is disclosed in Japanese Patent Application Kokai No. 61-106614. Also, a graft copolymer obtained by graft-polymerizing particular amounts of an epoxy group-containing vinyl monomer and another vinyl monomer onto a polyorganosiloxane polymer copolymerized with a graft-linking agent as a copolymer component is disclosed in Japanese Patent Application Kokai No. 2-138360.

However, the former graft copolymer, disclosed in Japanese Patent Application Kokai No. 61-106614, has problems although it is excellent in impact resistance, low-temperature characteristics and weather resistance. Firstly, in the course of production of the graft copolymer, an organosiloxane oligomer that is very difficult to remove from the graft copolymer is produced as a by-product. Secondly, when the graft copolymer is molded, this oligomer blooms out to the surface of the molded product, so that when the molded product is painted, the bond strength of the paint film is low. Similarly, the graft copolymer disclosed in Japanese Patent Application Kokai No. 2-138360 also has problems although it is excellent in impact resistance, low-temperature characteristics and weather resistance. That is, the adhesion of paint film applied to molded products of this graft copolymer is not of a very high level, and moreover the adhesion becomes quite poor as the amount of the polyorganosiloxane polymer in the graft copolymer becomes large. This graft copolymer has the further problem that when it is placed in wet heat conditions under pressure, its impact strength becomes remarkably poor.
Normal values of reactive oxygen metabolites in the cord blood of full-term infants measured with a colorimetric method.

The main end-point of the study was to evaluate the normal values of reactive oxygen metabolites (ROMs) in healthy full-term babies. Secondary end-points were differences between groups related to modality of delivery, Apgar score, birth weight, gestational age, and sex. Included were all apparently healthy babies born at our institution between 8 a.m. and 8 p.m., Monday to Friday, with gestational age 37-42 weeks, delivered either vaginally or by caesarean section, and without foetal distress or perinatal asphyxia. ROMs were evaluated by a colorimetric method (d-ROM test) on cord blood immediately after birth. The values are reported in arbitrary units (U. Carr). Statistical analysis was performed by t-test and by multiple and stepwise regression analysis. We analyzed 80 babies with mean birth weight 3301 +/- 446 g and mean gestational age 39.5 +/- 1.0 weeks. The male:female ratio was 1.56 and the median (range) Apgar score was 9 at 1' and 10 at 5'. The babies born by vaginal delivery were 37 out of 80, while the remaining 43 were delivered by cesarean section. Because the two groups did not differ in clinical characteristics, they were considered together for the determination of the mean value of ROMs and indicated as the "total" group. The mean value +/- SD of ROMs of the "total" group was 115.5 +/- 32.6 U. Carr. No significant differences in the mean value of ROMs were found related to type of delivery, birth weight, gestational age, or Apgar score at 1' and 5'. The female infants, however, had a significantly lower mean value of ROMs than the male babies (104.4 +/- 32.2 vs. 120.2 +/- 30.6 U. Carr, respectively; p = 0.031). Multiple and stepwise regression analyses both demonstrated that the sex of the neonate is able to independently influence the value of ROMs (p = 0.025 and p = 0.035, respectively). The main end-point of the study was to determine standard reference values for this method in the healthy full-term infant at birth: the values of ROMs we found in the "total" population are lower than those of healthy adults (between 250 and 300 U. Carr) and similar to those of adults treated with steroids or antioxidant drugs. The finding that female sex independently determines lower values of ROMs at birth compared to male sex allows the speculation that female infants are less prone to oxidative stress in the first moments of life.
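As a consistency check, the reported male vs. female comparison can be approximately reproduced from the summary statistics alone. The sketch below is illustrative: the abstract does not state the per-group sample sizes, so 49 males and 31 females are assumed here, inferred from the total of 80 newborns and the stated 1.56 male:female ratio.

from scipy.stats import ttest_ind_from_stats

# Group sizes (49 M / 31 F) are an assumption inferred from n = 80 and the
# 1.56 male:female ratio; they are not given in the abstract.
t, p = ttest_ind_from_stats(
    mean1=120.2, std1=30.6, nobs1=49,  # males, U. Carr
    mean2=104.4, std2=32.2, nobs2=31,  # females, U. Carr
)
print(f"t = {t:.2f}, p = {p:.3f}")  # about t = 2.21, p = 0.030 (reported: 0.031)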
// TestNotExist tests that the implementation behaves correctly // for paths that do not exist. func TestNotExist( ctx context.Context, t *testing.T, impl file.Implementation, path string) { _, err := impl.Open(ctx, path) assert.True(t, errors.Is(errors.NotExist, err)) _, err = impl.Stat(ctx, path) assert.True(t, errors.Is(errors.NotExist, err)) }
package com.ehi.storm.hello.bolt; import static org.mockito.Matchers.any; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; import java.util.ArrayList; import java.util.List; import org.testng.annotations.Test; import com.kitmenke.storm.bolt.SplitSentenceBolt; import org.apache.storm.topology.BasicOutputCollector; import org.apache.storm.topology.OutputFieldsDeclarer; import org.apache.storm.tuple.Fields; import org.apache.storm.tuple.Tuple; public class SplitSentenceBoltTest { private List<Object> newListWithOneString(String val) { List<Object> list = new ArrayList<Object>(1); list.add(val); return list; } @Test public void legalSentenceShouldEmitFiveWords() { // given Tuple tuple = mock(Tuple.class); when(tuple.getString(0)).thenReturn("milk was a bad choice"); SplitSentenceBolt bolt = new SplitSentenceBolt(); BasicOutputCollector collector = mock(BasicOutputCollector.class); // when bolt.execute(tuple, collector); // then verify(collector).emit(newListWithOneString("milk")); verify(collector).emit(newListWithOneString("was")); verify(collector).emit(newListWithOneString("a")); verify(collector).emit(newListWithOneString("bad")); verify(collector).emit(newListWithOneString("choice")); } @Test public void shouldMakeWordsLowercaseAndRemovePunctuation() { // given Tuple tuple = mock(Tuple.class); when(tuple.getString(0)).thenReturn("Boy, that escalated QUICKLY..."); SplitSentenceBolt bolt = new SplitSentenceBolt(); BasicOutputCollector collector = mock(BasicOutputCollector.class); // when bolt.execute(tuple, collector); // then verify(collector).emit(newListWithOneString("boy")); verify(collector).emit(newListWithOneString("that")); verify(collector).emit(newListWithOneString("escalated")); verify(collector).emit(newListWithOneString("quickly")); } @Test public void shouldDeclareOutputFields() { // given OutputFieldsDeclarer declarer = mock(OutputFieldsDeclarer.class); SplitSentenceBolt bolt = new SplitSentenceBolt(); // when bolt.declareOutputFields(declarer); // then verify(declarer, times(1)).declare(any(Fields.class)); } }
/**
 * Returns spell-check suggestions for the given input, using the
 * repository's rep:spellcheck() XPath function.
 *
 * @param input the text to check
 * @return a list containing the suggested spelling, if any
 * @throws Throwable if the query fails
 */
public List<String> spellCheck(String input) throws Throwable {
    List<String> list = new ArrayList<>();
    try {
        check();
        QueryManager qm = session.getWorkspace().getQueryManager();
        String xpath = "/jcr:root[rep:spellcheck('" + input + "')]/(rep:spellcheck())";
        @SuppressWarnings("deprecation")
        QueryResult result = qm.createQuery(xpath, Query.XPATH).execute();
        RowIterator it = result.getRows();
        if (it.hasNext()) {
            Row row = it.nextRow();
            list.add(row.getValue("rep:spellcheck()").getString());
        }
    } catch (Throwable e) {
        logger.error(e.getMessage(), e);
        throw e;
    }
    return list;
}
LEUCINE TRANSPORT FROM THE VENTRICLES AND THE CRANIAL SUBARACHNOID SPACE IN THE CAT

Abstract: The clearance of leucine from the ventricular and cranial subarachnoid space was studied in cats subjected to ventriculocisternal and ventriculocraniosubarachnoidal perfusions. Clearance from both the ventricles and the subarachnoid space was mediated by transport mechanisms with nonsaturable and saturable components. Net clearance from the subarachnoid space was considerably greater than from the ventricles. Analysis of the transport kinetics revealed the affinity constants (Kt) to be comparable for both compartments, but transport sites appeared to be more numerous in the subarachnoid space (greater Vmax).
If you go to Wikipedia's page for American Novelists, you might notice something strange: of the first 100 authors listed, only a small handful are women. You could potentially blame this on the fact that there simply are more famous male authors than female ones (a-whole-nother can of worms), but the real reasoning is much more intentional. Wikipedia editors have slowly been moving female authors to a subcategory called American Women Novelists so that the original list isn't at risk of "becoming too large." Bad luck, ladies. They need to make room and someone has to go first. Why shouldn't it be unimportant literary folk like Harper Lee, Harriet Beecher Stowe or Louisa May Alcott?

Novelist Amanda Filipacchi was the one who, very recently, first cottoned on to what Wikipedia was doing. The edits, she noticed, have been happening gradually and mostly alphabetically by last name, though in a few special cases the editors jumped ahead because they just couldn't wait for R and T to get Ayn Rand and Donna Tartt off the list. Filipacchi herself was one of the authors to get booted to the subcategory. Please note that there is no subcategory for American Men Novelists.

It appears that many female novelists, like Harper Lee, Anne Rice, Amy Tan, Donna Tartt and some 300 others, have been relegated to the ranks of "American Women Novelists" only, and no longer appear in the category "American Novelists." If you look back in the "history" of these women's pages, you can see that they used to appear in the category "American Novelists," but that they were recently bumped down. Male novelists on Wikipedia, however — no matter how small or obscure they are — all get to be in the category "American Novelists." It seems as though no one noticed.

According to Abigail Grace Murdy of the Melville House blog, only 9% of Wikipedia's editors are female, and women make up only 19% of the site's contributors.

If your reaction is "It's only Wikipedia and at least the women are still on there somewhere," consider that plenty of people do use Wikipedia as their primary resource for information and that, by being left off the American Novelists list, many of these female authors will be entirely ignored, probably not for the first time in their histories.

People who go to Wikipedia to get ideas for whom to hire, or honor, or read, and look at that list of "American Novelists" for inspiration, might not even notice that the first page of it includes far more men than women. They might simply use that list without thinking twice about it. It's probably small, easily fixable things like this that make it harder and slower for women to gain equality in the literary world.

Exactly. The sad fact remains that women don't need any more handicaps in the publishing world, where male authors still get taken more seriously than female ones. This is the reason that Joanne Rowling became J.K. Rowling. This is the reason that the boys I took creative writing with back in college loved that "A.M. Homes dude," but had little to nothing to contribute when talking about Zora Neale Hurston, Mary Gaitskill and Mona Simpson. And it's not an issue of the themes people are attracted to. There are plenty of female writers who write in ways that are thought of as typically masculine (take the aforementioned Homes or Flannery O'Connor for example) just as there are plenty of male writers who write in ways that are considered more feminine. (Remember the great Weiner-Eugenides debate of 2012?)
Maybe that's because the reader's own limited view forces them to see novels as bro-ish or girly when really the difference is between being optimistic or sad, sentimental or stoic, all of which has nothing to do with the gender of the writer. What it all boils down to, once again, is sexism. By allowing male authors to simply be "American Novelists" and forcing female authors to be qualified as "American Women Novelists," you are automatically marking women as different or lesser. It's like complimenting a dog that's learned to wear clothes and walk on its hind legs. Sure, it's doing a great job and it looks adorable, but it's still not quite passing as a human, and an actual person could undoubtedly do better. Except these American Women Novelists aren't "not quite passing." They're not passing at all, because they are some of the best and most important authors in American history, and writing actually comes quite naturally to them.
The paraventricular organ of mormyrid fish: uptake or release of intraventricular biogenic amines? The paraventricular organ of Gnathonemus petersii was investigated with light and electronmicroscopical techniques. It contains high concentrations of dopamine, noradrenaline and serotonin, but the synthesizing enzymes are not or hardly present. Consequently, the cerebrospinal fluid-contacting neurons might pick up their biogenic amines from the ventricular fluid. Dense subependymal axonal plexuses in the everted telencephalon probably release these substances into the ventricle. However, electronmicroscopical observations suggest release rather than uptake by the paraventricular organ. The possible significance of intraventricular release, transport and uptake of biogenic amines is discussed.
/*
 * Copyright (C) 2011 Freescale Semiconductor, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
 * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
 * TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
 * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Author: <NAME> <<EMAIL>>
 */
#include <stdio.h>
#include <fcntl.h>
#include <sys/shm.h>
#include <linux/fs.h>

/* To use the Shared Memory Allocator, an application MUST include
   fsl_ipc_shm.h. */
#include "fsl_ipc_shm.h"

int main(int argc, char *argv[], char *envp[])
{
	void *ret;
	unsigned int *ptr, *ptr1;
	unsigned int *paddr, *vaddr;

	/*
	 * Before using any allocation (shm_alloc) or de-allocation
	 * (shm_free) API, the application MUST call the fsl_shm_init API.
	 *
	 * fsl_shm_init needs to be called ONLY ONCE, during the
	 * initialization of the application.
	 */
	ret = fsl_shm_init(0);
	if (!ret) {
		printf("Initialization of Shared Memory Allocator failed\n");
		return -1;
	}

	/*
	 * shm_alloc - allocates shared memory.
	 * On success, it returns the virtual address of the allocated
	 * memory. On failure, it returns NULL.
	 */
	ptr = (unsigned int *)shm_alloc(0x100000);
	if (!ptr) {
		printf("Memory allocation failed through shm_alloc\n");
		return -1;
	}
	*ptr = 0xdeadbeaf;

	/*
	 * shm_vtop - translates a virtual address of allocated shared
	 * memory to its corresponding physical address.
	 */
	paddr = shm_vtop(ptr);

	/*
	 * shm_ptov - translates a physical address of allocated shared
	 * memory to its corresponding virtual address.
	 */
	vaddr = shm_ptov(paddr);
	printf("shm_alloc: vaddr %p paddr %p\n", vaddr, paddr);

	/*
	 * shm_memalign - allocates an aligned shared memory area.
	 * On success, it returns the virtual address of the aligned
	 * allocation. On failure, it returns NULL.
	 */
	ptr1 = (unsigned int *)shm_memalign(0x100000, 0x1000000);
	if (!ptr1) {
		printf("Aligned memory allocation failed using shm_memalign\n");
		return -1;
	}
	printf("shm_memalign: vaddr %p paddr %p\n", ptr1, shm_vtop(ptr1));

	/*
	 * shm_free - frees memory allocated with shm_alloc or shm_memalign
	 * so that it can be allocated again.
	 */
	shm_free(ptr);
	shm_free(ptr1);
	printf("App finished\n");
	return 0;
}
PROSPECTS FOR THE DEVELOPMENT OF LEGISLATION ON RURAL GREEN TOURISM

The article is devoted to the study of prospects for the development of legislation on rural green tourism. The relevance of the work stems from the urgent need for special legislation that would reflect the specific features of rural green tourism and would effectively develop this area so as to reduce unemployment, overcome poverty, and increase the incomes of the rural population by intensifying non-agricultural activities. The purpose of the study is to identify possible directions for the development of legislation on rural green tourism on the basis of a scientific and theoretical analysis of current and prospective regulations in this area and of developments in legal science. The methodological basis of the study was the dialectical method of scientific knowledge, together with general scientific methods (formal-logical, analytical) and special legal methods (formal-legal, comparative-legal). As a result of this study, proposals were developed to improve the terminology of legislation in the field of rural tourism. Namely, the study justifies replacing the term "rural green tourism" with the term "rural hospitality" in normative legal acts, and using the category "rural tourism" exclusively in the sense of a type of tourism whose specific feature is that it is carried out in rural areas. To increase the effectiveness of the legal regulation of rural hospitality, a proposal was made to adopt special legislation: the Law of Ukraine "On Rural Hospitality in Ukraine." It has been proved that personal peasant farms are the most potentially attractive subjects for the development of rural hospitality in Ukraine. For the practical implementation of this potential, changes are proposed to Part 1 of Art. 1 of the Law of Ukraine "On Personal Peasant Economy" to enable such farms to use their property to provide rural hospitality services. It is proposed that, to clearly distinguish between rural hospitality and rural tourism, the latter should be regulated by the Law of Ukraine "On Tourism" and by other regulations in the field of tourism adopted to implement the provisions of this law. The conclusions obtained can be used in formulating changes to the current legislation of Ukraine and will be useful for research on the specific features of the legal regulation of relations in the field of rural hospitality.
Erratum to: Modeling the Effects of Vaccination and Treatment on Pandemic Influenza

Erratum to: The AAPS Journal, DOI 10.1208/s12248-011-9284-7. On page 429, after the last sentence ("Further, the shape of the epidemic curve also strongly depends on the initial fraction of susceptibles in the population at t0 when (t) is periodic."), as well as after Fig. 11d, the reference below should be inserted.

Bacaër N, Gomes MGM. On the final size of epidemics with seasonality. Bull Math Biol. 2009;71:1954-66.
Role of microRNA Pathway in Mental Retardation

Deficits in cognitive functions lead to mental retardation (MR). Understanding the genetic basis of inherited MR has provided insights into the pathogenesis of MR. Fragile X syndrome is one of the most common forms of inherited MR, caused by the loss of functional Fragile X Mental Retardation Protein (FMRP). MicroRNAs (miRNAs) are endogenous, single-stranded RNAs between 18 and 25 nucleotides in length, which have been implicated in diversified biological pathways. Recent studies have linked the miRNA pathway to fragile X syndrome. Here we review the role of the miRNA pathway in fragile X syndrome and discuss its implications for MR in general.

Mental retardation (MR) is defined as a failure to develop cognitive abilities and achieve an intelligence level that would be appropriate for the particular age group. About 2-3% of the total population is reported to function two standard deviations below the mean IQ of the general population, i.e., below 70. Mild MR (IQ between 50 and 70) is most frequent (up to 80-85% of all MR) and is often associated with lower socioeconomic status, whereas the more severe forms occur in all social groups and in families of all educational levels. Understanding the molecular basis of MR would provide insights into human cognition and intelligence. One of the most common forms of inherited MR is fragile X syndrome, which has an estimated prevalence of about 1 in 4,000 males and 1 in 8,000 females. The syndrome is transmitted as an X-linked dominant trait with reduced penetrance (80% in males and 30% in females). The clinical presentations of fragile X syndrome include mild to severe MR, with IQ between 20 and 70, mildly abnormal facial features of a prominent jaw and large ears, mainly in males, and macro-orchidism in postpubescent males. In 1991, its molecular basis was revealed by positional cloning and shown to be associated with a massive trinucleotide repeat expansion within the gene Fragile X Mental Retardation-1 (FMR1). Loss of the Fragile X Mental Retardation Protein (FMRP) has been identified as the major cause of fragile X syndrome. FMRP, as an RNA binding protein, has been implicated in translational control. Recently, FMRP has also been linked to the microRNA (miRNA) pathway, which is involved in translational suppression. Here we will review the current knowledge on the biological functions of FMRP, and discuss the role of the miRNA pathway in the pathogenesis of fragile X syndrome and MR in general.

MICRORNAS: BIOGENESIS AND MODE OF ACTION

miRNAs are endogenous, single-stranded RNAs between 18 and 25 nucleotides in length. The biogenesis of miRNAs involves enzymatic machinery that is well conserved from animals to plants. The transcription of miRNA genes is mainly directed by RNA polymerase II, which produces primary miRNAs (pri-miRNAs). In the nucleus, the RNase III endonuclease Drosha, along with DGCR8, excises the pre-miRNA from the pri-miRNA. The pre-miRNA is transported out of the nucleus by Exportin 5. In the cytoplasm, the pre-miRNA is processed by Dicer, unwound, and loaded onto the effector complex, RISC (RNA-induced silencing complex), which then directs sequence-specific mRNA degradation or translational suppression (Fig. 1).

FIGURE 1. A model for FMRP-mediated translational suppression of mRNA targets via the miRNA pathway in the neuron. In the nucleus, miRNA genes are transcribed to generate pri-miRNAs, which are processed into the hairpin-structured pre-miRNAs by Drosha along with DGCR8.
After being transported to the cytoplasm by Exportin 5, pre-miRNAs are further processed by Dicer to generate the duplex miRNA. Upon unwinding, one strand of the duplex is preferentially integrated into the RNA-induced silencing complex (RISC) and directs it to mRNA targets. FMRP forms a messenger ribonucleoprotein (mRNP) complex by interacting with specific RNA transcripts and proteins. At least some of the FMRP-mRNP complexes interact with RISC, which could regulate protein synthesis in the cell body of the neuron. In addition, FMRP-containing RISC could also be transported into dendrites to regulate local protein synthesis of specific RNAs in response to synaptic stimulation signals such as metabotropic glutamate receptor activation.

miRNAs function as negative regulators of gene expression by targeting mRNAs for translational repression, degradation, or both. In plants, miRNAs usually trigger mRNA degradation by perfect base pairing to their target mRNAs, while in animals, miRNAs usually inhibit translation through imperfect base pairing. Recent data suggest that one miRNA could regulate multiple mRNA targets, while a given mRNA could be regulated by multiple miRNAs, which could provide transient and temporal specificities to translational regulation. There has been a feverish increase in the identification and characterization of miRNAs, and current data suggest that miRNAs regulate diversified biological pathways.

MIRNAS IN NEURAL DEVELOPMENT

Recent years have seen an enormous increase in the list of neuronal-specific and -enriched species of miRNAs. Numerous studies profiling miRNA expression in different tissues and cellular systems have reached a consensus that the brain displays 70% of experimentally detectable miRNAs and that, among those with tissue-specific/enriched expression patterns, half are brain-specific/enriched. During the course of development, miRNA expression profiles vary dynamically and temporally, as seen during brain development in rodents, during the differentiation of primary neurons into mature differentiated neurons, or during the differentiation of P19 embryonic carcinoma cells into neurons by retinoic acid treatment. Despite the identification of so many miRNAs specifically expressed and enriched in the nervous system, direct functional studies of these miRNAs in the mammalian nervous system are unfortunately still sparse, and most conclusions are drawn from studies in lower model systems. Studies in C. elegans have shown that the cell fate decision between two taste receptor neurons, and the maintenance of their identity, arises from a complex regulatory network of interactions between two miRNAs, miR-273 and lsy-6, and their transcription factor targets. Using loss-of-function analysis, Gao and colleagues demonstrated that miR-9a is required to regulate the precise production of neuronal precursor cells in the peripheral nervous system of Drosophila embryos and adults. Interestingly, miR-9a is conserved in humans and is highly expressed in fetal brains, suggesting that a similar mechanism may operate in mammalian neurogenesis as well. Furthermore, zebrafish lacking maternal and zygotic components of Dicer, the enzyme necessary for the biogenesis of miRNAs, display later cellular differentiation defects, notably in the brain, which are specifically rescued by reintroduction of miR-430. In a similar study, severe defects were observed in Dicer-deficient mice, which died very early in development and were depleted of pluripotent stem cells.
These two studies together suggest a crucial role for miRNAs in early embryonic patterning and morphogenesis.

There is a generalized view that de novo synthesis of proteins is required for neuronal construction and plasticity, which ultimately affects higher brain functions such as learning and memory. Various mRNAs have been shown to be transported actively and rapidly to dendrites in the form of transport-competent RNP particles. Once mRNAs arrive at their dendritic destination site, in the default state, translation is assumed to be repressed. However, in response to synaptic activity, appropriate spatiotemporal control of their translation is achieved, producing long-lasting changes in synaptic strength that, in turn, are responsible for producing long-term memory. Such local translational control is implemented at different levels and through various pathways. Interestingly, miRNA pathways have recently been identified as providing such control. Kunes and colleagues found that in dendrites of Drosophila neurons, translation of the mRNA for CaMKII-alpha (calmodulin-dependent protein kinase II) occurs in response to olfactory learning. The 3'-UTR of CaMKII-alpha contains two putative miRNA binding sites, indicating that binding of miRNAs might function to repress translation. Indeed, they showed the presence of miRNA processing machinery at synapses and further found that one of these proteins, Armitage, a component of RISC, was degraded at the synapse during learning. When Armitage is degraded in response to neuronal activity, CaMKII-alpha mRNA translation increases. Taken together, these studies suggest that persistent generation of miRNAs is required to repress translation and that neuronal activity leads to degradation of components of the RNA interference machinery such that miRNAs are no longer generated, thereby lifting translational repression. In mammals, Greenberg and colleagues investigated the expression and localization of candidate miRNAs in rat hippocampus and revealed that miR-134 was specifically located close to synaptic sites in dendrites and enriched in synaptoneurosomal preparations. Furthermore, they found that miR-134 could regulate the translation of the mRNA encoding Lim kinase 1 (LimK1), which regulates actin filament dynamics and has an important role in dendrite and spine development and maintenance. In another study, using rat primary cortical neurons, it was shown that miR-132 targets and represses p250GAP, a Rho/Rac regulator. Interestingly, the activities of both miR-134 and miR-132 were shown to be modulated in response to synaptic activity. These studies suggest that the miRNA pathway is involved in the regulation of local protein synthesis.

Neurobiology of Fragile X Syndrome

Fragile X syndrome is caused by the loss of functional FMRP. Since the molecular basis of fragile X syndrome was revealed by positional cloning, much effort has been dedicated to understanding the biological functions of FMRP and how the loss of FMRP leads to MR. FMRP belongs to a small family of highly conserved proteins referred to as the fragile X-related proteins. Proteins of this family are characterized by the presence of two hnRNP K homology (KH) domains, a cluster of arginine and glycine residues (RGG box), and both nuclear localization and nuclear export signals (Fig. 1). The KH domain and RGG box are common among RNA binding proteins.
Indeed, FMRP has been found to bind RNA homopolymers and mRNAs in vitro, indicating a potential role for FMRP in the regulation of RNA metabolism. Normally, expression of FMRP is widespread, although not necessarily ubiquitous, being most abundant in the brain and testis. In neurons, FMRP has been found localized within and at the base of dendritic spines in association with polyribosomes. This association is RNA dependent as well as microtubule dependent, indicating a role for FMRP in mRNA trafficking and dendritic development (Fig. 1). Dendritic spines are the postsynaptic compartments of most excitatory synapses in mammalian brains. There is growing evidence that induction of synaptic plasticity correlates with changes in the number and/or shape of dendritic spines. Dendritic spines in fragile X patients and Fmr1 knockout mice are denser apically, elongated, thin, and tortuous. Support for a role of FMRP in neural dysgenesis has been provided by studies identifying MAP1B, a key component of microtubule stability, as an mRNA target of FMRP. Lack of FMRP in Fmr1 knockout mice was found to cause elevated levels of MAP1B protein and increased microtubule stability in neurons. Thus, loss of FMRP results in altered microtubule dynamics that affect neural development and therefore indicates a potential role for FMRP in synaptic plasticity. Indeed, a link between abnormal dendritic spines and MR has been suggested for other MR diseases such as Down syndrome and Rett syndrome. The fact that spine defects are associated with MR in humans thus lends support to the view that local translation is a pivotal event in cognitive processes.

A role for FMRP in synaptic plasticity, particularly long-term depression (LTD), seems logical, since LTD is a protein synthesis-dependent phenomenon. Indeed, such a role for FMRP has been established. Activation of metabotropic glutamate receptor 5 (mGluR5) by the agonist DHPG stimulates LTD under normal conditions. LTD activated in this manner requires new protein synthesis but does not require transcription. These data suggest that mGluR5 stimulation allows for the removal of translational repression of transcribed and localized mRNA messages necessary for LTD. Fitting with such a model is the observation that mGluR-dependent LTD is exaggerated in Fmr1 knockout mice. It has been suggested that the presence of FMRP represses translation of proteins involved in LTD and that stimulation of mGluR5 results in the localized translation of these transcripts. Currently, the effect of mGluR5 antagonists on the balance between FMRP-mediated translational repression and mGluR5-activated protein synthesis is seen as a potential target for new therapies treating fragile X syndrome.

miRNA Pathway in Fragile X Syndrome

Initial biochemical studies in Drosophila to identify the protein components of dFmrp-containing complexes and RISC led to the identification of dFmrp as a component of RISC. Further analysis revealed specific interactions between dFmrp and two functional RISC proteins, dAGO2 and Dicer. Although dAGO2 is generally associated with siRNA-mediated gene silencing in Drosophila, the loss of dFmr1 does not seem to affect RNAi. In addition, endogenous miRNAs have been found associated with FMRP in both flies and mammals. Therefore, it has been proposed that FMRP-mediated translational suppression occurs via the miRNA pathway and involves miRNAs.
This view is further supported by the association of FMRP with a mammalian Argonaute, eIF2C2, which is itself a component of miRNA-containing mRNP complexes. In adult mouse brains, Dicer and eIF2C2 have also been observed to interact with Fmrp at postsynaptic densities. Importantly, genetic interactions between dFmr1 and components of the miRNA pathway have been demonstrated in Drosophila. dAGO1 was shown to interact dominantly with dFmr1 in dFmr1 loss-of-function and overexpression models. Overexpression of dFmr1 leads to a mildly rough eye phenotype that is the result of increased neuronal cell death. By introducing a recessive lethal allele of AGO1, containing a P-element insertion that reduces its expression level, suppression of the mild rough eye phenotype could be observed. The loss-of-function model revealed that AGO1 is required for dFmr1 regulation of synaptic plasticity. In the absence of dFmr1, pronounced synaptic overgrowth can be observed in the neuromuscular junction (NMJ) of Drosophila larvae, similar to the dendritic overgrowth seen in the brains of Fmr1 knockout mice and human patients. It was shown that while both dFmr1 and dAGO1 heterozygotes had normal NMJs, a transheterozygote displayed strong synaptic overgrowth and overelaboration of synaptic terminals. This result suggests that a limiting factor for the function of dFmr1 at synapses is the function of AGO1. Rather significantly, this directly implicates a potential role of the miRNA pathway in human disease, since dAGO1 modulates translational suppression mediated by dFmrp. A genetic interaction between dFmr1 and dAGO2 has also been demonstrated in experiments concerning the gene pickpocket1 (ppk1), which was found to control rhythmic locomotion in flies. ppk1 was found to be an mRNA target of dFmrp, and the expression of PPK1 seems to be regulated by both dFmr1 and dAGO2. Another recent study showing the involvement of dFmrp in miRNA/RNAi function comes from the identification of P body-like granules in Drosophila neurons. A set of neuronal Staufen-containing RNPs that function in the transport and translational control of mRNAs share fundamental organization and function with maternal RNA granules and somatic P bodies. Many proteins found in P bodies have been identified in them, including the RNA-degradation enzymes (Dcp1p and Xrn1p/Pacman) and components of the miRNA (Argonaute), nonsense-mediated decay (NMD), and general translational repression (Dhh1p/Me31B) pathways. Interestingly, Me31B/Dhh1p, present in P body-like granules containing Staufen and dFMR1, functions together with another dFMR1-associated P body protein (trailerhitch/Scd6p) in dFMR1-driven, Argonaute-dependent translational repression in dendritic morphogenesis and in miRNA function in vivo.

Experiments concerning the role of FMRP-KH2 in binding "kissing complex" RNA for FMRP-mediated translational suppression may hint at a mechanism by which FMRP cooperates with the miRNA pathway. In vitro RNA selection experiments identified the "kissing complex" as a structure that specifically interacts with the KH2 domain of FMRP and prevents association of FMRP with polyribosomes. However, it is unclear whether the KH2 domain of FMRP binds to inter- or intramolecular "kissing complexes".
Since the in vitro RNA selection experiments were performed with random RNA libraries, it remains possible that the "kissing complex" identified represents an intermolecular interaction, whereby FMRP-KH2 stabilizes a duplex between a miRNA and an mRNA target, which would, therefore, suppress translation of that target. Although the current data strongly support the idea that FMRP could regulate the translation of its mRNA targets through miRNA interactions, the exact mechanism of its action together with RISC is unfortunately not clear at present. However, one could envisage some reasons for this kind of mechanism. The association of miRNA, FMRP, and RISC could combine and modify their classical properties so that they emerge as a single entity with dynamic, quick, and reversible features suitable for local protein synthesis-dependent synaptic plasticity. For example, FMRP has an intrinsic ability to discriminate among different RNAs, which could lead RISC to interact with different combinations of RNAs. The RISC core, in turn, could modify or diversify the translational activity of FMRP. While AGO proteins and FMRP have generally been thought to mediate translational repression, studies from Vasudevan and Steitz have added another layer of complexity to the mechanism of translation. Under specific cellular conditions, Argonaute 2 (AGO2) and fragile X mental retardation-related protein 1 (FXR1) can also act together as translation activators. Both AGO2 and FXR1 simultaneously bind to the AU-rich element in the 3'-UTR of TNF-alpha mRNA and activate its translation in a cell growth-dependent manner. In this context, it will be interesting to determine whether the previously identified complexes of AGO proteins and FMRP form similar complexes to up-regulate translation of mRNAs and, if so, what their partners are.

CONCLUDING REMARKS

The accumulated data from genetic, biochemical, and molecular studies suggest that FMRP could utilize the miRNA pathway to regulate the translation of specific mRNAs, particularly in local protein synthesis at dendrites. Given the link between abnormal dendritic spines and MR, it is very likely that misregulation of the miRNA pathway could contribute to the disease pathogenesis of MR in general, and it would be interesting and important to examine further other types of inherited MR, such as Rett syndrome and Down syndrome, to determine the potential involvement of the miRNA pathway in their pathogenesis.
// iehl/src/application/renderer/settings/ui.hpp

#pragma once

struct UiSettings {
    bool show_settings = false;
};
from keras.models import Model
from keras.layers import Flatten, Dense, BatchNormalization, Dropout
from keras.callbacks import ReduceLROnPlateau, EarlyStopping


def get_probe(model, layer_name):
    # Freeze the base model so that only the probe layers are trained.
    for layer in model.layers:
        layer.trainable = False

    input_tensor = model.input
    attach_tensor = model.get_layer(name=layer_name).output
    nb_classes = int(model.output.shape[1])
    # print(nb_classes)

    # Define the probe: batch-normalize the tapped activations, flatten
    # convolutional feature maps, then classify with a softmax layer.
    if len(attach_tensor.shape) >= 3:
        bn = BatchNormalization(axis=3, name='pbn')(attach_tensor)
        f = Flatten(name='pflat')(bn)
    else:
        f = BatchNormalization(axis=1, name='pbn')(attach_tensor)
        # f = attach_tensor
    drop = Dropout(.2, name='pdrop')(f)
    d = Dense(nb_classes, activation='softmax', name='psoft')(drop)
    prob = Model(input_tensor, d)
    return prob


# Halve the learning rate when validation accuracy plateaus.
lr_reducer = ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=10,
                               verbose=1, cooldown=4, min_lr=1e-7)
# e_stop = EarlyStopping(monitor='val_acc', min_delta=0.0002, patience=15, verbose=1)


def probe(probe_model, X_train, Y_train, X_test, Y_test, nb_batch=32, nb_epoch=80):
    probe_model.compile(loss='categorical_crossentropy', optimizer='adam',
                        metrics=['accuracy'])
    hist = probe_model.fit(X_train, Y_train, batch_size=nb_batch,
                           nb_epoch=nb_epoch,  # Keras 1 API; use epochs= in Keras 2
                           validation_data=(X_test, Y_test), verbose=2,
                           callbacks=[lr_reducer])
    # accs = sorted(hist.history['val_acc'])[-10:]
    # acc = max(accs)
    # Report the best validation accuracy reached during training.
    mes = max(hist.history['val_acc'])
    print(mes)
    return mes
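A minimal usage sketch of the two utilities above. The saved model file, the tapped layer name, and the CIFAR-10 data pipeline are illustrative assumptions, not part of the original snippet; the `to_categorical` call follows the same Keras 1-era API the snippet itself uses:

# Hypothetical driver for the probe utilities above (file name and layer
# name are assumptions for illustration).
from keras.models import load_model
from keras.datasets import cifar10
from keras.utils import np_utils

# Assumes a previously trained classifier saved as 'base_model.h5' and an
# intermediate layer named 'block3_conv1' whose activations we want to probe.
base_model = load_model('base_model.h5')

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)

# Attach a linear probe at the chosen layer; only the probe layers train.
p = get_probe(base_model, 'block3_conv1')
best_val_acc = probe(p, X_train, Y_train, X_test, Y_test, nb_batch=64, nb_epoch=40)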
// src/main/java/org/gr1m/mc/mup/bugfix/mc5694/IPlayerInteractionManager.java

package org.gr1m.mc.mup.bugfix.mc5694;

public interface IPlayerInteractionManager {
    void setClientInstaMined(boolean instaMined);
}
// Copyright 2020 The Google Research Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// A fancier version of Greedy:
// Keeps a solution always ready and recomputes it when elements arrive/leave.
// Faster, but keeps k "prefix" copies of function f, so uses more memory.
// Should be equivalent to SimpleGreedy in what it outputs.

#include "greedy_algorithm.h"

#include <algorithm>
#include <utility>

void Greedy::InsertIntoSolution(int element) {
  partial_F_.emplace_back(partial_F_.back()->Clone());
  double delta_e = partial_F_.back()->AddAndIncreaseOracleCall(element, -1);
  solution_.push_back(element);
  obj_vals_.push_back(obj_vals_.back() + delta_e);
}

// Remove the last element from the solution.
void Greedy::RemoveLastElement() {
  if (solution_.empty()) {
    Fail("trying to remove element from empty solution");
  }
  obj_vals_.pop_back();
  solution_.pop_back();
  partial_F_.pop_back();
}

// Complete the solution to k elements.
void Greedy::Complete() {
  while (static_cast<int>(solution_.size()) < cardinality_k_) {
    // Select the best element to add.
    std::pair<double, int> best(-1, -1);
    for (int x : available_elements_) {
      best = std::max(
          best,
          std::make_pair(partial_F_.back()->DeltaAndIncreaseOracleCall(x), x));
    }
    if (best.first < 1e-11) {
      // Nothing worth adding.
      break;
    } else {
      InsertIntoSolution(best.second);
    }
  }
}

void Greedy::Init(const SubmodularFunction& sub_func_f, int cardinality_k) {
  cardinality_k_ = cardinality_k;
  partial_F_.clear();
  partial_F_.emplace_back(sub_func_f.Clone());
  solution_.clear();
  obj_vals_ = {0.0};
  available_elements_.clear();
}

void Greedy::Insert(int element) {
  available_elements_.insert(element);
  // Check if e should have been inserted at some point (i.e., if we had run
  // greedy with e present).
  for (int i = 0;
       i < std::min(cardinality_k_, static_cast<int>(partial_F_.size()));
       ++i) {
    // If full: partial_F[0], ..., partial_F[k], we go 0..k-1 (solution is
    // 0..k-1); not full: partial_F[0], ..., partial_F[l], we go 0..l (solution
    // is 0..l-1). Should we add it as the (i+1)-th element?
    double old_delta = (i == static_cast<int>(solution_.size()))
                           ? 0
                           : (obj_vals_[i + 1] - obj_vals_[i]);
    if (partial_F_[i]->DeltaAndIncreaseOracleCall(element) > old_delta) {
      // Yes: remove later elements until only i remain, insert, re-complete.
      while (static_cast<int>(solution_.size()) > i) {
        RemoveLastElement();
      }
      InsertIntoSolution(element);
      Complete();
      break;
    }
  }
}

void Greedy::Erase(int element) {
  if (!available_elements_.count(element)) return;
  available_elements_.erase(element);
  // Check if it was part of the solution.
  for (int i = 0; i < static_cast<int>(solution_.size()); ++i) {
    if (element == solution_[i]) {
      while (static_cast<int>(solution_.size()) > i) {
        RemoveLastElement();
      }
      Complete();
      break;
    }
  }
}

double Greedy::GetSolutionValue() { return obj_vals_.back(); }

std::vector<int> Greedy::GetSolutionVector() { return solution_; }

std::string Greedy::GetAlgorithmName() const { return "greedy (optimized)"; }
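A hedged sketch of how this class might be driven from a streaming setting. It assumes only the SubmodularFunction interface implied by the calls above (Clone, AddAndIncreaseOracleCall, DeltaAndIncreaseOracleCall); the CoverageFunction implementation, its header, and its constructor are illustrative assumptions, not part of this repository:

// Hypothetical driver for the dynamic Greedy above. CoverageFunction and
// its header are assumed stand-ins for a concrete SubmodularFunction.
#include "greedy_algorithm.h"
#include "coverage_function.h"  // assumed: provides CoverageFunction

#include <iostream>

int main() {
  CoverageFunction f(/*num_sets=*/100);  // illustrative construction
  Greedy greedy;
  greedy.Init(f, /*cardinality_k=*/10);

  // Stream elements in; the solution is maintained incrementally, with the
  // k prefix copies of f letting Insert splice an element mid-solution.
  for (int e = 0; e < 100; ++e) greedy.Insert(e);

  // Elements can also leave the stream; Erase repairs the solution suffix.
  greedy.Erase(42);

  std::cout << greedy.GetAlgorithmName()
            << ": value = " << greedy.GetSolutionValue() << "\n";
  for (int e : greedy.GetSolutionVector()) std::cout << e << " ";
  std::cout << std::endl;
  return 0;
}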
Temporal changes in total and metabolically active ruminal methanogens in dairy cows supplemented with 3-nitrooxypropanol. The purpose of this study was to investigate the effect of 3-nitrooxypropanol (3-NOP), a potent methane inhibitor, on total and metabolically active methanogens in the rumen of dairy cows over the course of the day and over a 12-wk period. Rumen contents of 8 ruminally cannulated early-lactation dairy cows were sampled at 2, 6, and 10 h after feeding during wk 4, 8, and 12 of a randomized complete block design experiment in which 3-NOP was fed at 60 mg/kg of feed dry matter. Cows (4 fed the control and 4 fed the 3-NOP diet) were blocked based on their previous lactation milk yield or predicted milk yield. Rumen samples were extracted for microbial DNA (total populations) and microbial RNA (metabolically active populations), PCR amplified for the archaeal 16S rRNA gene, sequenced on an Illumina platform, and analyzed for archaeal diversity. In addition, the 16S rRNA gene copy number and 3 ruminal methanogen species were quantified using a real-time quantitative PCR assay. We detected a difference between the DNA- and RNA (cDNA)-based archaeal communities, revealing that ruminal methanogens differ in their metabolic activities. Within the DNA and cDNA components, methanogenic communities differed by sampling hour, week, and treatment. Overall, Methanobrevibacter was the dominant genus (94.3%), followed by Methanosphaera, with the latter genus having greater abundance in the cDNA component (14.5%) than in the total populations (5.5%). Methanosphaera abundance was higher at 2 h after feeding, whereas Methanobrevibacter increased at 6 and 10 h in both groups, showing diurnal patterns among individual methanogenic lineages. Methanobrevibacter was reduced at wk 4, whereas Methanosphaera was reduced at wk 8 and 12 in cows supplemented with 3-NOP compared with control cows, suggesting differential responses among methanogens to 3-NOP. A reduction in Methanobrevibacter ruminantium in all 3-NOP samples from wk 8 was confirmed using real-time quantitative PCR. The relative abundance of individual methanogens was driven by a combination of dietary composition, dry matter intake, and hydrogen concentration in the rumen. This study provides novel information on the effects of 3-NOP on individual methanogenic lineages, but further studies are needed to understand their temporal dynamics and to validate the effects of 3-NOP on individual lineages of ruminal methanogens.
Pet fountains that create flowing water for attracting pets are well known, and there have been a number of commercially successful pet fountains. Exemplary pet fountains generally include a spout for providing a continuous flow of water from a reservoir to a container such that the pet is able to drink either directly from the flowing water stream or from the container. The stream of flowing water is formed as water flows from an elevated spout down into the lower-positioned container. Electric pumps are commonly used to recirculate water within the pet fountain by drawing water from the container and pushing it through the spout. The movement of the water generated by the pump also allows the water to pass through an internal filter, which removes contaminants as the water is recirculated through the fountain.

One of the drawbacks of conventional pet fountains, such as those described above, is that they limit water movement. To allow a pet easy access to the flowing water, the outer surface of the pet fountain, along which the water travels, is not covered. As a result, the force with which the water exits the spout must be limited to prevent water from overflowing onto, or splashing, the area surrounding the pet fountain. However, this limitation on the force of the discharged water decreases the sound and movement of the flowing water, which are useful in attracting pets to the fountain. As a result, efforts have been made to design and manufacture pet fountains with an outer surface that decreases water overflow and splashing while simultaneously maximizing water movement and sound.

For example, one proposed pet fountain is composed of a basin for holding a volume of water and a cover that sits atop the basin. The cover has a recessed portion that forms a drinking bowl for holding a smaller volume of water, which, in combination with the basin, provides two different water supplies from which a pet may drink. A pump is contained beneath the cover and is operative to draw water from the basin and pump it to the drinking bowl within the cover via an elevated stream. As the volume of water pumped to the cover exceeds the holding capacity of the drinking bowl, water begins to waterfall from the cover back into the basin, creating both movement and sound. While such pet fountain designs improve the movement of water along the outer surface of the fountain, they are still limited in that the water is kept in close proximity to the fountain's outer surface as it flows into the basin. Thus, there is a need for an apparatus that elevates the water above the discharge spout, increasing the water's movement and sound to attract a pet, while preventing undesirable splashing or overflow in the area around the pet fountain.