# notebooks/local_lib_import.py
from pathlib import Path
import sys

# Make the repository root importable from notebooks that live one level down.
root_lib_path_str = str(Path.cwd().parent)
if root_lib_path_str not in sys.path:
    sys.path.insert(0, root_lib_path_str)
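A plausible way this helper is consumed, given its location in notebooks/ (an assumption; the cell below is illustrative, not taken from the repository):

# Hypothetical notebook cell: importing the helper once triggers its
# sys.path side effect, after which packages at the repository root resolve.
import local_lib_import  # noqa: F401  (imported only for the side effect)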
/**
 * @param bookStoreEntity
 * @return the object added to the list
 */
@PostMapping
public ResponseEntity<BookStoreEntity> store(@RequestBody BookStoreEntity bookStoreEntity) {
    return new ResponseEntity<BookStoreEntity>(
            this.livrariaRepository.save(bookStoreEntity),
            new HttpHeaders(),
            HttpStatus.CREATED
    );
}
def is_path_excluded(self, path):
    if path == '-':
        if self.options.stdin_display_name == 'stdin':
            return False
        path = self.options.stdin_display_name

    exclude = self.options.exclude
    if not exclude:
        return False

    basename = os.path.basename(path)
    if utils.fnmatch(basename, exclude):
        LOG.debug('"%s" has been excluded', basename)
        return True

    absolute_path = os.path.abspath(path)
    match = utils.fnmatch(absolute_path, exclude)
    LOG.debug('"%s" has %sbeen excluded', absolute_path,
              '' if match else 'not ')
    return match
Stereotactic body radiotherapy for prostate cancer: current results of a phase II trial. Hypofractionated stereotactic body radiotherapy (SBRT) for prostate cancer has become a broad topic, and there are many aspects to consider before accepting this treatment into our clinics. Among the considerations are the data from the Stanford phase II trial, a seminal investigation in this area, which are presented and reviewed here. A single-arm, prospective phase II trial was initiated at Stanford in December 2003. This trial uses SBRT as monotherapy for 'low-risk' prostate cancer, and 69 patients have been entered to date. We have analyzed the patient data for the first 5 years of this study. For study entry, patients were required to have clinical stage T1c or T2a disease, prostate-specific antigen (PSA) ≤ 10 ng/ml and a Gleason score of 3 + 3 (or 3 + 4 if the higher-grade portion was of small volume, usually <25% of the cores involved). No prior treatment was permitted, including transurethral resection or androgen deprivation therapy. A low urinary symptom burden, with an International Prostate Symptom Score (IPSS) < 20, was also required for study entry. The prescription dose was 7.25 Gy per fraction for 5 fractions, giving a total dose of 36.25 Gy, normalized so that ≥ 95% of the planning target volume was covered by 100% of the prescription dose. Patients were treated using CyberKnife technology. To date, excellent PSA responses have been observed in the lower-risk patients selected for treatment and receiving 36.25 Gy in 5 fractions. Sexual quality-of-life outcomes have also been approximately comparable to those of other radiotherapy approaches. Rates of late gastrointestinal (GI) and genitourinary (GU) toxicity have been relatively low and generally comparable to dose-escalated approaches using conventional fractionation.
/** @file
 * Copyright (c) 2020, Arm Limited or its affiliates. All rights reserved.
 * SPDX-License-Identifier : Apache-2.0
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
**/

#include "include/pal_common_support.h"
#include "include/platform_override_fvp.h"

extern PLATFORM_OVERRIDE_IOVIRT_INFO_TABLE platform_iovirt_cfg;
extern PLATFORM_OVERRIDE_NODE_DATA platform_node_type;

uint64_t
pal_iovirt_get_rc_smmu_base(IOVIRT_INFO_TABLE *Iovirt, uint32_t RcSegmentNum)
{
  return 0;
}

/**
  @brief Check if the context bank interrupt ids for this smmu node are unique
  @param ctx_int     Context bank interrupt array
  @param ctx_int_cnt Number of elements in the array
  @return 1 if the IDs are unique else 0
**/
static uint8_t
smmu_ctx_int_distinct(uint64_t *ctx_int, uint8_t ctx_int_cnt)
{
  uint8_t i, j;

  for (i = 0; i < ctx_int_cnt - 1; i++) {
    for (j = i + 1; j < ctx_int_cnt; j++) {
      if (*((uint32_t *)&ctx_int[i]) == *((uint32_t *)&ctx_int[j]))
        return 0;
    }
  }
  return 1;
}

void
pal_iovirt_create_info_table(IOVIRT_INFO_TABLE *IoVirtTable)
{
  uint64_t iort;
  IOVIRT_BLOCK *block;
  NODE_DATA_MAP *data_map;
  uint32_t j, i = 0;

  if (IoVirtTable == NULL)
    return;

  /* Initialize counters */
  IoVirtTable->num_blocks = 0;
  IoVirtTable->num_smmus = 0;
  IoVirtTable->num_pci_rcs = 0;
  IoVirtTable->num_named_components = 0;
  IoVirtTable->num_its_groups = 0;
  IoVirtTable->num_pmcgs = 0;

  iort = platform_iovirt_cfg.Address;
  if (!iort) {
    return;
  }

  block = &(IoVirtTable->blocks[0]);

  for (i = 0; i < platform_iovirt_cfg.node_count; i++, block = IOVIRT_NEXT_BLOCK(block)) {
    block->type = platform_iovirt_cfg.type[i];
    block->flags = 0;
    switch (platform_iovirt_cfg.type[i]) {
      case IOVIRT_NODE_ITS_GROUP:
        block->data.its_count = platform_node_type.its_count;
        block->num_data_map = (block->data.its_count + 3) / 4;
        IoVirtTable->num_its_groups++;
        break;
      case IOVIRT_NODE_NAMED_COMPONENT:
        block->num_data_map = platform_iovirt_cfg.num_map[i];
        IoVirtTable->num_named_components++;
        break;
      case IOVIRT_NODE_PCI_ROOT_COMPLEX:
        block->data.rc.segment = platform_node_type.rc.segment;
        block->data.rc.cca = (platform_node_type.rc.cca & IOVIRT_CCA_MASK);
        block->data.rc.ats_attr = platform_node_type.rc.ats_attr;
        block->num_data_map = platform_iovirt_cfg.num_map[i];
        IoVirtTable->num_pci_rcs++;
        break;
      case IOVIRT_NODE_SMMU:
        block->data.smmu.base = platform_node_type.smmu.base;
        block->data.smmu.arch_major_rev = 2;
        block->num_data_map = platform_iovirt_cfg.num_map[i];
        if (!smmu_ctx_int_distinct(&platform_node_type.smmu.context_interrupt_offset,
                                   platform_node_type.smmu.context_interrupt_count)) {
          block->flags |= (1 << IOVIRT_FLAG_SMMU_CTX_INT_SHIFT);
        }
        IoVirtTable->num_smmus++;
        break;
      case IOVIRT_NODE_SMMU_V3:
        block->data.smmu.base = platform_node_type.smmu.base;
        block->data.smmu.arch_major_rev = 3;
        block->num_data_map = platform_iovirt_cfg.num_map[i];
        IoVirtTable->num_smmus++;
        break;
      case IOVIRT_NODE_PMCG:
        block->num_data_map = platform_iovirt_cfg.num_map[i];
        IoVirtTable->num_pmcgs++;
        break;
      default:
        print(AVS_PRINT_ERR, "Invalid IORT node type\n");
        return;
    }
    if (platform_iovirt_cfg.type[i] != IOVIRT_NODE_ITS_GROUP) {
      data_map = (NODE_DATA_MAP *)&(block->data_map[0]);
      for (j = 0; j < block->num_data_map; j++) {
        (*data_map).map.input_base = platform_iovirt_cfg.map[i].input_base[j];
        (*data_map).map.id_count = platform_iovirt_cfg.map[i].id_count[j];
        (*data_map).map.output_base = platform_iovirt_cfg.map[i].output_base[j];
        (*data_map).map.output_ref = platform_iovirt_cfg.map[i].output_ref[j];
        data_map++;
      }
    }
    IoVirtTable->num_blocks++;
  }
}

/**
  @brief Check if given SMMU node has unique context bank interrupt ids
  @param smmu_block smmu IOVIRT block base address
  @return 0 if test fails, 1 if test passes
**/
uint32_t
pal_iovirt_check_unique_ctx_intid(uint64_t smmu_block)
{
  IOVIRT_BLOCK *block = (IOVIRT_BLOCK *)smmu_block;

  /* This test has already been done while parsing IORT */
  /* Check the flags to get the test result */
  if (block->flags & (1 << IOVIRT_FLAG_SMMU_CTX_INT_SHIFT)) {
    return 0;
  }
  return 1;
}

uint32_t
pal_iovirt_unique_rid_strid_map(uint64_t rc_block)
{
  IOVIRT_BLOCK *block = (IOVIRT_BLOCK *)rc_block;

  if (block->flags & (1 << IOVIRT_FLAG_STRID_OVERLAP_SHIFT))
    return 0;
  return 1;
}
from __future__ import print_function

import json
import sys

from corpus_utils import pretty_print_examples


def read_parsed_corpus(fname):
    ls = []
    with open(fname) as f:
        obj = json.load(f)
    while 'tail' in obj:
        ls.append(obj['head'])
        obj = obj['tail']
    ls.append(obj['head'])
    return ls


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print('Usage: {} <corpus json file>'.format(sys.argv[0]))
        sys.exit(-1)
    examples = read_parsed_corpus(sys.argv[1])
    pretty_print_examples(examples)
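For illustration, read_parsed_corpus walks a nested head/tail (cons-list) layout; a minimal sketch of such an input, with made-up payloads (the "id" fields are hypothetical, not part of the real corpus format):

# Hypothetical corpus object in the head/tail shape the loop above consumes;
# the innermost node has no 'tail', which terminates the while loop.
corpus = {
    "head": {"id": 1},
    "tail": {
        "head": {"id": 2},
        "tail": {"head": {"id": 3}},
    },
}
# Flattened in the same way as read_parsed_corpus, the heads come out in order:
# [{"id": 1}, {"id": 2}, {"id": 3}]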
package main

import (
	"bytes"
	"context"
	"errors"
	"path"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	pb "github.com/prysmaticlabs/prysm/proto/cluster"
	"github.com/prysmaticlabs/prysm/shared/bls"
	"github.com/prysmaticlabs/prysm/shared/bytesutil"
	"github.com/prysmaticlabs/prysm/shared/keystore"
	"github.com/prysmaticlabs/prysm/shared/params"
	bolt "go.etcd.io/bbolt"
)

var (
	allocatedPkCount = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "allocated_pk_count",
		Help: "The number of allocated private keys",
	})
	assignedPkCount = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "assigned_pk_count",
		Help: "The number of private keys currently assigned to alive pods",
	})
	bannedPKCount = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "banned_pk_count",
		Help: "The number of private keys which have been removed that are of exited validators",
	})
)

var (
	dbFileName         = "pk.db"
	assignedPkBucket   = []byte("assigned_pks")
	unassignedPkBucket = []byte("unassigned_pks")
	deletedKeysBucket  = []byte("deleted_pks")
	dummyVal           = []byte{1}
)

type keyMap struct {
	podName    string
	privateKey []byte
	index      int
}

type db struct {
	db *bolt.DB
}

func newDB(dbPath string) *db {
	datafile := path.Join(dbPath, dbFileName)
	boltdb, err := bolt.Open(datafile, params.BeaconIoConfig().ReadWritePermissions, &bolt.Options{Timeout: params.BeaconIoConfig().BoltTimeout})
	if err != nil {
		panic(err)
	}

	// Initialize buckets
	if err := boltdb.Update(func(tx *bolt.Tx) error {
		for _, bkt := range [][]byte{assignedPkBucket, unassignedPkBucket, deletedKeysBucket} {
			if _, err := tx.CreateBucketIfNotExists(bkt); err != nil {
				return err
			}
		}
		return nil
	}); err != nil {
		panic(err)
	}

	// Populate metrics on start.
	if err := boltdb.View(func(tx *bolt.Tx) error {
		// Populate banned key count.
		bannedPKCount.Set(float64(tx.Bucket(deletedKeysBucket).Stats().KeyN))

		keys := 0
		// Iterate over all of the pod assigned keys (one to many).
		c := tx.Bucket(assignedPkBucket).Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			pks := &pb.PrivateKeys{}
			if err := proto.Unmarshal(v, pks); err != nil {
				log.WithError(err).Error("Unable to unmarshal private key")
				continue
			}
			keys += len(pks.PrivateKeys)
		}
		assignedPkCount.Set(float64(keys))

		// Add the unassigned keys count (one to one).
		keys += tx.Bucket(unassignedPkBucket).Stats().KeyN
		allocatedPkCount.Add(float64(keys))
		return nil
	}); err != nil {
		panic(err)
	}

	return &db{db: boltdb}
}

// UnallocatedPKs returns unassigned private keys, if any are available.
func (d *db) UnallocatedPKs(_ context.Context, numKeys uint64) (*pb.PrivateKeys, error) {
	pks := &pb.PrivateKeys{}
	if err := d.db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket(unassignedPkBucket).Cursor()
		i := uint64(0)
		for k, _ := c.First(); k != nil && i < numKeys; k, _ = c.Next() {
			pks.PrivateKeys = append(pks.PrivateKeys, k)
			i++
		}
		return nil
	}); err != nil {
		return nil, err
	}
	return pks, nil
}

func (d *db) DeleteUnallocatedKey(_ context.Context, privateKey []byte) error {
	return d.db.Update(func(tx *bolt.Tx) error {
		if err := tx.Bucket(unassignedPkBucket).Delete(privateKey); err != nil {
			return err
		}
		if err := tx.Bucket(deletedKeysBucket).Put(privateKey, dummyVal); err != nil {
			return err
		}
		bannedPKCount.Inc()
		allocatedPkCount.Dec()
		return nil
	})
}

// PodPKs returns the private keys assigned to the given pod name, if any exist.
func (d *db) PodPKs(_ context.Context, podName string) (*pb.PrivateKeys, error) {
	pks := &pb.PrivateKeys{}
	if err := d.db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(assignedPkBucket).Get([]byte(podName))
		return proto.Unmarshal(b, pks)
	}); err != nil {
		return nil, err
	}
	return pks, nil
}

// AllocateNewPkToPod records a new private key assignment in the DB.
func (d *db) AllocateNewPkToPod(
	_ context.Context,
	pk *keystore.Key,
	podName string,
) error {
	allocatedPkCount.Inc()
	assignedPkCount.Inc()
	return d.db.Update(func(tx *bolt.Tx) error {
		pks := &pb.PrivateKeys{}
		if b := tx.Bucket(assignedPkBucket).Get([]byte(podName)); b != nil {
			if err := proto.Unmarshal(b, pks); err != nil {
				return err
			}
		}
		pks.PrivateKeys = append(pks.PrivateKeys, pk.SecretKey.Marshal())
		b, err := proto.Marshal(pks)
		if err != nil {
			return err
		}
		return tx.Bucket(assignedPkBucket).Put(
			[]byte(podName),
			b,
		)
	})
}

// RemovePKAssignment removes a pod's private key assignments and moves the
// keys into the unassigned bucket.
func (d *db) RemovePKAssignment(_ context.Context, podName string) error {
	return d.db.Update(func(tx *bolt.Tx) error {
		data := tx.Bucket(assignedPkBucket).Get([]byte(podName))
		if data == nil {
			log.WithField("podName", podName).Warn("Nil private key returned from db")
			return nil
		}
		pks := &pb.PrivateKeys{}
		if err := proto.Unmarshal(data, pks); err != nil {
			log.WithError(err).Error("Failed to unmarshal pks, deleting from db")
			return tx.Bucket(assignedPkBucket).Delete([]byte(podName))
		}
		if err := tx.Bucket(assignedPkBucket).Delete([]byte(podName)); err != nil {
			return err
		}
		assignedPkCount.Sub(float64(len(pks.PrivateKeys)))
		for _, pk := range pks.PrivateKeys {
			if err := tx.Bucket(unassignedPkBucket).Put(pk, dummyVal); err != nil {
				return err
			}
		}
		return nil
	})
}

// AssignExistingPKs assigns PKs from the unassigned bucket to a given pod.
func (d *db) AssignExistingPKs(_ context.Context, pks *pb.PrivateKeys, podName string) error {
	return d.db.Update(func(tx *bolt.Tx) error {
		for _, pk := range pks.PrivateKeys {
			if bytes.Equal(tx.Bucket(unassignedPkBucket).Get(pk), dummyVal) {
				if err := tx.Bucket(unassignedPkBucket).Delete(pk); err != nil {
					return err
				}
			}
		}
		assignedPkCount.Add(float64(len(pks.PrivateKeys)))
		// If a pod assignment already exists, merge its keys into the new set.
		// Note: the merge must happen when unmarshaling succeeds (err == nil),
		// otherwise a successful read would silently drop the existing keys.
		if existing := tx.Bucket(assignedPkBucket).Get([]byte(podName)); existing != nil {
			existingKeys := &pb.PrivateKeys{}
			if err := proto.Unmarshal(existing, existingKeys); err == nil {
				pks.PrivateKeys = append(pks.PrivateKeys, existingKeys.PrivateKeys...)
			}
		}
		data, err := proto.Marshal(pks)
		if err != nil {
			return err
		}
		return tx.Bucket(assignedPkBucket).Put([]byte(podName), data)
	})
}

// AllocatedPodNames returns the string list of pod names with current private
// key allocations.
func (d *db) AllocatedPodNames(_ context.Context) ([]string, error) {
	var podNames []string
	if err := d.db.View(func(tx *bolt.Tx) error {
		return tx.Bucket(assignedPkBucket).ForEach(func(k, v []byte) error {
			podNames = append(podNames, string(k))
			return nil
		})
	}); err != nil {
		return nil, err
	}
	return podNames, nil
}

func (d *db) Allocations() (map[string][][]byte, error) {
	m := make(map[string][][]byte)
	if err := d.db.View(func(tx *bolt.Tx) error {
		return tx.Bucket(assignedPkBucket).ForEach(func(k, v []byte) error {
			pks := &pb.PrivateKeys{}
			if err := proto.Unmarshal(v, pks); err != nil {
				log.WithError(err).Error("Could not unmarshal private key")
			}
			pubkeys := make([][]byte, len(pks.PrivateKeys))
			for i, pk := range pks.PrivateKeys {
				k, err := bls.SecretKeyFromBytes(pk)
				if err != nil {
					return err
				}
				pubkeys[i] = k.PublicKey().Marshal()
			}
			m[string(k)] = pubkeys
			return nil
		})
	}); err != nil {
		return nil, err
	}
	return m, nil
}

func (d *db) KeyMap() ([][]byte, map[[48]byte]keyMap, error) {
	m := make(map[[48]byte]keyMap)
	pubkeys := make([][]byte, 0)
	if err := d.db.View(func(tx *bolt.Tx) error {
		return tx.Bucket(assignedPkBucket).ForEach(func(k, v []byte) error {
			pks := &pb.PrivateKeys{}
			if err := proto.Unmarshal(v, pks); err != nil {
				return err
			}
			for i, pk := range pks.PrivateKeys {
				seckey, err := bls.SecretKeyFromBytes(pk)
				if err != nil {
					log.WithError(err).Warn("Could not deserialize secret key... removing")
					return tx.Bucket(assignedPkBucket).Delete(k)
				}
				keytoSet := bytesutil.ToBytes48(seckey.PublicKey().Marshal())
				m[keytoSet] = keyMap{
					podName:    string(k),
					privateKey: pk,
					index:      i,
				}
				pubkeys = append(pubkeys, seckey.PublicKey().Marshal())
			}
			return nil
		})
	}); err != nil {
		return nil, nil, err
	}
	return pubkeys, m, nil
}

// RemovePKFromPod removes the given key from the pod's assignment and throws
// it away.
func (d *db) RemovePKFromPod(podName string, key []byte) error {
	return d.db.Update(func(tx *bolt.Tx) error {
		data := tx.Bucket(assignedPkBucket).Get([]byte(podName))
		if data == nil {
			log.WithField("podName", podName).Warn("Nil private key returned from db")
			return nil
		}
		pks := &pb.PrivateKeys{}
		if err := proto.Unmarshal(data, pks); err != nil {
			log.WithError(err).Error("Unable to unmarshal private keys, deleting assignment from db")
			return tx.Bucket(assignedPkBucket).Delete([]byte(podName))
		}
		found := false
		for i, k := range pks.PrivateKeys {
			if bytes.Equal(k, key) {
				found = true
				pks.PrivateKeys = append(pks.PrivateKeys[:i], pks.PrivateKeys[i+1:]...)
				break
			}
		}
		if !found {
			return errors.New("private key not assigned to pod")
		}
		marshaled, err := proto.Marshal(pks)
		if err != nil {
			return err
		}
		bannedPKCount.Inc()
		allocatedPkCount.Dec()
		assignedPkCount.Dec()
		nowBytes, err := time.Now().MarshalBinary()
		if err != nil {
			return err
		}
		if err := tx.Bucket(deletedKeysBucket).Put(key, nowBytes); err != nil {
			return err
		}
		return tx.Bucket(assignedPkBucket).Put([]byte(podName), marshaled)
	})
}
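A minimal usage sketch of the assignment lifecycle above, assuming it compiles in the same package; the pod name and directory are made up for illustration:

// Hypothetical walkthrough (same package): open the store, hand spare keys
// to a pod, then tear the assignment back down into the unassigned bucket.
func exampleLifecycle() error {
	d := newDB("/tmp/pk-db-example") // creates /tmp/pk-db-example/pk.db
	defer d.db.Close()

	ctx := context.Background()
	pks, err := d.UnallocatedPKs(ctx, 8)
	if err != nil {
		return err
	}
	if err := d.AssignExistingPKs(ctx, pks, "example-pod"); err != nil {
		return err
	}
	return d.RemovePKAssignment(ctx, "example-pod")
}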
GMC's case against fertility doctor collapses

The General Medical Council's case against a UK fertility specialist collapsed on 24 October, when a GMC fitness to practise panel ruled that the evidence against Mohamed Taranissi could not sustain a misconduct charge. Mr Taranissi, founder of the Assisted Reproduction and Gynaecology Centre, London, was accused of inappropriately pressuring one patient to take adalimumab, a drug not licensed for fertility treatment. He was also accused of failing to adequately treat a patient who attended his clinic distressed and nauseous and who later collapsed and was admitted to intensive care with severe hyponatraemia (BMJ 2008;337:a1995, 6 Oct, doi:10.1136/bmj.a1995). The panel's chairman, Harvey Marcovitch,
package saneryee.spring.data.simplejdbcdemo;

import lombok.Builder;
import lombok.Data;

// Lombok @Data generates all the boilerplate that is normally associated with simple POJOs.
// It includes: @ToString, @EqualsAndHashCode, @Getter/@Setter and @RequiredArgsConstructor.
// Lombok @Builder lets you automatically produce the code required to have your class be
// instantiable with code such as:
// Person.builder().name("<NAME>").city("San Francisco").job("Mythbusters").job("Unchained Reaction").build();
@Data
@Builder
public class Foo {
    private Long id;
    private String bar;
}
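As a quick illustration of the API Lombok generates for Foo (a sketch; the values are arbitrary):

// Hypothetical usage of the generated builder, setter and toString().
Foo foo = Foo.builder()
        .id(1L)
        .bar("baz")
        .build();            // fluent construction from @Builder
foo.setBar("qux");           // setter generated by @Data
System.out.println(foo);     // @Data's toString() prints: Foo(id=1, bar=qux)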
Antidepressive effects of the κ-opioid receptor agonist salvinorin A in a rat model of anhedonia Salvinorin A (SalvA), the hallucinogenic derivative of the plant Salvia divinorum, is a selective κ-opioid receptor agonist that may also have antidepressant properties. Chronic mild stress (CMS) was applied to male and female Long-Evans rats to model the anhedonia common in depression. The progressive loss in preference for a sucrose solution over plain water, a measure of anhedonia, and locomotor activity were monitored for 7 weeks. Because antidepressant medications often modify reproductive functions, endocrine glands and hormone-sensitive tissues were assessed at necropsy after the conclusion of the behavioral protocol. Three weeks of CMS exposure led to a decrease in sucrose preference. CMS was continued for 3 additional weeks and animals were randomly assigned to treatment with 1 mg SalvA/kg body weight or to a vehicle control group. The results indicate that SalvA reversed anhedonia, whereas control animals continued to show a suppressed preference for the sucrose solution. In addition, no change in sucrose preference was observed in nonstressed rats that were exposed to the same dosage of SalvA. The results indicate that SalvA is an effective antidepressant agent when administered chronically to rats showing symptoms of depression similar to those observed in humans.
<reponame>ivanst0/onnxruntime /* * Copyright (c) 2019, 2020 Oracle and/or its affiliates. All rights reserved. * Licensed under the MIT License. */ #include <jni.h> #include <stdio.h> #include "OrtJniUtil.h" jint JNI_OnLoad(JavaVM *vm, void *reserved) { // To silence unused-parameter error. // This function must exist according to the JNI spec, but the arguments aren't necessary for the library to request a specific version. (void)vm; (void) reserved; // Requesting 1.6 to support Android. Will need to be bumped to a later version to call interface default methods // from native code, or to access other new Java features. return JNI_VERSION_1_6; } /** * Must be kept in sync with ORT_LOGGING_LEVEL and the OrtLoggingLevel java enum */ OrtLoggingLevel convertLoggingLevel(jint level) { switch (level) { case 0: return ORT_LOGGING_LEVEL_VERBOSE; case 1: return ORT_LOGGING_LEVEL_INFO; case 2: return ORT_LOGGING_LEVEL_WARNING; case 3: return ORT_LOGGING_LEVEL_ERROR; case 4: return ORT_LOGGING_LEVEL_FATAL; default: return ORT_LOGGING_LEVEL_VERBOSE; } } /** * Must be kept in sync with GraphOptimizationLevel and SessionOptions#OptLevel */ GraphOptimizationLevel convertOptimizationLevel(jint level) { switch (level) { case 0: return ORT_DISABLE_ALL; case 1: return ORT_ENABLE_BASIC; case 2: return ORT_ENABLE_EXTENDED; case 99: return ORT_ENABLE_ALL; default: return ORT_DISABLE_ALL; } } /** * Must be kept in sync with ExecutionMode and SessionOptions#ExecutionMode */ ExecutionMode convertExecutionMode(jint mode) { switch (mode) { case 0: return ORT_SEQUENTIAL; case 1: return ORT_PARALLEL; default: return ORT_SEQUENTIAL; } } /** * Must be kept in sync with convertToONNXDataFormat */ jint convertFromONNXDataFormat(ONNXTensorElementDataType type) { switch (type) { case ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED: return 0; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8: // maps to c type uint8_t return 1; case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8: // maps to c type int8_t return 2; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16: // maps to c type uint16_t return 3; case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16: // maps to c type int16_t return 4; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32: // maps to c type uint32_t return 5; case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32: // maps to c type int32_t return 6; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64: // maps to c type uint64_t return 7; case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64: // maps to c type int64_t return 8; case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16: return 9; case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT: // maps to c type float return 10; case ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE: // maps to c type double return 11; case ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING: // maps to c++ type std::string return 12; case ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL: return 13; case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64: // complex with float32 real and imaginary components return 14; case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128: // complex with float64 real and imaginary components return 15; case ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16: // Non-IEEE floating-point format based on IEEE754 single-precision return 16; default: return -1; } } /** * Must be kept in sync with convertFromONNXDataFormat */ ONNXTensorElementDataType convertToONNXDataFormat(jint type) { switch (type) { case 0: return ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED; case 1: return ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8; // maps to c type uint8_t case 2: return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8; // maps to c type int8_t case 3: return 
ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16; // maps to c type uint16_t case 4: return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16; // maps to c type int16_t case 5: return ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32; // maps to c type uint32_t case 6: return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32; // maps to c type int32_t case 7: return ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64; // maps to c type uint64_t case 8: return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64; // maps to c type int64_t case 9: return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16; case 10: return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; // maps to c type float case 11: return ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE; // maps to c type double case 12: return ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING; // maps to c++ type std::string case 13: return ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL; case 14: return ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64; // complex with float32 real and imaginary components case 15: return ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128; // complex with float64 real and imaginary components case 16: return ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16; // Non-IEEE floating-point format based on IEEE754 single-precision default: return ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED; } } size_t onnxTypeSize(ONNXTensorElementDataType type) { switch (type) { case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8: // maps to c type uint8_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8: // maps to c type int8_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL: return 1; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16: // maps to c type uint16_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16: // maps to c type int16_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16: return 2; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32: // maps to c type uint32_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32: // maps to c type int32_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT: // maps to c type float return 4; case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64: // maps to c type uint64_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64: // maps to c type int64_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE: // maps to c type double return 8; case ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING: // maps to c++ type std::string case ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED: case ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16: // Non-IEEE floating-point format based on IEEE754 single-precision case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64: // complex with float32 real and imaginary components case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128: // complex with float64 real and imaginary components default: return 0; } } typedef union FP32 { int intVal; float floatVal; } FP32; jfloat convertHalfToFloat(uint16_t half) { FP32 output; output.intVal = (((half&0x8000)<<16) | (((half&0x7c00)+0x1C000)<<13) | ((half&0x03FF)<<13)); return output.floatVal; } jobject convertToValueInfo(JNIEnv *jniEnv, const OrtApi * api, OrtTypeInfo * info) { ONNXType type; checkOrtStatus(jniEnv,api,api->GetOnnxTypeFromTypeInfo(info,&type)); switch (type) { case ONNX_TYPE_TENSOR: { const OrtTensorTypeAndShapeInfo* tensorInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToTensorInfo(info,&tensorInfo)); return convertToTensorInfo(jniEnv, api, (const OrtTensorTypeAndShapeInfo *) tensorInfo); } case ONNX_TYPE_SEQUENCE: { const OrtSequenceTypeInfo* sequenceInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToSequenceTypeInfo(info,&sequenceInfo)); return convertToSequenceInfo(jniEnv, api, sequenceInfo); } case ONNX_TYPE_MAP: { const OrtMapTypeInfo* mapInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToMapTypeInfo(info,&mapInfo)); return 
convertToMapInfo(jniEnv, api, mapInfo); } case ONNX_TYPE_UNKNOWN: case ONNX_TYPE_OPAQUE: case ONNX_TYPE_SPARSETENSOR: default: { throwOrtException(jniEnv,convertErrorCode(ORT_NOT_IMPLEMENTED),"Invalid ONNXType found."); return NULL; } } } jobject convertToTensorInfo(JNIEnv *jniEnv, const OrtApi * api, const OrtTensorTypeAndShapeInfo * info) { // Extract the information from the info struct. ONNXTensorElementDataType onnxType; checkOrtStatus(jniEnv,api,api->GetTensorElementType(info,&onnxType)); size_t numDim; checkOrtStatus(jniEnv,api,api->GetDimensionsCount(info,&numDim)); //printf("numDim %d\n",numDim); int64_t* dimensions = (int64_t*) malloc(sizeof(int64_t)*numDim); checkOrtStatus(jniEnv,api,api->GetDimensions(info, dimensions, numDim)); jint onnxTypeInt = convertFromONNXDataFormat(onnxType); // Create the long array for the shape. jlongArray shape = (*jniEnv)->NewLongArray(jniEnv, numDim); (*jniEnv)->SetLongArrayRegion(jniEnv, shape, 0, numDim, (jlong*)dimensions); // Free the dimensions array free(dimensions); dimensions = NULL; // Create the ONNXTensorType enum char *onnxTensorTypeClassName = "ai/onnxruntime/TensorInfo$OnnxTensorType"; jclass clazz = (*jniEnv)->FindClass(jniEnv, onnxTensorTypeClassName); jmethodID onnxTensorTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,clazz, "mapFromInt", "(I)Lai/onnxruntime/TensorInfo$OnnxTensorType;"); jobject onnxTensorTypeJava = (*jniEnv)->CallStaticObjectMethod(jniEnv,clazz,onnxTensorTypeMapFromInt,onnxTypeInt); //printf("ONNXTensorType class %p, methodID %p, object %p\n",clazz,onnxTensorTypeMapFromInt,onnxTensorTypeJava); // Create the ONNXJavaType enum char *javaDataTypeClassName = "ai/onnxruntime/OnnxJavaType"; clazz = (*jniEnv)->FindClass(jniEnv, javaDataTypeClassName); jmethodID javaDataTypeMapFromONNXTensorType = (*jniEnv)->GetStaticMethodID(jniEnv,clazz, "mapFromOnnxTensorType", "(Lai/onnxruntime/TensorInfo$OnnxTensorType;)Lai/onnxruntime/OnnxJavaType;"); jobject javaDataType = (*jniEnv)->CallStaticObjectMethod(jniEnv,clazz,javaDataTypeMapFromONNXTensorType,onnxTensorTypeJava); //printf("JavaDataType class %p, methodID %p, object %p\n",clazz,javaDataTypeMapFromONNXTensorType,javaDataType); // Create the TensorInfo object char *tensorInfoClassName = "ai/onnxruntime/TensorInfo"; clazz = (*jniEnv)->FindClass(jniEnv, tensorInfoClassName); jmethodID tensorInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,clazz, "<init>", "([JLai/onnxruntime/OnnxJavaType;Lai/onnxruntime/TensorInfo$OnnxTensorType;)V"); //printf("TensorInfo class %p, methodID %p\n",clazz,tensorInfoConstructor); jobject tensorInfo = (*jniEnv)->NewObject(jniEnv, clazz, tensorInfoConstructor, shape, javaDataType, onnxTensorTypeJava); return tensorInfo; } jobject convertToMapInfo(JNIEnv *jniEnv, const OrtApi * api, const OrtMapTypeInfo * info) { // Create the java methods we need to call. 
// Get the ONNXTensorType enum static method char *onnxTensorTypeClassName = "ai/onnxruntime/TensorInfo$OnnxTensorType"; jclass onnxTensorTypeClazz = (*jniEnv)->FindClass(jniEnv, onnxTensorTypeClassName); jmethodID onnxTensorTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,onnxTensorTypeClazz, "mapFromInt", "(I)Lai/onnxruntime/TensorInfo$OnnxTensorType;"); // Get the ONNXJavaType enum static method char *javaDataTypeClassName = "ai/onnxruntime/OnnxJavaType"; jclass onnxJavaTypeClazz = (*jniEnv)->FindClass(jniEnv, javaDataTypeClassName); jmethodID onnxJavaTypeMapFromONNXTensorType = (*jniEnv)->GetStaticMethodID(jniEnv,onnxJavaTypeClazz, "mapFromOnnxTensorType", "(Lai/onnxruntime/TensorInfo$OnnxTensorType;)Lai/onnxruntime/OnnxJavaType;"); // Get the map info class char *mapInfoClassName = "ai/onnxruntime/MapInfo"; jclass mapInfoClazz = (*jniEnv)->FindClass(jniEnv, mapInfoClassName); jmethodID mapInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,mapInfoClazz,"<init>","(ILai/onnxruntime/OnnxJavaType;Lai/onnxruntime/OnnxJavaType;)V"); // Extract the key type ONNXTensorElementDataType keyType; checkOrtStatus(jniEnv,api,api->GetMapKeyType(info,&keyType)); // Convert key type to java jint onnxTypeKey = convertFromONNXDataFormat(keyType); jobject onnxTensorTypeJavaKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeKey); jobject onnxJavaTypeKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaKey); // according to include/onnxruntime/core/framework/data_types.h only the following values are supported. // string, int64, float, double // So extract the value type, then convert it to a tensor type so we can get it's element type. OrtTypeInfo* valueTypeInfo; checkOrtStatus(jniEnv,api,api->GetMapValueType(info,&valueTypeInfo)); const OrtTensorTypeAndShapeInfo* tensorValueInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToTensorInfo(valueTypeInfo,&tensorValueInfo)); ONNXTensorElementDataType valueType; checkOrtStatus(jniEnv,api,api->GetTensorElementType(tensorValueInfo,&valueType)); api->ReleaseTypeInfo(valueTypeInfo); tensorValueInfo = NULL; valueTypeInfo = NULL; // Convert value type to java jint onnxTypeValue = convertFromONNXDataFormat(valueType); jobject onnxTensorTypeJavaValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeValue); jobject onnxJavaTypeValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaValue); // Construct map info jobject mapInfo = (*jniEnv)->NewObject(jniEnv,mapInfoClazz,mapInfoConstructor,(jint)-1,onnxJavaTypeKey,onnxJavaTypeValue); return mapInfo; } jobject createEmptyMapInfo(JNIEnv *jniEnv) { // Create the ONNXJavaType enum char *onnxJavaTypeClassName = "ai/onnxruntime/OnnxJavaType"; jclass clazz = (*jniEnv)->FindClass(jniEnv, onnxJavaTypeClassName); jmethodID onnxJavaTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,clazz, "mapFromInt", "(I)Lai/onnxruntime/OnnxJavaType;"); jobject unknownType = (*jniEnv)->CallStaticObjectMethod(jniEnv,clazz,onnxJavaTypeMapFromInt,0); char *mapInfoClassName = "ai/onnxruntime/MapInfo"; clazz = (*jniEnv)->FindClass(jniEnv, mapInfoClassName); jmethodID mapInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,clazz,"<init>","(Lai/onnxruntime/OnnxJavaType;Lai/onnxruntime/OnnxJavaType;)V"); jobject mapInfo = (*jniEnv)->NewObject(jniEnv,clazz,mapInfoConstructor,unknownType,unknownType); return mapInfo; } jobject 
convertToSequenceInfo(JNIEnv *jniEnv, const OrtApi * api, const OrtSequenceTypeInfo * info) { // Get the sequence info class char *sequenceInfoClassName = "ai/onnxruntime/SequenceInfo"; jclass sequenceInfoClazz = (*jniEnv)->FindClass(jniEnv, sequenceInfoClassName); // according to include/onnxruntime/core/framework/data_types.h the following values are supported. // tensor types, map<string,float> and map<long,float> OrtTypeInfo* elementTypeInfo; checkOrtStatus(jniEnv,api,api->GetSequenceElementType(info,&elementTypeInfo)); ONNXType type; checkOrtStatus(jniEnv,api,api->GetOnnxTypeFromTypeInfo(elementTypeInfo,&type)); jobject sequenceInfo; switch (type) { case ONNX_TYPE_TENSOR: { // Figure out element type const OrtTensorTypeAndShapeInfo* elementTensorInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToTensorInfo(elementTypeInfo,&elementTensorInfo)); ONNXTensorElementDataType element; checkOrtStatus(jniEnv,api,api->GetTensorElementType(elementTensorInfo,&element)); // Convert element type into ONNXTensorType jint onnxTypeInt = convertFromONNXDataFormat(element); // Get the ONNXTensorType enum static method char *onnxTensorTypeClassName = "ai/onnxruntime/TensorInfo$OnnxTensorType"; jclass onnxTensorTypeClazz = (*jniEnv)->FindClass(jniEnv, onnxTensorTypeClassName); jmethodID onnxTensorTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,onnxTensorTypeClazz, "mapFromInt", "(I)Lai/onnxruntime/TensorInfo$OnnxTensorType;"); jobject onnxTensorTypeJava = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeInt); // Get the ONNXJavaType enum static method char *javaDataTypeClassName = "ai/onnxruntime/OnnxJavaType"; jclass onnxJavaTypeClazz = (*jniEnv)->FindClass(jniEnv, javaDataTypeClassName); jmethodID onnxJavaTypeMapFromONNXTensorType = (*jniEnv)->GetStaticMethodID(jniEnv,onnxJavaTypeClazz, "mapFromOnnxTensorType", "(Lai/onnxruntime/TensorInfo$OnnxTensorType;)Lai/onnxruntime/OnnxJavaType;"); jobject onnxJavaType = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJava); // Construct sequence info jmethodID sequenceInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,sequenceInfoClazz,"<init>","(ILai/onnxruntime/OnnxJavaType;)V"); sequenceInfo = (*jniEnv)->NewObject(jniEnv,sequenceInfoClazz,sequenceInfoConstructor,(jint)-1,onnxJavaType); break; } case ONNX_TYPE_MAP: { // Extract the map info const OrtMapTypeInfo* mapInfo; checkOrtStatus(jniEnv,api,api->CastTypeInfoToMapTypeInfo(elementTypeInfo,&mapInfo)); // Convert it using the existing convert function jobject javaMapInfo = convertToMapInfo(jniEnv,api,mapInfo); // Construct sequence info jmethodID sequenceInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,sequenceInfoClazz,"<init>","(ILai/onnxruntime/MapInfo;)V"); sequenceInfo = (*jniEnv)->NewObject(jniEnv,sequenceInfoClazz,sequenceInfoConstructor,(jint)-1,javaMapInfo); break; } default: { sequenceInfo = createEmptySequenceInfo(jniEnv); throwOrtException(jniEnv,convertErrorCode(ORT_INVALID_ARGUMENT),"Invalid element type found in sequence"); break; } } api->ReleaseTypeInfo(elementTypeInfo); elementTypeInfo = NULL; return sequenceInfo; } jobject createEmptySequenceInfo(JNIEnv *jniEnv) { // Create the ONNXJavaType enum char *onnxJavaTypeClassName = "ai/onnxruntime/OnnxJavaType"; jclass clazz = (*jniEnv)->FindClass(jniEnv, onnxJavaTypeClassName); jmethodID onnxJavaTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,clazz, "mapFromInt", "(I)Lai/onnxruntime/OnnxJavaType;"); jobject unknownType = 
(*jniEnv)->CallStaticObjectMethod(jniEnv,clazz,onnxJavaTypeMapFromInt,0); char *sequenceInfoClassName = "ai/onnxruntime/SequenceInfo"; clazz = (*jniEnv)->FindClass(jniEnv, sequenceInfoClassName); jmethodID sequenceInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,clazz,"<init>","(ILai/onnxruntime/OnnxJavaType;)V"); jobject sequenceInfo = (*jniEnv)->NewObject(jniEnv,clazz,sequenceInfoConstructor,-1,unknownType); return sequenceInfo; } size_t copyJavaToPrimitiveArray(JNIEnv *jniEnv, ONNXTensorElementDataType onnxType, uint8_t* tensor, jarray input) { uint32_t inputLength = (*jniEnv)->GetArrayLength(jniEnv,input); size_t consumedSize = inputLength * onnxTypeSize(onnxType); switch (onnxType) { case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8: // maps to c type uint8_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8: { // maps to c type int8_t jbyteArray typedArr = (jbyteArray) input; (*jniEnv)->GetByteArrayRegion(jniEnv, typedArr, 0, inputLength, (jbyte * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16: // maps to c type uint16_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16: { // maps to c type int16_t jshortArray typedArr = (jshortArray) input; (*jniEnv)->GetShortArrayRegion(jniEnv, typedArr, 0, inputLength, (jshort * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32: // maps to c type uint32_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32: { // maps to c type int32_t jintArray typedArr = (jintArray) input; (*jniEnv)->GetIntArrayRegion(jniEnv, typedArr, 0, inputLength, (jint * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64: // maps to c type uint64_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64: { // maps to c type int64_t jlongArray typedArr = (jlongArray) input; (*jniEnv)->GetLongArrayRegion(jniEnv, typedArr, 0, inputLength, (jlong * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16: { throwOrtException(jniEnv, convertErrorCode(ORT_NOT_IMPLEMENTED), "16-bit float not supported."); return 0; /* float *floatArr = malloc(sizeof(float) * inputLength); uint16_t *halfArr = (uint16_t *) tensor; for (uint32_t i = 0; i < inputLength; i++) { floatArr[i] = convertHalfToFloat(halfArr[i]); } jfloatArray typedArr = (jfloatArray) input; (*jniEnv)->GetFloatArrayRegion(jniEnv, typedArr, 0, inputLength, floatArr); free(floatArr); return consumedSize; */ } case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT: { // maps to c type float jfloatArray typedArr = (jfloatArray) input; (*jniEnv)->GetFloatArrayRegion(jniEnv, typedArr, 0, inputLength, (jfloat * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE: { // maps to c type double jdoubleArray typedArr = (jdoubleArray) input; (*jniEnv)->GetDoubleArrayRegion(jniEnv, typedArr, 0, inputLength, (jdouble * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING: { // maps to c++ type std::string throwOrtException(jniEnv, convertErrorCode(ORT_NOT_IMPLEMENTED), "String is not supported."); return 0; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL: { jbooleanArray typedArr = (jbooleanArray) input; (*jniEnv)->GetBooleanArrayRegion(jniEnv, typedArr, 0, inputLength, (jboolean *) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64: // complex with float32 real and imaginary components case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128: // complex with float64 real and imaginary components case ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16: // Non-IEEE floating-point format based on IEEE754 single-precision case ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED: 
default: { throwOrtException(jniEnv, convertErrorCode(ORT_INVALID_ARGUMENT), "Invalid tensor element type."); return 0; } } } size_t copyJavaToTensor(JNIEnv *jniEnv, ONNXTensorElementDataType onnxType, uint8_t* tensor, size_t tensorSize, uint32_t dimensionsRemaining, jarray input) { if (dimensionsRemaining == 1) { // write out 1d array of the respective primitive type return copyJavaToPrimitiveArray(jniEnv,onnxType,tensor,input); } else { // recurse through the dimensions // Java arrays are objects until the final dimension jobjectArray inputObjArr = (jobjectArray) input; uint32_t dimLength = (*jniEnv)->GetArrayLength(jniEnv,inputObjArr); size_t sizeConsumed = 0; for (uint32_t i = 0; i < dimLength; i++) { jarray childArr = (jarray) (*jniEnv)->GetObjectArrayElement(jniEnv,inputObjArr,i); sizeConsumed += copyJavaToTensor(jniEnv, onnxType, tensor + sizeConsumed, tensorSize - sizeConsumed, dimensionsRemaining - 1, childArr); // Cleanup reference to childArr so it doesn't prevent GC. (*jniEnv)->DeleteLocalRef(jniEnv,childArr); } return sizeConsumed; } } size_t copyPrimitiveArrayToJava(JNIEnv *jniEnv, ONNXTensorElementDataType onnxType, uint8_t* tensor, jarray output) { uint32_t outputLength = (*jniEnv)->GetArrayLength(jniEnv,output); size_t consumedSize = outputLength * onnxTypeSize(onnxType); switch (onnxType) { case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8: // maps to c type uint8_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8: { // maps to c type int8_t jbyteArray typedArr = (jbyteArray) output; (*jniEnv)->SetByteArrayRegion(jniEnv, typedArr, 0, outputLength, (jbyte * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16: // maps to c type uint16_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16: { // maps to c type int16_t jshortArray typedArr = (jshortArray) output; (*jniEnv)->SetShortArrayRegion(jniEnv, typedArr, 0, outputLength, (jshort * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32: // maps to c type uint32_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32: { // maps to c type int32_t jintArray typedArr = (jintArray) output; (*jniEnv)->SetIntArrayRegion(jniEnv, typedArr, 0, outputLength, (jint * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64: // maps to c type uint64_t case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64: { // maps to c type int64_t jlongArray typedArr = (jlongArray) output; (*jniEnv)->SetLongArrayRegion(jniEnv, typedArr, 0, outputLength, (jlong * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16: { // stored as a uint16_t float *floatArr = malloc(sizeof(float) * outputLength); uint16_t *halfArr = (uint16_t *) tensor; for (uint32_t i = 0; i < outputLength; i++) { floatArr[i] = convertHalfToFloat(halfArr[i]); } jfloatArray typedArr = (jfloatArray) output; (*jniEnv)->SetFloatArrayRegion(jniEnv, typedArr, 0, outputLength, floatArr); free(floatArr); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT: { // maps to c type float jfloatArray typedArr = (jfloatArray) output; (*jniEnv)->SetFloatArrayRegion(jniEnv, typedArr, 0, outputLength, (jfloat * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE: { // maps to c type double jdoubleArray typedArr = (jdoubleArray) output; (*jniEnv)->SetDoubleArrayRegion(jniEnv, typedArr, 0, outputLength, (jdouble * ) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING: { // maps to c++ type std::string // Shouldn't reach here, as it's caught by a different codepath in the initial OnnxTensor.getArray call. 
throwOrtException(jniEnv, convertErrorCode(ORT_NOT_IMPLEMENTED), "String is not supported by this codepath, please raise a Github issue as it should not reach here."); return 0; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL: { jbooleanArray typedArr = (jbooleanArray) output; (*jniEnv)->SetBooleanArrayRegion(jniEnv, typedArr, 0, outputLength, (jboolean *) tensor); return consumedSize; } case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64: // complex with float32 real and imaginary components case ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128: // complex with float64 real and imaginary components case ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16: // Non-IEEE floating-point format based on IEEE754 single-precision case ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED: default: { throwOrtException(jniEnv, convertErrorCode(ORT_NOT_IMPLEMENTED), "Invalid tensor element type."); return 0; } } } size_t copyTensorToJava(JNIEnv *jniEnv, ONNXTensorElementDataType onnxType, uint8_t* tensor, size_t tensorSize, uint32_t dimensionsRemaining, jarray output) { if (dimensionsRemaining == 1) { // write out 1d array of the respective primitive type return copyPrimitiveArrayToJava(jniEnv,onnxType,tensor,output); } else { // recurse through the dimensions // Java arrays are objects until the final dimension jobjectArray outputObjArr = (jobjectArray) output; uint32_t dimLength = (*jniEnv)->GetArrayLength(jniEnv,outputObjArr); size_t sizeConsumed = 0; for (uint32_t i = 0; i < dimLength; i++) { jarray childArr = (jarray) (*jniEnv)->GetObjectArrayElement(jniEnv,outputObjArr,i); sizeConsumed += copyTensorToJava(jniEnv, onnxType, tensor + sizeConsumed, tensorSize - sizeConsumed, dimensionsRemaining - 1, childArr); // Cleanup reference to childArr so it doesn't prevent GC. (*jniEnv)->DeleteLocalRef(jniEnv,childArr); } return sizeConsumed; } } jobject createStringFromStringTensor(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* tensor) { // Get the buffer size needed size_t totalStringLength; checkOrtStatus(jniEnv,api,api->GetStringTensorDataLength(tensor,&totalStringLength)); // Create the character and offset buffers char * characterBuffer; checkOrtStatus(jniEnv,api,api->AllocatorAlloc(allocator,sizeof(char)*(totalStringLength+1),(void**)&characterBuffer)); size_t * offsets; checkOrtStatus(jniEnv,api,api->AllocatorAlloc(allocator,sizeof(size_t),(void**)&offsets)); // Get a view on the String data checkOrtStatus(jniEnv,api,api->GetStringTensorContent(tensor,characterBuffer,totalStringLength,offsets,1)); size_t curSize = (offsets[0]) + 1; characterBuffer[curSize-1] = '\0'; jobject tempString = (*jniEnv)->NewStringUTF(jniEnv,characterBuffer); checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,characterBuffer)); checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,offsets)); return tempString; } void copyStringTensorToArray(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* tensor, size_t length, jobjectArray outputArray) { // Get the buffer size needed size_t totalStringLength; checkOrtStatus(jniEnv,api,api->GetStringTensorDataLength(tensor,&totalStringLength)); // Create the character and offset buffers char * characterBuffer; checkOrtStatus(jniEnv,api,api->AllocatorAlloc(allocator,sizeof(char)*(totalStringLength+length),(void**)&characterBuffer)); // length + 1 as we need to write out the final offset size_t * offsets; checkOrtStatus(jniEnv,api,api->AllocatorAlloc(allocator,sizeof(size_t)*(length+1),(void**)&offsets)); // Get a view on the String data 
checkOrtStatus(jniEnv,api,api->GetStringTensorContent(tensor,characterBuffer,totalStringLength,offsets,length)); // Get the final offset, write to the end of the array. checkOrtStatus(jniEnv,api,api->GetStringTensorDataLength(tensor,offsets+length)); char * tempBuffer = NULL; size_t bufferSize = 0; for (size_t i = 0; i < length; i++) { size_t curSize = (offsets[i+1] - offsets[i]) + 1; if (curSize > bufferSize) { checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,tempBuffer)); checkOrtStatus(jniEnv,api,api->AllocatorAlloc(allocator,curSize,(void**)&tempBuffer)); bufferSize = curSize; } memcpy(tempBuffer,characterBuffer+offsets[i],curSize); tempBuffer[curSize-1] = '\0'; jobject tempString = (*jniEnv)->NewStringUTF(jniEnv,tempBuffer); (*jniEnv)->SetObjectArrayElement(jniEnv,outputArray,i,tempString); } if (tempBuffer != NULL) { checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,tempBuffer)); } checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,characterBuffer)); checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,offsets)); } jobjectArray createStringArrayFromTensor(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* tensor) { // Extract tensor info OrtTensorTypeAndShapeInfo* tensorInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(tensor,&tensorInfo)); // Get the element count of this tensor size_t length; checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(tensorInfo,&length)); api->ReleaseTensorTypeAndShapeInfo(tensorInfo); // Create the java array jclass stringClazz = (*jniEnv)->FindClass(jniEnv,"java/lang/String"); jobjectArray outputArray = (*jniEnv)->NewObjectArray(jniEnv,length,stringClazz,NULL); copyStringTensorToArray(jniEnv, api, allocator, tensor, length, outputArray); return outputArray; } jlongArray createLongArrayFromTensor(JNIEnv *jniEnv, const OrtApi * api, OrtValue* tensor) { // Extract tensor type OrtTensorTypeAndShapeInfo* tensorInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(tensor,&tensorInfo)); ONNXTensorElementDataType value; checkOrtStatus(jniEnv,api,api->GetTensorElementType(tensorInfo,&value)); // Get the element count of this tensor size_t length; checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(tensorInfo,&length)); api->ReleaseTensorTypeAndShapeInfo(tensorInfo); // Extract the values uint8_t* arr; checkOrtStatus(jniEnv,api,api->GetTensorMutableData((OrtValue*)tensor,(void**)&arr)); // Create the java array and copy to it. jlongArray outputArray = (*jniEnv)->NewLongArray(jniEnv,length); copyPrimitiveArrayToJava(jniEnv, value, arr, outputArray); return outputArray; } jfloatArray createFloatArrayFromTensor(JNIEnv *jniEnv, const OrtApi * api, OrtValue* tensor) { // Extract tensor type OrtTensorTypeAndShapeInfo* tensorInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(tensor,&tensorInfo)); ONNXTensorElementDataType value; checkOrtStatus(jniEnv,api,api->GetTensorElementType(tensorInfo,&value)); // Get the element count of this tensor size_t length; checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(tensorInfo,&length)); api->ReleaseTensorTypeAndShapeInfo(tensorInfo); // Extract the values uint8_t* arr; checkOrtStatus(jniEnv,api,api->GetTensorMutableData((OrtValue*)tensor,(void**)&arr)); // Create the java array and copy to it. 
jfloatArray outputArray = (*jniEnv)->NewFloatArray(jniEnv,length); copyPrimitiveArrayToJava(jniEnv, value, arr, outputArray); return outputArray; } jdoubleArray createDoubleArrayFromTensor(JNIEnv *jniEnv, const OrtApi * api, OrtValue* tensor) { // Extract tensor type OrtTensorTypeAndShapeInfo* tensorInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(tensor,&tensorInfo)); ONNXTensorElementDataType value; checkOrtStatus(jniEnv,api,api->GetTensorElementType(tensorInfo,&value)); // Get the element count of this tensor size_t length; checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(tensorInfo,&length)); api->ReleaseTensorTypeAndShapeInfo(tensorInfo); // Extract the values uint8_t* arr; checkOrtStatus(jniEnv,api,api->GetTensorMutableData((OrtValue*)tensor,(void**)&arr)); // Create the java array and copy to it. jdoubleArray outputArray = (*jniEnv)->NewDoubleArray(jniEnv,length); copyPrimitiveArrayToJava(jniEnv, value, arr, outputArray); return outputArray; } jobject createJavaTensorFromONNX(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* tensor) { // Extract the type information OrtTensorTypeAndShapeInfo* info; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(tensor, &info)); // Construct the TensorInfo object jobject tensorInfo = convertToTensorInfo(jniEnv, api, info); // Release the info object api->ReleaseTensorTypeAndShapeInfo(info); // Construct the ONNXTensor object char *tensorClassName = "ai/onnxruntime/OnnxTensor"; jclass clazz = (*jniEnv)->FindClass(jniEnv, tensorClassName); jmethodID tensorConstructor = (*jniEnv)->GetMethodID(jniEnv,clazz, "<init>", "(JJLai/onnxruntime/TensorInfo;)V"); jobject javaTensor = (*jniEnv)->NewObject(jniEnv, clazz, tensorConstructor, (jlong) tensor, (jlong) allocator, tensorInfo); return javaTensor; } jobject createJavaSequenceFromONNX(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* sequence) { // Setup // Get the ONNXTensorType enum static method char *onnxTensorTypeClassName = "ai/onnxruntime/TensorInfo$OnnxTensorType"; jclass onnxTensorTypeClazz = (*jniEnv)->FindClass(jniEnv, onnxTensorTypeClassName); jmethodID onnxTensorTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,onnxTensorTypeClazz, "mapFromInt", "(I)Lai/onnxruntime/TensorInfo$OnnxTensorType;"); // Get the ONNXJavaType enum static method char *javaDataTypeClassName = "ai/onnxruntime/OnnxJavaType"; jclass onnxJavaTypeClazz = (*jniEnv)->FindClass(jniEnv, javaDataTypeClassName); jmethodID onnxJavaTypeMapFromONNXTensorType = (*jniEnv)->GetStaticMethodID(jniEnv,onnxJavaTypeClazz, "mapFromOnnxTensorType", "(Lai/onnxruntime/TensorInfo$OnnxTensorType;)Lai/onnxruntime/OnnxJavaType;"); // Get the sequence info class char *sequenceInfoClassName = "ai/onnxruntime/SequenceInfo"; jclass sequenceInfoClazz = (*jniEnv)->FindClass(jniEnv, sequenceInfoClassName); // Get the element count of this sequence size_t count; checkOrtStatus(jniEnv,api,api->GetValueCount(sequence,&count)); // Extract the first element OrtValue* firstElement; checkOrtStatus(jniEnv,api,api->GetValue(sequence,0,allocator,&firstElement)); ONNXType elementType; checkOrtStatus(jniEnv,api,api->GetValueType(firstElement,&elementType)); jobject sequenceInfo; switch (elementType) { case ONNX_TYPE_TENSOR: { // Figure out element type OrtTensorTypeAndShapeInfo* firstElementInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(firstElement,&firstElementInfo)); ONNXTensorElementDataType element; checkOrtStatus(jniEnv,api,api->GetTensorElementType(firstElementInfo,&element)); 
api->ReleaseTensorTypeAndShapeInfo(firstElementInfo); // Convert element type into ONNXTensorType jint onnxTypeInt = convertFromONNXDataFormat(element); jobject onnxTensorTypeJava = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeInt); jobject onnxJavaType = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJava); // Construct sequence info jmethodID sequenceInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,sequenceInfoClazz,"<init>","(ILai/onnxruntime/OnnxJavaType;)V"); sequenceInfo = (*jniEnv)->NewObject(jniEnv,sequenceInfoClazz,sequenceInfoConstructor,(jint)count,onnxJavaType); break; } case ONNX_TYPE_MAP: { // Extract key OrtValue* keys; checkOrtStatus(jniEnv,api,api->GetValue(firstElement,0,allocator,&keys)); // Extract key type OrtTensorTypeAndShapeInfo* keysInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(keys,&keysInfo)); ONNXTensorElementDataType key; checkOrtStatus(jniEnv,api,api->GetTensorElementType(keysInfo,&key)); // Get the element count of this map size_t mapCount; checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(keysInfo,&mapCount)); api->ReleaseTensorTypeAndShapeInfo(keysInfo); // Convert key type to java jint onnxTypeKey = convertFromONNXDataFormat(key); jobject onnxTensorTypeJavaKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeKey); jobject onnxJavaTypeKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaKey); // Extract value OrtValue* values; checkOrtStatus(jniEnv,api,api->GetValue(firstElement,1,allocator,&values)); // Extract value type OrtTensorTypeAndShapeInfo* valuesInfo; checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(values,&valuesInfo)); ONNXTensorElementDataType value; checkOrtStatus(jniEnv,api,api->GetTensorElementType(valuesInfo,&value)); api->ReleaseTensorTypeAndShapeInfo(valuesInfo); // Convert value type to java jint onnxTypeValue = convertFromONNXDataFormat(value); jobject onnxTensorTypeJavaValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeValue); jobject onnxJavaTypeValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaValue); // Get the map info class char *mapInfoClassName = "ai/onnxruntime/MapInfo"; jclass mapInfoClazz = (*jniEnv)->FindClass(jniEnv, mapInfoClassName); // Construct map info jmethodID mapInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,mapInfoClazz,"<init>","(ILai/onnxruntime/OnnxJavaType;Lai/onnxruntime/OnnxJavaType;)V"); jobject mapInfo = (*jniEnv)->NewObject(jniEnv,mapInfoClazz,mapInfoConstructor,(jint)mapCount,onnxJavaTypeKey,onnxJavaTypeValue); // Free the intermediate tensors. api->ReleaseValue(keys); api->ReleaseValue(values); // Construct sequence info jmethodID sequenceInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,sequenceInfoClazz,"<init>","(ILai/onnxruntime/MapInfo;)V"); sequenceInfo = (*jniEnv)->NewObject(jniEnv,sequenceInfoClazz,sequenceInfoConstructor,(jint)count,mapInfo); break; } default: { sequenceInfo = createEmptySequenceInfo(jniEnv); throwOrtException(jniEnv,convertErrorCode(ORT_INVALID_ARGUMENT),"Invalid element type found in sequence"); break; } } // Free the intermediate tensor. 
    api->ReleaseValue(firstElement);

    // Construct the ONNXSequence object
    char *sequenceClassName = "ai/onnxruntime/OnnxSequence";
    jclass sequenceClazz = (*jniEnv)->FindClass(jniEnv, sequenceClassName);
    jmethodID sequenceConstructor = (*jniEnv)->GetMethodID(jniEnv,sequenceClazz, "<init>", "(JJLai/onnxruntime/SequenceInfo;)V");
    jobject javaSequence = (*jniEnv)->NewObject(jniEnv, sequenceClazz, sequenceConstructor, (jlong)sequence, (jlong)allocator, sequenceInfo);
    return javaSequence;
}

jobject createJavaMapFromONNX(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* map) {
    // Setup
    // Get the ONNXTensorType enum static method
    char *onnxTensorTypeClassName = "ai/onnxruntime/TensorInfo$OnnxTensorType";
    jclass onnxTensorTypeClazz = (*jniEnv)->FindClass(jniEnv, onnxTensorTypeClassName);
    jmethodID onnxTensorTypeMapFromInt = (*jniEnv)->GetStaticMethodID(jniEnv,onnxTensorTypeClazz, "mapFromInt", "(I)Lai/onnxruntime/TensorInfo$OnnxTensorType;");

    // Get the ONNXJavaType enum static method
    char *javaDataTypeClassName = "ai/onnxruntime/OnnxJavaType";
    jclass onnxJavaTypeClazz = (*jniEnv)->FindClass(jniEnv, javaDataTypeClassName);
    jmethodID onnxJavaTypeMapFromONNXTensorType = (*jniEnv)->GetStaticMethodID(jniEnv,onnxJavaTypeClazz, "mapFromOnnxTensorType", "(Lai/onnxruntime/TensorInfo$OnnxTensorType;)Lai/onnxruntime/OnnxJavaType;");

    // Get the map info class
    char *mapInfoClassName = "ai/onnxruntime/MapInfo";
    jclass mapInfoClazz = (*jniEnv)->FindClass(jniEnv, mapInfoClassName);

    // Extract key
    OrtValue* keys;
    checkOrtStatus(jniEnv,api,api->GetValue(map,0,allocator,&keys));

    // Extract key type
    OrtTensorTypeAndShapeInfo* keysInfo;
    checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(keys,&keysInfo));
    ONNXTensorElementDataType key;
    checkOrtStatus(jniEnv,api,api->GetTensorElementType(keysInfo,&key));

    // Get the element count of this map
    size_t mapCount;
    checkOrtStatus(jniEnv,api,api->GetTensorShapeElementCount(keysInfo,&mapCount));
    api->ReleaseTensorTypeAndShapeInfo(keysInfo);

    // Convert key type to java
    jint onnxTypeKey = convertFromONNXDataFormat(key);
    jobject onnxTensorTypeJavaKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeKey);
    jobject onnxJavaTypeKey = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaKey);

    // Extract value
    OrtValue* values;
    checkOrtStatus(jniEnv,api,api->GetValue(map,1,allocator,&values));

    // Extract value type
    OrtTensorTypeAndShapeInfo* valuesInfo;
    checkOrtStatus(jniEnv,api,api->GetTensorTypeAndShape(values,&valuesInfo));
    ONNXTensorElementDataType value;
    checkOrtStatus(jniEnv,api,api->GetTensorElementType(valuesInfo,&value));
    api->ReleaseTensorTypeAndShapeInfo(valuesInfo);

    // Convert value type to java
    jint onnxTypeValue = convertFromONNXDataFormat(value);
    jobject onnxTensorTypeJavaValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxTensorTypeClazz,onnxTensorTypeMapFromInt,onnxTypeValue);
    jobject onnxJavaTypeValue = (*jniEnv)->CallStaticObjectMethod(jniEnv,onnxJavaTypeClazz,onnxJavaTypeMapFromONNXTensorType,onnxTensorTypeJavaValue);

    // Construct map info
    jmethodID mapInfoConstructor = (*jniEnv)->GetMethodID(jniEnv,mapInfoClazz,"<init>","(ILai/onnxruntime/OnnxJavaType;Lai/onnxruntime/OnnxJavaType;)V");
    jobject mapInfo = (*jniEnv)->NewObject(jniEnv,mapInfoClazz,mapInfoConstructor,(jint)mapCount,onnxJavaTypeKey,onnxJavaTypeValue);

    // Free the intermediate tensors.
    checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,keys));
    checkOrtStatus(jniEnv,api,api->AllocatorFree(allocator,values));

    // Construct the ONNXMap object
    char *mapClassName = "ai/onnxruntime/OnnxMap";
    jclass mapClazz = (*jniEnv)->FindClass(jniEnv, mapClassName);
    jmethodID mapConstructor = (*jniEnv)->GetMethodID(jniEnv,mapClazz, "<init>", "(JJLai/onnxruntime/MapInfo;)V");
    jobject javaMap = (*jniEnv)->NewObject(jniEnv, mapClazz, mapConstructor, (jlong)map, (jlong) allocator, mapInfo);
    return javaMap;
}

jobject convertOrtValueToONNXValue(JNIEnv *jniEnv, const OrtApi * api, OrtAllocator* allocator, OrtValue* onnxValue) {
    // Note this is the ONNXType C enum
    ONNXType valueType;
    checkOrtStatus(jniEnv,api,api->GetValueType(onnxValue,&valueType));
    switch (valueType) {
        case ONNX_TYPE_TENSOR: {
            return createJavaTensorFromONNX(jniEnv, api, allocator, onnxValue);
        }
        case ONNX_TYPE_SEQUENCE: {
            return createJavaSequenceFromONNX(jniEnv, api, allocator, onnxValue);
        }
        case ONNX_TYPE_MAP: {
            return createJavaMapFromONNX(jniEnv, api, allocator, onnxValue);
        }
        case ONNX_TYPE_UNKNOWN:
        case ONNX_TYPE_OPAQUE:
        case ONNX_TYPE_SPARSETENSOR: {
            throwOrtException(jniEnv,convertErrorCode(ORT_NOT_IMPLEMENTED),"These types are unsupported - ONNX_TYPE_UNKNOWN, ONNX_TYPE_OPAQUE, ONNX_TYPE_SPARSETENSOR.");
            break;
        }
    }
    return NULL;
}

jint throwOrtException(JNIEnv *jniEnv, int messageId, const char *message) {
    jstring messageStr = (*jniEnv)->NewStringUTF(jniEnv, message);

    char *className = "ai/onnxruntime/OrtException";
    jclass exClazz = (*jniEnv)->FindClass(jniEnv,className);
    jmethodID exConstructor = (*jniEnv)->GetMethodID(jniEnv, exClazz, "<init>", "(ILjava/lang/String;)V");
    jobject javaException = (*jniEnv)->NewObject(jniEnv, exClazz, exConstructor, messageId, messageStr);

    return (*jniEnv)->Throw(jniEnv,javaException);
}

jint convertErrorCode(OrtErrorCode code) {
    switch (code) {
        case ORT_OK: return 0;
        case ORT_FAIL: return 1;
        case ORT_INVALID_ARGUMENT: return 2;
        case ORT_NO_SUCHFILE: return 3;
        case ORT_NO_MODEL: return 4;
        case ORT_ENGINE_ERROR: return 5;
        case ORT_RUNTIME_EXCEPTION: return 6;
        case ORT_INVALID_PROTOBUF: return 7;
        case ORT_MODEL_LOADED: return 8;
        case ORT_NOT_IMPLEMENTED: return 9;
        case ORT_INVALID_GRAPH: return 10;
        case ORT_EP_FAIL: return 11;
        default: return -1; // Unknown error code
    }
}

void checkOrtStatus(JNIEnv *jniEnv, const OrtApi * api, OrtStatus * status) {
    if (status != NULL) {
        const char* message = api->GetErrorMessage(status);
        int len = strlen(message)+1;
        // Copy the message out of the status before releasing it, as
        // ReleaseStatus invalidates the message pointer.
        char* copy = malloc(sizeof(char)*len);
        memcpy(copy,message,len);
        int messageId = convertErrorCode(api->GetErrorCode(status));
        api->ReleaseStatus(status);
        throwOrtException(jniEnv,messageId,copy);
        // NewStringUTF (inside throwOrtException) copies the characters, so
        // the buffer can be freed once the exception has been constructed.
        free(copy);
    }
}
Zimbabwe's quadrillionaires are about to disappear. The southern African country is finally mopping up what remains of its worthless currency by offering to swap bank deposits or cash for as little as one U.S. dollar per 35 quadrillion Zimbabwean dollars.

Zimbabwe abandoned its money in 2009 after hyperinflation destroyed its value. Most transactions since have been conducted in U.S. dollars or South African rand. But a small number of Zimbabwe dollar notes with virtually no value remain in circulation and the central bank is now trying to take them out of the system. Zimbabweans have until the end of September to exchange them.

"The decommissioning of the Z$ has been pending and long outstanding since 2009," the bank said in a statement.

Zimbabwe spiraled into deep economic crisis after President Robert Mugabe introduced a radical policy of land distribution in the late 1990s and early 2000s. Facing chronic shortages of basic commodities, the central bank kept printing money to fund budget deficits, ultimately causing prices to go crazy. At the peak of the crisis, prices were doubling every 24 hours. Cato Institute economists estimate monthly inflation hit 7.9 billion percent in 2008. There are no official numbers -- the authorities gave up trying to keep track.

Unemployment soared and public services collapsed. According to the World Bank, the economy shrank by nearly 18% in 2008.

The Zimbabwe dollar was swiftly replaced by foreign currencies, first unofficially, then as part of government efforts to curb inflation and revive the economy. Within a year, the country had returned to growth. Zimbabwe's GDP rose by 3.2% in 2014, but growth is expected to slow this year.
CROSS-CULTURAL ADAPTATION AND VALIDATION OF THE MONTREAL CHILDREN'S HOSPITAL FEEDING SCALE INTO BRAZILIAN PORTUGUESE

ABSTRACT

Objective: To cross-culturally adapt and validate the Montreal Children's Hospital Feeding Scale (MCH-FS) into Brazilian Portuguese.

Methods: The MCH-FS, originally validated in Canada, was validated in Brazil as the Escala Brasileira de Alimentação Infantil (EBAI) and developed according to the following steps: translation, production of the Brazilian Portuguese version, testing of the original and the Brazilian Portuguese versions, back-translation, analysis by experts and by the developer of the original questionnaire, and application of the final version. The EBAI was applied to 242 parents/caregivers responsible for feeding children from 6 months to 6 years and 11 months of age between February and May 2018, with 174 subjects in the control group and 68 in the case group. The psychometric properties evaluated were validity and reliability.

Results: In the case group, 79% of children were reported to have feeding difficulties, against 13% in the control group. The EBAI had good internal consistency (Cronbach's alpha=0.79). Using the suggested cutoff point of 45, the raw score discriminated between cases and controls with a sensitivity of 79.4% and specificity of 86.8% (area under the ROC curve=0.87).

Conclusions: The results obtained in the validation process of the EBAI demonstrate that the questionnaire has adequate psychometric properties and, thus, can be used to identify feeding difficulties in Brazilian children from 6 months to 6 years and 11 months of age.

INTRODUCTION

Feeding difficulties can occur in 20-35% of the pediatric population with neurotypical development. These rates can reach 80% in populations at risk, such as those with developmental delays, premature births and/or chronic and complex medical conditions. Feeding difficulties are a high-impact clinical problem, with negative consequences for the child,3,8,9 including failure to thrive, malnutrition, lethargy, developmental delay, aspiration, invasive medical procedures, admission to the inpatient unit and even death.10 Likewise, feeding difficulties significantly affect family relationships,7 leading to excessive stress during meals1,2,8,9,11 and hindering many aspects of the life and general well-being of both children and their families.

Due to the high prevalence12 and the negative consequences of feeding difficulties, health professionals should have access to a valid, reliable screening instrument of clinical applicability, able to quickly identify the complaints of parents or legal guardians about their children's feeding difficulties. Thus, referral to specialists can be carried out as soon as the problem is identified.12,13 However, to date, few self-completed instruments, applicable to parents/caregivers and with standardized psychometric measures, have been validated to reliably identify their perceptions of children's feeding difficulties.3,10 Previous eating scales, such as the Children's Eating Behavior Inventory (CEBI)14 and the Behavioral Pediatrics Feeding Assessment Scale (BPFAS),15,16 are used for scientific purposes, although they have proven too long for clinical use.13 The Montreal Children's Hospital Feeding Scale (MCH-FS) is applicable through the report of parents or guardians and was designed, with sound psychometric properties, to identify feeding difficulties in children from six months to six years and 11 months of age.
12 Consisting of 14 items, it aims to determine the severity of feeding difficulties, the degree of eating problems, and the level of concern of parents/caregivers.12 It is a one-page instrument, freely available and feasible for clinical application, and it has already been validated in other countries,13,21 with results similar to the original scale. Thus, the objective of this study was to carry out the cross-cultural adaptation and validation of the MCH-FS into Brazilian Portuguese in children from six months to six years and 11 months of age.

METHOD

This is a cross-sectional study carried out at Hospital Materno Infantil Presidente Vargas. The study was approved by the institution's Research Ethics Committee, CAAE No. 81513317.0.0000.5329. All parents/caregivers included in the study signed an informed consent form authorizing participation in the study. The scale validation process was carried out according to the methodology previously described in the literature.22,23

In the initial translation stage, two bilingual translators, having Portuguese as their mother tongue and fluent in English, independently translated the MCH-FS from English to Brazilian Portuguese. Brazilian cultural aspects were considered in this process rather than a literal translation. For the synthesis of the translations, the two versions were compared by specialized professionals and each item was evaluated, considering the best way to express it and the influence of cultural aspects. Disagreements related to the questions were adjusted in order to reach a consensus. Then, a single final version of the MCH-FS in Brazilian Portuguese was issued, namely the Brazilian Infant Feeding Scale (Escala Brasileira de Alimentação Infantil - EBAI) (Chart 1).

The original scale in English and the EBAI were applied to 20 bilingual individuals (main caregivers/feeders of children with typical development) at an interval of 30 days, in order to verify the equivalence of the scores between the two versions, starting with the original scale. Inclusion criteria for bilingual individuals were: to be from the community and to have children with the characteristics of the study sample. Regarding the minimum sample size for factor analysis, the general rule of ten subjects per variable was used, with a minimum of one hundred subjects in the total sample.24 Therefore, a sample size of at least 140 subjects was considered sufficient to perform the analyses, as the instrument studied has 14 items.

After the scale was translated, agreed to in its final version, back-translated, discussed, and approved by everyone involved in the process, the applicability of the EBAI was tested in a sample of 242 parents/caregivers responsible for feeding children, divided into two groups: cases (n=68) and controls (n=174). The 20 bilingual parents were not part of this sample. Recruitment in both groups was carried out consecutively from February to May 2018. The sample included only parents/caregivers of children between six months and six years and 11 months of age. Seven children were excluded from the study, as they presented only dysphagia and were fed by tube or gastrostomy. The control group was composed of 174 parents/caregivers of healthy children with typical development. Participants were recruited through public recruitment ads in social media.
Those interested in participating received informative written material with the objectives of the study, the inclusion criteria, and explanations about the scale, with the possibility of clarifying doubts with the researchers, if any. Children born at term (>37 weeks) were included in this group, with birth weight ≥2,500 g, with no pre-, peri- or post-natal complications and with adequate neuropsychomotor development, which was assessed through open questions to parents/caregivers. These questions included whether the children met the motor development milestones of their age group. The authors sent the scale and the instructions for completing it by e-mail to the parents/caregivers who agreed to participate.

The case group consisted of 68 parents/caregivers of children who were undergoing treatment or who had been referred for speech-language assessment due to feeding difficulties in the pediatric inpatient units and in the speech therapy clinic of the hospital, as well as by speech therapists specialized in feeding problems. The scale was filled out in person by the group's participants after a brief explanation and with the possibility of clarifying doubts throughout the period. This group included children with a diagnosis or suspicion of feeding difficulties, characterized by inability to progress to a texture suitable for their chronological age, refusal of food or acceptance of only small amounts of it, frequent vomiting, resistance struggles with the feeder during meals, prolonged feeding time, use of distractions to increase intake, and use of breastfeeding or a bottle at night. These difficulties could be associated or not with dysphagia. These 68 participants were subdivided into two groups: parents/caregivers of children who had feeding difficulties but no associated medical diagnosis or developmental delay (group "feeding difficulty without comorbidities", n=17) and parents/caregivers of children who had feeding difficulties and an associated medical diagnosis, such as prematurity and gastrointestinal, cardiorespiratory, genetic, and neurological disorders (group "feeding difficulty with comorbidities", n=51). Neuropsychomotor development assessment was carried out through the same open questions asked to parents/caregivers in the control group. Children who did not meet the expected motor development milestones for their age group were considered delayed.

To verify the test/retest reliability of the study, 25 parents/caregivers from the case group, selected at random, filled out the scale again 10 to 15 days after the first application. The selection was carried out using the WINPEPI 11.65 software, using a list of random numbers among those corresponding to the parents/caregivers of this group. The 25 selected subjects received the questionnaire by e-mail for the retest. The responses submitted at the two moments were compared to one another.

Quantitative variables and ordinal variables were described by the mean (standard deviation - SD) and the median (interquartile range). The Kolmogorov-Smirnov test was performed to test the normality of the data. Items in the three groups were compared by the Kruskal-Wallis test or by analysis of variance, and between two groups by the Mann-Whitney test. Categorical variables were described by frequencies and percentages and compared using the chi-square test. Quantitative variables were correlated by Spearman's correlation coefficient. To assess the internal consistency of the dimensions, Cronbach's alpha was used.
Test/retest results were compared using Pearson's correlation coefficient and, subsequently, their means were compared using Student's t test for paired samples. Performance measures of the raw score were calculated for the suggested cut-off point of 45: sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. A receiver operating characteristic (ROC) curve analysis was performed to assess the ability of the raw score to discriminate between cases and controls. A significance level of 5% was considered statistically significant. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 20.0 (IBM SPSS® Statistics, São Paulo, Brazil).

RESULTS

In this study, data were collected from 242 participants, 174 (71.9%) from the control group and 68 (28.1%) from the case group. In the latter, 17 (7.0%) belonged to the subgroup "feeding difficulty without comorbidities" and 51 (21.1%) to the subgroup "feeding difficulty with comorbidities". Table 1 shows the characteristics of both case and control groups. With regard to the bilingual parents, although score values were not identical for all participants, the ratings obtained in both languages were exactly the same.

When comparing the scores of each of the 14 questions on the scale, there was no significant difference between the subgroups with and without comorbidities, except for question 1, with lower scores for those with comorbidities (p=0.025), and questions 5 and 12, with higher scores for those with comorbidities (p=0.018 and p=0.046, respectively). The overall raw scores were not statistically different between the subgroups with and without comorbidities, which allowed for further analysis with the two groups together. When comparing the scores of each of the 14 questions between the control and the case groups, the case group presented significantly higher values for all items, except for item 5, as well as higher raw scores and T-scores (Table 2). The difference in final scores between the case and the control groups was statistically significant (p<0.001), with a higher frequency of children with values considered high (79.4%) in the case group.

There was a difference between the case group and the control group regarding the severity of the feeding disorders. In the former, mild difficulty was reported in 27.9% of cases, moderate difficulty in 17.6%, and severe difficulty in 33.8%, while most in the latter reported no eating difficulties (86.8%). Table 3 describes the comparison between cases and controls stratified by age range. It is possible to observe that both in the age group of six to 24 months and in the age group from 25 to 83 months, the case group had significantly higher values than the control group (p<0.001). There was a statistically significant difference between the groups (p<0.001) in the analysis of covariance (ANCOVA) adjusted for comparison between the T-score groups aged ≤24 months and >24 months.

Good internal consistency was found, in both groups, for all items on the scale (Cronbach's alpha=0.79). Using the suggested cutoff point of 45, the raw score differentiated cases from controls with a sensitivity of 79.4%, specificity of 86.8%, positive predictive value of 70.1%, negative predictive value of 91.5%, and accuracy of 84.7%. The area under the ROC curve was 0.87 (p<0.001; 95% confidence interval 0.81-0.92) (Figure 1).
As shown in Figure 1, the best sensitivity/specificity ratio was obtained with the cutoff point of 42.5, with a sensitivity of 82.4% and specificity of 83.9%. The correlation of the total score between test and retest was considered strong (r=0.92; p<0.001). The mean (SD) of the score was 63.2 (10.0) on the test and 62.7 (11.1) on the retest, with no statistically significant difference between the two moments (p=0.264).

DISCUSSION

The results of this study demonstrated that the EBAI is applicable in Brazil. It is, therefore, a useful tool for the identification of feeding difficulties in children from six months to six years and 11 months of age within the Brazilian cultural context. The translation and cross-cultural adaptation processes that resulted in the development of the EBAI made it possible to adapt the original scale and make it useful for use in the Brazilian culture. To this end, it was taken into account that, in the process of cross-cultural adaptation, it is crucial to maintain the coherence of the concepts and properties apprehended by the original version.22 In addition, each item of the initial protocol must be adapted in order to achieve semantic, linguistic, and contextual equivalence between the original and the adapted versions, so that it can also retain its equivalence in a specific situation of use.23

The score obtained by the control group for question 2 ("How concerned are you with your child's feeding?") was higher when compared to the control groups in other countries where the scale has already been validated.12,13,21 According to previous studies,25,26 the feeding practices of Brazilian mothers are loaded with symbolic values and deeply rooted in cultural aspects. In this cultural relationship, when children do not eat or reject food, they are "disqualifying" their mother's competence to ensure adequate nutrition.25 In addition, Brazilian mothers reported concerns about the amount of food their children ingested; such mothers seemed to judge good feeding by the amount of food that children were able to ingest rather than by the energy density that the food provided.25 Brazilian mothers also believe that eating well equals eating a lot.26 Thus, it is believed that the higher score found among the controls in question 2 suggests that this greater concern may be a Brazilian cultural trait. However, this finding did not affect the total score of the present study when compared to the original version.12

There was a statistically significant difference between the control group and the case group in relation to all questions, with the exception of question 5. This result demonstrates that the questions that make up the instrument are efficient in differentiating children with and without feeding difficulties. This difference in relation to the shorter observation of feeding time seems to be a cultural issue rather than a translation problem, since the total mean score found in this study is similar to the data already described in the literature.21 These results highlight the universality of feeding parameters regardless of culture, whether in North America, Europe, Southeast Asia or, now, South America.

In the comparison by age group (≤24 months and >24 months), the cases obtained a significantly higher mean than the controls. However, in the case group, older children had higher mean scores than younger ones, which suggests that there is no decrease in feeding difficulties with increasing age.
This difference demonstrates a tendency for feeding difficulties to persist and has been previously described.8,13 A possible explanation for this finding may lie in the likely increase in stress levels and other negative interactions between caregivers and children during meals.8,13 Caregiver-child interactions are often mentioned as a contributing or sustaining factor for the persistence of feeding difficulties.8 Higher means in the case group from six months of age onward are in accordance with other studies reporting the early onset of feeding difficulties.27,28 According to these reports, the onset of such difficulties occurs before the first year of life in 50% of children,27 and at 18 months of age or earlier in up to 75% of children.28

The findings of the present study regarding the persistence of feeding difficulties with advancing age, associated with the onset of symptoms before 25 months of age, further reinforce the need for a validated screening instrument that can be used for babies from six months of age, in order to identify feeding difficulties before the establishment of avoidant behavioral patterns.3 To date, two instruments have been published for that age group: the MCH-FS12 and the Pediatric Eating Assessment Tool (PediEAT),3,18,20 neither of which had been cross-culturally adapted and validated for Brazilian Portuguese.

The EBAI, the Brazilian version of the MCH-FS developed in the present study, has an internal consistency similar to the original version12 and the Dutch and Thai versions,13,21 demonstrating good sensitivity and specificity at the suggested cutoff point of 45. These findings are also in agreement with those of the original scale.12 The area under the ROC curve in the present study suggests good accuracy of the raw score in discriminating between cases and controls, being quite close to that of the original scale.

With regard to the difference found in the demographic data of parents/caregivers, those in the control group were older and had higher levels of education than those in the subgroup "feeding difficulties with comorbidities", which indicates a higher socio-cultural level in the control group. Such data may represent a limitation of the study, since parents/caregivers with greater access to information may interpret the questions differently. It should also be considered that there is no other validated Brazilian instrument against which to compare these findings.

The EBAI's proposal for cross-cultural adaptation and validation proved to be reliable, reaching the initially set objectives. Results showed that the questionnaire can be understood by the target audience, is able to achieve the objectives described in the original scale, and has appropriate psychometric measures for the identification of feeding disorders in Brazilian children from six months to six years and 11 months of age. Future use of the instrument in control and case groups with a similar sociodemographic profile may reduce the potential biases of the present study. It is concluded that the availability of the EBAI will allow health professionals to use a reliable and cost-free tool for the rapid detection of feeding problems, contributing to the early identification of these problems and the consequent faster referral to specialized treatment. Thus, it is expected to minimize the damage resulting from the organic, social, financial, and emotional stress that feeding problems bring upon children and their families.
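To make the screening arithmetic above concrete, the short sketch below recomputes sensitivity, specificity, predictive values, and accuracy from a 2x2 confusion table. The cell counts are back-calculated from the published rates (68 cases, 174 controls, sensitivity 79.4%, specificity 86.8%) and rounded, so they are an illustrative reconstruction rather than study data.

# Hedged sketch: screening metrics for the EBAI raw-score cutoff of 45.
# Cell counts are approximate reconstructions from the published rates.
tp, fn = 54, 14    # cases at/above vs. below the cutoff (54 + 14 = 68)
tn, fp = 151, 23   # controls below vs. at/above the cutoff (151 + 23 = 174)

sensitivity = tp / (tp + fn)                  # ~0.794
specificity = tn / (tn + fp)                  # ~0.868
ppv = tp / (tp + fp)                          # positive predictive value, ~0.701
npv = tn / (tn + fn)                          # negative predictive value, ~0.915
accuracy = (tp + tn) / (tp + tn + fp + fn)    # ~0.847

print(sensitivity, specificity, ppv, npv, accuracy)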
from django.db.models import ImageField
from django.db.models.fields.files import ImageFieldFile
from django.conf import settings


def urljoin(parts):
    """
    Takes a list of URL parts and smushes em together into a string,
    while ensuring no double slashes, but preserving any trailing slash(es)
    """
    if len(parts) == 0:
        raise ValueError('urljoin needs a list of at least length 1')
    return '/'.join([x.strip('/') for x in parts[0:-1]] + [parts[-1].lstrip('/')])


class IIIFObject(object):

    def __init__(self, parent):
        # for each profile defined in settings
        for name in settings.IIIF_PROFILES:
            profile = settings.IIIF_PROFILES[name]
            if parent.name:
                # A profile may be a plain dict or a callable that builds
                # one from the field file.
                if type(profile) is dict:
                    iiif = profile
                elif callable(profile):
                    iiif = profile(parent)
                identifier = parent.name.replace("/", "%2F")
                url = urljoin([iiif['host'], identifier, iiif['region'],
                               iiif['size'], iiif['rotation'],
                               '{}.{}'.format(iiif['quality'], iiif['format'])])
                setattr(self, name, url)
            else:
                setattr(self, name, "")
        # Add info.json URL. Note: this reuses `iiif` and `identifier` from
        # the last profile iterated above, so it assumes at least one profile
        # is configured and that all profiles share the same host.
        if parent.name:
            url = urljoin([iiif['host'], identifier, "info.json"])
            setattr(self, "info", url)
        else:
            setattr(self, "info", "")


class IIIFFieldFile(ImageFieldFile):

    @property
    def iiif(self):
        return IIIFObject(self)

    def __init__(self, *args, **kwargs):
        super(IIIFFieldFile, self).__init__(*args, **kwargs)


class IIIFField(ImageField):
    attr_class = IIIFFieldFile

    def __init__(self, *args, **kwargs):
        super(IIIFField, self).__init__(*args, **kwargs)
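A brief usage sketch may help here. The settings entry, model, and upload path below are hypothetical examples, not part of this module; the profile keys follow the dict shape that IIIFObject reads above.

# settings.py (hypothetical profile; keys match what IIIFObject expects)
IIIF_PROFILES = {
    'thumbnail': {
        'host': 'https://iiif.example.org/',
        'region': 'full',
        'size': '!300,300',
        'rotation': '0',
        'quality': 'default',
        'format': 'jpg',
    },
}

# models.py (hypothetical model; import IIIFField from wherever this
# module lives in your project)
from django.db import models

class Artwork(models.Model):
    image = IIIFField(upload_to='artworks/')

# Given an instance `art` whose file is stored at 'artworks/pic.jpg',
# art.image.iiif.thumbnail would be:
#   https://iiif.example.org/artworks%2Fpic.jpg/full/!300,300/0/default.jpg
# and art.image.iiif.info the matching .../artworks%2Fpic.jpg/info.json URL.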
package spf

import (
    "net"
    "strings"
    "time"

    "github.com/miekg/dns"
)

// LookupSPF returns the SPF TXT record for a domain.
// If no record is found, or more than one record is found, r is set
// accordingly to None or PermError.
// If the DNS lookup failed, r is set to TempError.
func LookupSPF(domain string) (spf string, r Result) {
    txts, err := lookupTXT(domain)
    if err != nil {
        return "", TempError
    }

    var spfs []string
    for _, txt := range txts {
        txt = strings.ToLower(txt)
        if txt == "v=spf1" || strings.HasPrefix(txt, "v=spf1 ") {
            spfs = append(spfs, txt)
        }
    }

    switch len(spfs) {
    case 0:
        return "", None
    case 1:
        return spfs[0], Result("")
    default:
        return "", PermError
    }
}

// lookupTXT uses miekg/dns, since net.LookupTXT returns an error if there
// are no TXT records. Returns a slice of TXT records and an error.
func lookupTXT(d string) ([]string, error) {
    var txt []string
    r, _, err := dnsQuest(d, dns.TypeTXT)
    if err != nil {
        return txt, err
    }
    for _, answ := range r.Answer {
        t := answ.(*dns.TXT)
        txt = append(txt, strings.Join(t.Txt, ""))
    }
    return txt, nil
}

func lookupA(d string) ([]net.IP, error) {
    var ips []net.IP
    r, _, err := dnsQuest(d, dns.TypeA)
    if err != nil {
        return ips, err
    }
    for _, answ := range r.Answer {
        a := answ.(*dns.A)
        ips = append(ips, a.A)
    }
    return ips, nil
}

func lookupAAAA(d string) ([]net.IP, error) {
    var ips []net.IP
    r, _, err := dnsQuest(d, dns.TypeAAAA)
    if err != nil {
        return ips, err
    }
    for _, answ := range r.Answer {
        a := answ.(*dns.AAAA)
        ips = append(ips, a.AAAA)
    }
    return ips, nil
}

func lookupMX(d string) ([]string, error) {
    var mxs []string
    r, _, err := dnsQuest(d, dns.TypeMX)
    if err != nil {
        return mxs, err
    }
    for _, answ := range r.Answer {
        mx := answ.(*dns.MX)
        mxs = append(mxs, mx.Mx)
    }
    return mxs, nil
}

// lookupPTR resolves the PTR records for an IP address. The reverse lookup
// name is built with dns.ReverseAddr, which produces the octet-reversed
// in-addr.arpa (IPv4) or nibble-reversed ip6.arpa (IPv6) form.
func lookupPTR(ip net.IP) ([]string, error) {
    var hosts []string
    ipstr, err := dns.ReverseAddr(ip.String())
    if err != nil {
        return hosts, err
    }
    r, _, err := dnsQuest(ipstr, dns.TypePTR)
    if err != nil {
        return hosts, err
    }
    for _, answ := range r.Answer {
        p := answ.(*dns.PTR)
        hosts = append(hosts, p.Ptr)
    }
    return hosts, nil
}

func dnsQuest(d string, t uint16) (r *dns.Msg, rtt time.Duration, err error) {
    m := new(dns.Msg)
    m.Id = dns.Id()
    m.SetQuestion(dns.Fqdn(d), t)
    m.RecursionDesired = true
    c := new(dns.Client)
    // The resolver is currently hardcoded to Google's public DNS.
    return c.Exchange(m, "8.8.8.8:53")
}

func init() {
    //config, _ := dns.ClientConfigFromFile("/etc/resolv.conf")
}
package com.hwloser.simple;

public class SpeedTestUtil {

  // Stores the start time for the calling thread
  private static ThreadLocal<Long> startTime = new ThreadLocal<Long>();

  public static void start() {
    // Record the start time
    startTime.set(System.currentTimeMillis());
  }

  public static void finish(String carName, String methodName) {
    // Get the finish time
    long finishTime = System.currentTimeMillis();

    // Print the timing information
    System.out
        .println(carName + " executed " + methodName + ", took: "
            + (finishTime - startTime.get()) + " ms.\n");
  }
}
Acute chest syndrome (ACS) in children with sickle cell anaemia in Uganda: prevalence, presentation and aetiology

Acute chest syndrome (ACS) is a serious complication of sickle cell anaemia (SCA). We set out to describe the burden, presentation and organisms associated with ACS amongst children with SCA attending Mulago Hospital, Kampala, Uganda. In a cross-sectional study, 256 children with SCA and fever attending Mulago Hospital were recruited. Chest X-rays, blood cultures, complete blood count and sputum induction were performed. Sputum samples were investigated by Ziehl-Neelsen staining, culture and DNA polymerase chain reaction (PCR) for Chlamydia pneumoniae. Of the 256 children, 22.7% had ACS. Clinical and laboratory findings were not significantly different between children with ACS and those without, apart from cough and abnormal signs on auscultation. Among the 83 sputum cultures, Streptococcus pneumoniae (12%) and Moraxella spp. (8%) were the commonest isolates. Of the 59 sputa examined with DNA PCR, 59.3% were positive for Chlamydia pneumoniae. Mycobacterium tuberculosis was isolated in 6/83 sputa. These results show that one in five febrile children with SCA had ACS. There were no clinical and laboratory characteristics distinctive of ACS, but cough and abnormalities on auscultation were associated with it. The high prevalence of Chlamydia pneumoniae in children with ACS in this setting warrants the addition of macrolides to treatment, and M. tuberculosis should be a differential diagnosis in sub-Saharan children with ACS.
Microstructure of Cheddar cheese: sample preparation and scanning electron microscopy

Summary: Three techniques were used for preparing Cheddar cheese specimens for examination by scanning electron microscopy. Resulting micrographs prepared from fresh cheese made with calf rennet revealed that while all the techniques were satisfactory, different structural features were observed depending upon the method used. A modified critical point drying technique and a freeze-drying method displayed surface features, whereas trypsin hydrolysis revealed internal microstructure. The surface microstructure of the cheese was found to be formed of protein aggregated and fused into structural units of 15 μm. The internal microstructure appeared to be formed of thin compact walls. Cross sections prepared by freeze-drying displayed arrays in the protein matrix and locations where fat globules had been embedded.
Imbalance rotating machine balancing

Imbalance analysis is essential for rotating machines. However, problems remain in terms of computational efficiency and accuracy. In the present paper, a new method is proposed for estimating the mass imbalance of a rotating shaft from its vibration signals; it detects both the imbalance mass and its phase position. Based on FFT signal processing, an estimator was designed to detect the imbalance mass, and an improved Lissajous diagram, combined with statistical analysis, makes it possible to compute the phase position of the mass imbalance efficiently and to locate it at a given position along the shaft. The proposed method was demonstrated and validated through several test examples.
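The abstract does not give the estimator itself, but the FFT step it describes can be sketched in a few lines: a mass imbalance shows up as the once-per-revolution (1x) component of the vibration signal, whose magnitude tracks the imbalance mass and whose angle gives the phase position. Everything below (sampling rate, rotation speed, signal) is a synthetic assumption for illustration, not data from the paper.

import numpy as np

fs = 2048                      # sampling rate, Hz (assumed)
f_rot = 25.0                   # shaft rotation frequency, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic vibration: a 1x imbalance component plus measurement noise.
true_amp, true_phase = 0.8, np.deg2rad(40.0)
x = true_amp * np.cos(2 * np.pi * f_rot * t + true_phase)
x += 0.1 * np.random.randn(t.size)

# FFT, then read off the bin nearest the rotation frequency.
spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_rot))

amp = 2 * np.abs(spec[k]) / x.size   # amplitude of the 1x component
phase = np.angle(spec[k])            # phase, i.e. angular location of the imbalance
print(amp, np.degrees(phase))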
He didn’t read intelligence reports and mixed up classified material with what he had seen in newspaper clips. He seemed confused about the structure and purpose of organizations and became overwhelmed when meetings covered multiple subjects. He blamed immigrants for nearly every societal problem and uttered racist sentiments with shocking callousness.

This isn’t how President Trump is depicted in a new book by former deputy FBI director Andrew McCabe. Instead, it’s McCabe’s account of what it was like to work for then-Attorney General Jeff Sessions. It’s a startling portrait that suggests that the Trump administration’s reputation for baseness and dysfunction has, if anything, been understated and too narrowly attributed to the president. Logs on the electronic tablets used to deliver the President’s Daily Brief to Sessions came back with no indication he had ever punched in the passcode, and the attorney general’s views on race and religion are described as reprehensible.

The description of Sessions is one of the most striking revelations in “The Threat,” a memoir that adds to a rapidly expanding collection of score-settling insider accounts of Trump-era Washington. McCabe’s is an important voice because of his position at the top of the bureau during a critical series of events, including the firing of FBI chief James Comey, the appointment of special counsel Robert S. Mueller, and the ensuing scorched-earth effort by Trump and his Republican allies to discredit the Russia probe and destroy public confidence in the nation’s top law enforcement agency.

The work is insightful and occasionally provocative. The subtitle, “How the FBI Protects America in the Age of Terror and Trump,” all but equates the danger posed by al-Qaeda and the Islamic State to that of the current president. But overall, the book isn’t the comprehensive account McCabe was presumably capable of delivering. He seems reluctant to reveal details about his role in conflicts at key moments, rarely shedding meaningful new light on areas of the Trump-Russia-FBI timeline established by Mueller, news organizations and previous authors.

McCabe is a keen observer of detail, particularly when it comes to the president’s pettiness. He describes how Trump arranges Oval Office encounters so that his advisers are forced to sit before him in “little schoolboy chairs” across the Resolute Desk. Prior presidents met with aides on couches in the center of the room, but Trump is always angling to make others feel smaller.

McCabe was known as a taciturn figure in the bureau, in contrast to the more garrulous Comey. His book reflects that penchant for brevity, with just 264 pages of text. Even so, he documents the president’s attempts to impair the Russia probe and his incessant attacks on the FBI, describing the stakes in sweeping, convincing language.

McCabe, of course, has some baggage that hurt the reputation he’d built over 22 years at the bureau and raised questions about his credibility. He was accused by the FBI inspector general of making false statements about contacts with the media. McCabe also has ample motivation to lash out at the president. He had been a target of Trump insults and taunts for nearly two years by the time he was fired, mainly because McCabe’s wife, a pediatrician, had run for state office in Virginia with the financial backing of longtime Clinton ally and former governor Terry McAuliffe.

But for all of the understandable alarm and indignation that McCabe registers, he seems, like other Trump dissidents, never to have found reason or opportunity to stand up to the president. There are paragraphs in “The Threat” that recount in detail McCabe’s inner outrage — but no indication that those thoughts escaped his lips in the presence of Trump.
What is it that makes otherwise proud public servants, Comey included, willing to subject themselves to Trump-inflicted indignities? Deference to the office? A determination to cling to power? A view of oneself as an indispensable institutional savior? At one point, McCabe puts his odds of getting the FBI director’s position at “one-in-ten-million,” but he goes through a job interview with Trump that feels like a charade from the outset.

One of the most frustrating aspects of “The Threat” is that it steers around scenes where McCabe might have provided more detail or insight. He is known to have had a series of tense interactions with Deputy Attorney General Rod Rosenstein in the aftermath of Comey’s firing, each suspicious of his counterpart and convinced that the other should recuse himself from the Russia probe.

McCabe skims over the conduct of two of his FBI subordinates, Lisa Page and Peter Strzok, whose text exchanges during an illicit affair included disparaging remarks about Trump and, when they were later revealed, fueled doubts about the organization’s impartiality. When first confronted with the details of the Page-Strzok texts, McCabe was asked by the inspector general whether he knew that Page — his closest legal adviser — had had interactions with the press. McCabe said he didn’t, though in fact he had authorized those contacts. In the book, he downplays that false testimony as a momentary mental lapse during a confusing conversation — which sounds a lot like the excuses offered by countless defendants who find themselves being prosecuted by the FBI for lying.

McCabe notes that he would like to “say much more” about his firing and questions of his candor toward other bureau officials, but that he is restrained from doing so because he is pursuing a lawsuit.

There is one area, however, in which he is considerably more forthcoming than Comey. He acknowledges that the bureau made major miscalculations in its handling of the Clinton probe in 2016 and its decision to discuss it publicly.
package gg.amy.soulfire.api.minecraft.physics;

import gg.amy.soulfire.annotations.Bridge;
import gg.amy.soulfire.annotations.BridgeMethod;

/**
 * @author amy
 * @since 5/30/21.
 */
@Bridge("net.minecraft.core.Vec3i")
public interface Vec3i {
    @BridgeMethod("getX()")
    int x();

    @BridgeMethod("getY()")
    int y();

    @BridgeMethod("getZ()")
    int z();
}
export { TreeViewComponent } from './treeview.component';
export { TreeViewModule } from './treeview.module';
export { NodeTemplateDirective } from './node-template.directive';
export { CheckDirective } from './check.directive';
export { DisableDirective } from './disable.directive';
export { ExpandDirective } from './expand.directive';
export { SelectDirective } from './selection/select.directive';
export { SelectableSettings } from './selection/selectable-settings';
export { CheckableSettings } from './checkable-settings';
export { CheckMode } from './check-mode';
export { CheckedState } from './checkbox/checked-state';
export { HierarchyBindingDirective } from './hierarchy-binding.directive';
export { FlatDataBindingDirective } from './flat-binding.directive';
export { ItemLookup, TreeItemLookup } from './treeitem-lookup.interface';
export { TreeItem } from './treeitem.interface';
export { NodeClickEvent } from './node-click-event.interface';
Microsoft exits mobile phone business almost completely, but leaves a lifeline

Finnish employees of Microsoft's mobile phones business were invited to the Microsoft Mobile headquarters in Finland for an internal event, where news was expected on the faltering smartphone business built around the Lumia line of products. The business has not been doing well since Microsoft bought it from Nokia for 5.5 billion euros. Just this week Gartner reported that the phones' global market share had dropped to less than a percent of the smartphone market.

Sources had stated that Microsoft would exit the business completely, but instead Microsoft announced that it is cutting down significantly. The company kept the door open to the business by leaving a handful of design and development staff at the Finnish HQ, while laying off some 1,500 people.

It's not the first time there has been bad news for ex-Nokians in Finland since the acquisition by Microsoft: in 2015 the company shed 2,300 jobs in Salo, Tampere and Espoo. Already in 2014 it shut down the Oulu unit and let go around a thousand employees. Last year the Salo research facility was closed completely. This was where Nokia sprang up in the early 1990s to become the largest mobile phone manufacturer for a number of years.

After selling the mobile phone business to Microsoft, Nokia continues as an independent company focusing on mobile networks and is in fact returning to the smartphone market by licensing its brand: Nokia will launch new smartphones in 2016.

In an internal announcement of the Microsoft Mobile restructuring, Terry Myerson wrote to the staff:

Team,

Last week we announced the sale of our feature phone business. Today I want to share that we are taking the additional step of streamlining our smartphone hardware business, and we anticipate this will impact up to 1,850 jobs worldwide, up to 1,350 of which are in Finland. These changes are incredibly difficult because of the impact on good people who have contributed greatly to Microsoft. Speaking on behalf of Satya and the entire Senior Leadership Team, we are committed to help each individual impacted with our support, resources, and respect.

For context, Windows 10 recently crossed 300 million monthly active devices, our Surface and Xbox customer satisfaction is at record levels, and HoloLens enthusiasts are developing incredible new experiences. Yet our phone success has been limited to companies valuing our commitment to security, manageability, and Continuum, and with consumers who value the same. Thus, we need to be more focused in our phone hardware efforts.

With this focus, our Windows strategy remains unchanged:

1. Universal apps. We have built an amazing platform, with a rich innovation roadmap ahead. Expanding the devices we reach and the capabilities for developers is our top priority.

2. We always take care of our customers, Windows phones are no exception. We will continue to update and support our current Lumia and OEM partner phones, and develop great new devices.

3. We remain steadfast in our pursuit of innovation across our Windows devices and our services to create new and delightful experiences. Our best work for customers comes from our device, platform, and service combination.
At the same time, our company will be pragmatic and embrace other mobile platforms with our productivity services, device management services, and development tools — regardless of a person’s phone choice, we want everyone to be able to experience what Microsoft has to offer them.

With that all said… I used the words “be more focused” above. This in fact describes what we are doing (we’re scaling back, but we’re not out!), but at the same time I don’t love it because it lacks the emotional impact of this decision. When I look back on our journey in mobility, we’ve done hard work and had great ideas, but have not always had the alignment needed across the company to make an impact.

At the same time, Ars Technica recently published a long story documenting our journey to create the universal platform for our developers. The story shows the real challenges we faced, and the grit required to get it done. The story closes with this: “And as long as it has taken the company, Microsoft has still arguably achieved something that its competitors have not… It took more than two decades to get there, but Microsoft still somehow got there first.”

For me, that’s what focus can deliver for us, and now we get to build on that foundation to build amazing products.

Terry

Written by Janita on Wednesday May 25, 2016
// BeforeDelete is called before deleting any record
func BeforeDelete(e *engine.Engine) error {
    if !scope.HasConditions(e, e.Scope.Value) {
        return errors.New("Missing WHERE clause while deleting")
    }
    return nil
}
Finds registered Hispanic voters anywhere from 46% to 120% more likely than non-Hispanics to lack state-issued ID needed to vote under the new law...

By Brad Friedman on 3/12/2012, 8:02pm PT

The good news for voters, of late, keeps coming --- at least against the tidal wave of GOP voter suppression laws instituted around the country by Republicans since taking over legislatures and executive branches in 2010. In addition to last week's temporary injunction of Wisconsin's GOP polling place Photo ID restriction, and today's permanent injunction of the same law by a second judge in a separate complaint (both judges found the law in strict violation of the state Constitution's ironclad guarantee of the right to vote), today also saw the U.S. Dept. of Justice blocking a similarly disenfranchising Photo ID restriction enacted last year by Texas Republicans.

Currently, according to data supplied to the DoJ by the state of TX, more than 600,000 legally registered voters do not possess the type of ID that would be required to vote under the law passed last year, which had been set to take effect before this year's Presidential Election. But it is the discriminatory effect of the new law which led the DoJ to nix the new changes to TX's voting laws.

Finding that the state's own statistics reveal legally registered Hispanic voters will be disproportionately disenfranchised by the TX law --- by anywhere from 46% to 120% over non-Hispanics, depending upon which set of data submitted by TX is used for the analysis --- the DoJ rejected the statute under Section 5 of the Voting Rights Act. That section of the federal law requires preclearance for new election laws in certain jurisdictions with a history of racial discrimination. Texas is one of those covered jurisdictions. Today, the DoJ objected to the new law after determining that the state had not met its "burden of showing that a submitted change [to an election law] has neither a discriminatory purpose nor a discriminatory effect"...

The finding was based on an analysis of two different sets of data submitted by TX to the DoJ for review in their Section 5 examination of the new law. Both sets were sent following a series of questions for TX by the DoJ last September, when the federal agency also rejected the state's new Congressional redistricting plan, as similarly passed by Republicans and signed into law by Gov. Rick Perry (R). At the time, the DoJ had determined that TX had created the new voting districts "at least in part, for the purpose of diminishing the ability of citizens of the United States, on account of race, color, or membership in a language minority group, to elect their preferred candidates of choice to Congress." The redistricting was found to be purposely discriminatory and was rejected by the DoJ under Section 5 of the VRA.

In reviewing the state's data on Photo ID --- no matter which of the two data sets was reviewed --- Hispanics, who make up approximately 22% of the state's legally registered voters, were found far and away more likely than non-Hispanics to be disenfranchised by the GOP's new polling place laws.
According to the 6-page letter [PDF] sent by the DoJ to the Director of Elections in the office of the TX Secretary of State today to inform him of the objection, while the state offered no explanation for the disparities in the two different data sets, examination of both indicates a seriously disproportional racial disadvantage for Hispanic voters under the new law [emphasis added]:

Starting our analysis with the September data set, 6.3 percent of Hispanic registered voters do not have the forms of identification described above, but only 4.3 percent of non-Hispanic registered voters are similarly situated. Therefore, a Hispanic voter is 46.5 percent more likely than a non-Hispanic voter to lack these forms of identification. In addition, although Hispanic voters represent only 21.8 percent of the registered voters in the state, Hispanic voters represent fully 29.0 percent of the registered voters without such identification.

Our analysis of the January data indicates that 10.8 percent of Hispanic registered voters do not have a driver’s license or personal identification card issued by DPS [Dept. of Public Safety], but only 4.9 percent of non-Hispanic registered voters do not have such identification. So, Hispanic registered voters are more than twice as likely as non-Hispanic registered voters to lack such identification. Under the data provided in January, Hispanics make up only 21.8 percent of all registered voters, but fully 38.2 percent of the registered voters who lack these forms of identification.

Thus, we conclude that the total number of registered voters who lack a driver’s license or personal identification card issued by DPS could range from 603,892 to 795,955. The disparity between the percentages of Hispanics and non-Hispanics who lack these forms of identification ranges from 46.5 to 120.0 percent. That is, according to the state’s own data, a Hispanic registered voter is at least 46.5 percent, and potentially 120.0 percent, more likely than a non-Hispanic registered voter to lack this identification. Even using the data most favorable to the state, Hispanics disproportionately lack either a driver’s license or a personal identification card issued by DPS, and that disparity is statistically significant.

The DoJ letter goes on to note that "The state has provided no data on whether African American or Asian registered voters are also disproportionately affected by" the new law.

In addition to the racial disparity, the DoJ also found that it would be costly for all registered voters without the requisite forms of identification to procure the so-called "free" Photo IDs offered by the state, but that the costs would be harder to bear for Hispanics, who represent a disproportionate number of those who are impoverished in Texas:

An applicant for an election identification certificate will be required to provide two pieces of secondary identification, or one piece of secondary identification and two supporting documents. If a voter does not possess any of these documents, the least expensive option will be to spend $22 on a copy of the voter’s birth certificate. There is a statistically significant correlation between the Hispanic population percentage of a county and the percentage of a county’s population that lives below the poverty line.
The legislature tabled amendments that would have prohibited state agencies from charging for any underlying documents needed to obtain an acceptable form of photographic identification.

Finally, the DoJ detailed three additional hurdles that all voters, but disproportionately Hispanic voters, according to the state's data, are likely to face in attempting to obtain their so-called "free" ID, if they do not already have one deemed suitable for voting purposes in Texas:

First, according to the most recent American Community Survey three-year estimates, 7.3 percent of Hispanic or Latino households do not have an available vehicle, as compared with only 3.8 percent of non-Hispanic white households that lack an available vehicle...

Second, in 81 of the state’s 254 counties, there are no operational driver’s license offices. The disparity in the rates between Hispanics and non-Hispanics with regard to the possession of either a driver’s license or personal identification card issued by DPS is particularly stark in counties without driver’s license offices. According to the September 2011 data, 10.0 percent of Hispanics in counties without driver’s license offices do not have either form of identification, compared to 5.5 percent of non-Hispanics. According to the January 2012 data, that comparison is 14.6 percent of Hispanics in counties without driver’s license offices, as compared to 8.8 percent of non-Hispanics. During the legislative hearings, one senator stated that some voters in his district could have to travel up to 176 miles roundtrip in order to reach a driver’s license office. The legislature tabled amendments that would have, for example, provided reimbursement to voters who live below the poverty line for travel expenses incurred in applying for the requisite identification.

The third and final point is the limited hours that such offices are open. Only 49 of the 221 currently open driver’s license offices across the state have extended hours. Even Senator Troy Fraser, the primary author of this legislation in the Senate, acknowledged during the legislative hearing that, "You gotta work to make sure that [DPS offices] are open." Despite the apparent recognition of the situation, the legislature tabled an amendment that would have required driver’s license offices to be open until 7:00 p.m. or later on at least one weekday and during four or more hours on at least two Saturdays each month.
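For readers checking the math, the 46.5% and 120% figures quoted above follow directly from the rates in the state's own submissions: under the September data, 6.3 / 4.3 ≈ 1.465, meaning Hispanic registered voters were about 46.5% more likely to lack the requisite ID; under the January data, 10.8 / 4.9 ≈ 2.20, or about 120% more likely.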
Many of the DoJ's findings mirror those found last year when they rejected the state of South Carolina's similarly disenfranchising polling place Photo ID restriction law. In that case, it was determined that the state's own data showed that registered African-American voters in the state were more than 20% more likely to lack the requisite Photo ID than registered White voters. Both Texas and South Carolina have appealed the DoJ rulings to the U.S. District Court in Washington D.C. In both cases, however, as noted by the DoJ today, "until the objection is withdrawn or a judgment from the United States District Court for the District of Columbia is obtained, the submitted changes continue to be legally unenforceable." Critics of polling place Photo ID laws have long charged, with a great deal of evidence to support their case, that such restrictions are specifically targeted toward the elderly, minorities and student voters, all of whom tend to lack the needed ID, and all of whom tend to vote disproportionately in favor of Democrats. Republican proponents of such laws claim they are necessary to combat voter fraud, though none of them have succeeded in demonstrating any actual incidents of polling place impersonation --- the only type of voter fraud that can possibly be deterred by such laws. Indiana's first-in-the-nation Photo ID restriction law was granted narrow approval by the U.S. Supreme Court in 2008. Since then, elderly nuns and students, among others who had previously been legal voters, have been disenfranchised by the law. Ironically, last month the Republican Sec. of State of Indiana, Charlie White, was found guilty of six felony charges, three of them for voter fraud, after it was determined that he'd registered to vote at an address where he did not live. The state's law did nothing to prevent White's fraud. The same year that Indiana's law was approved, political appointees in the Bush Administration's DoJ granted VRA preclearance to Georgia's Republican-approved Photo ID law against the advice of the career attorneys in the Civil Rights Division. Currently the DoJ is set to review a Photo ID referendum approved by voters in Mississippi last January, and another is pending in Alabama as well. Both states are among the 16 "covered jurisdictions" where preclearance is required under Section 5 of the VRA. In other states which are not covered by Section 5, such as Kansas, Tennessee, Rhode Island, Pennsylvania and Wisconsin, Republicans have passed Photo ID restrictions and those are currently being challenged at the state level. The BRAD BLOG's legal analyst Ernest A. Canning recently argued that the DoJ should intervene in such cases under Section 2 of the Voting Rights Act, which also prohibits racially discriminatory election laws, but where the burden is on the challenging party to demonstrate a discriminatory intent or effect, rather than on the jurisdiction itself --- as in Section 5 states --- to demonstrate new changes to the law are not discriminatory. As Ari Berman reported last week at Rolling Stone, a concerted effort by Republicans since taking power in many swing states in 2010 is currently underway to implement these restrictive laws before the 2012 Presidential Election. 
Most of the new laws, he notes, are based on the same template legislation offered by the American Legislative Exchange Council (ALEC), a Rightwing DC-based non-profit which is funded, in no small part, by major corporate interests and whose membership includes many of the elected state officials who have sponsored the new laws in more than a dozen states over the past year.
An Information-Theoretic Approach to Waveform Design for Active Incoherent Microwave Imaging Active incoherent imaging is a new microwave imaging method that combines spatial frequency sampling methods with active noise radar techniques. In this paper, we present a method for characterizing image reconstruction fidelity by analyzing the transmitted noise waveforms in terms of the mutual information between them. Different statistical distributions of the transmitted noise are characterized, and we investigate the effect on mutual information and image fidelity in terms of the bandwidth of the received waveforms. We show that Gaussian noise generally outperforms other distributions, and that the mutual information follows the general trend of image fidelity, indicating that mutual information can be a useful metric for designing transmit noise waveforms for active incoherent imaging.
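The abstract does not specify which estimator the authors use, but the core computation it relies on, scoring pairs of transmit noise waveforms by their mutual information, can be sketched with a simple histogram-based estimate. Everything below (function name, bin count, sample size, and the use of numpy) is an illustrative assumption, not taken from the paper:

import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram estimate of the mutual information (in bits) between two
    sampled waveforms. Coarse, but enough to rank candidate waveform pairs."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                   # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y, shape (1, bins)
    nz = pxy > 0                                # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 100_000
gauss_a, gauss_b = rng.normal(size=n), rng.normal(size=n)
print(mutual_information(gauss_a, gauss_b))     # ~0 (up to estimator bias) for independent waveforms

Since the paper's claim is only that mutual information tracks the general trend of image fidelity, a metric like this could be used to rank candidate transmit noise distributions before any measurement is made.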
// image-service/src/main/java/com/paoperez/imageservice/ImageMismatchException.java
package com.paoperez.imageservice;

class ImageMismatchException extends Exception {
  private static final long serialVersionUID = 1L;

  ImageMismatchException(String id, String imageId) {
    super(String.format("Image with id %s does not match image argument %s.", id, imageId));
  }
}
Commonwealth Transportation Board

The Commonwealth Transportation Board, formerly the State Highway and Transportation Board, regulates and funds transportation in Virginia. It oversees the Virginia Department of Transportation.

Highway rest stops

As of 2008, Virginia operated 42 rest stops and visitor centers along its interstate highways. In response to budget pressures, the Board sought public input and determined to reduce costs by closing 19 rest stops and expanding the truck parking lots at the remaining stops to accommodate the trucks that would otherwise park and sleep at the stops designated for closing. The Board also removed the two-hour limit on truck parking. The closures began on July 21, 2009. The Board's funding options were limited, because Federal law 23 U.S.C. §111 prohibits commercialization of interstate highway rest stops. The closures resulted in a $9 million annual saving.

At the Board's first meeting in January 2010, it reversed the decision to close the rest stops and reassigned $3 million in highway maintenance funds to keep the 19 rest stops open until the end of the fiscal year. No long-term funding source was identified.
Thirty-four patients with tumors of the cerebellum and the fourth ventricle were examined. All the patients underwent craniography and computed tomography (CT); 28 had magnetic resonance imaging (MRI). According to the histological pattern, the tumors were divided as follows: astrocytomas in 14 patients, hemangioblastomas in 8, ependymomas of the fourth ventricle in 5, medulloblastomas in 3, and choriopapillomas in 2; two patients were found to have metastases. MRI was shown to be the most informative technique in the diagnosis of bulky abnormal formations of the cerebellum and the fourth ventricle of the cerebrum. The maximum degree of contrast between normal and damaged tissue makes it possible to correctly establish the presence of a tumor and to determine its outlines and its relation to the brain stem and the cerebrospinal fluid system. The use of different pulse sequences with MRI makes a more valid differential diagnosis possible. Computed tomography is an additional technique for examining patients with tumors of the cerebellum and the fourth ventricle, used to obtain data on the plain characteristics of an abnormal focus and their changes after intravenous contrast enhancement. The application of a complex of radiation studies leads to the conclusion that bulky abnormal formations of the cerebellum and the fourth ventricle can, in all likelihood, be characterized morphologically.
Synthesis of extensive, possibly complete, DNA copies of poliovirus RNA in high yields and at high specific activities. The synthesis of large, possibly complete, complementary DNA (cDNA) copies of poliovirus RNA by avian myeloblastosis virus DNA polymerase is described. The cDNA consists of two size classes, the larger of which is approximately 7500 nucleotides. In the presence of excess deoxynucleoside triphosphates, ribonucleoside triphosphates, or sodium pyrophosphate, only the larger material is obtained. Yields of the large cDNA are 50-75% of the input RNA.
/**
 * The HTML5 Interactive tags (6 total)
 */
public void interactiveTags(TagInfo tagInfo) {
    tagInfo = new TagInfo("details", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.block);
    tagInfo.defineCloseBeforeCopyInsideTags(CLOSE_BEFORE_COPY_INSIDE_TAGS);
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    this.put("details", tagInfo);

    // <summary> must sit inside <details> and may not nest.
    tagInfo = new TagInfo("summary", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.block);
    tagInfo.defineCloseBeforeCopyInsideTags(CLOSE_BEFORE_COPY_INSIDE_TAGS);
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    tagInfo.defineRequiredEnclosingTags("details");
    tagInfo.defineForbiddenTags("summary");
    this.put("summary", tagInfo);

    tagInfo = new TagInfo("command", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.block);
    tagInfo.defineCloseBeforeCopyInsideTags(CLOSE_BEFORE_COPY_INSIDE_TAGS);
    tagInfo.defineForbiddenTags("command");
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    this.put("command", tagInfo);

    // <menu> only accepts <menuitem> and <li> children.
    tagInfo = new TagInfo("menu", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.block);
    tagInfo.defineCloseBeforeCopyInsideTags(CLOSE_BEFORE_COPY_INSIDE_TAGS);
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    tagInfo.defineAllowedChildrenTags("menuitem,li");
    this.put("menu", tagInfo);

    tagInfo = new TagInfo("menuitem", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.block);
    tagInfo.defineCloseBeforeCopyInsideTags(CLOSE_BEFORE_COPY_INSIDE_TAGS);
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    tagInfo.defineRequiredEnclosingTags("menu");
    this.put("menuitem", tagInfo);

    tagInfo = new TagInfo("dialog", ContentType.all, BelongsTo.BODY, false, false, false, CloseTag.required, Display.any);
    tagInfo.defineCloseBeforeTags(CLOSE_BEFORE_TAGS);
    this.put("dialog", tagInfo);
}
Myeloid differentiation primary response gene 88- and toll-like receptor 2-deficient mice are susceptible to infection with aerosolized Legionella pneumophila. BACKGROUND Toll-like receptors (TLRs) are a family of proteins that orchestrate innate immune responses to microbes. Although pathogens are typically recognized by multiple TLRs, the specific roles of individual TLRs in mediating host protection during in vivo infection are not well understood. METHODS We compared the roles of myeloid differentiation primary response gene 88 (MyD88), which regulates signaling through multiple TLRs, and TLR2 in mediating resistance to aerosolized Legionella pneumophila infection in vivo. RESULTS In comparison with wild-type mice, MyD88-deficient (MyD88(-/-)) mice had dramatically higher bacterial counts in the lungs, with decreased neutrophil counts in the bronchoalveolar lavage fluid as well as absent cytokine and chemokine production at early time points. By day 6 of infection, the MyD88(-/-) mice developed organizing pneumonia with dissemination of L. pneumophila to the lymph nodes and spleen. TLR2(-/-) mice were also more susceptible to L. pneumophila, with higher bacterial counts in the lung. However, TLR2(-/-) mice produced proinflammatory cytokines, recruited neutrophils to the lung alveoli, and cleared the infection without progression to organizing pneumonia and disseminated disease. CONCLUSIONS These results suggest that a MyD88-dependent pathway is required for eradication of L. pneumophila and prevention of organizing pneumonia. TLR2 mediates partial resistance to L. pneumophila, which indicates that additional MyD88-dependent, TLR2-independent pathways are essential for full protection.
Inspection of clinical investigations by the German health authorities. Based on the regulations of the German Drug Law of 1976 (Arzneimittelgesetz, AMG), the inspections of clinical investigations by the competent health authorities focus on the protection of the human being taking part in the trial. Standards for the planning, methodology, conduct, report and documentation of clinical trials are laid down in a guideline (Grundsätze für die ordnungsgemässe Durchführung der klinischen Prüfung, Dec. 1987), a kind of national GCP standard which has to be respected by sponsors, physicians and authorities, not only for inspection but also for drug registration.
Small Intestinal Disaccharidase Activity and Ileal Villus Height Are Increased in Piglets Consuming Formula Containing Recombinant Human Insulin-Like Growth Factor-I The effect of orally administered IGF-I on intestinal development was assessed in piglets. Cesarean-derived, colostrum-deprived piglets received formula alone or formula containing 65 nM (500 µg/L) of recombinant human IGF-I. IGF-I intake averaged 200 µg/kg/d. On d 7 and 14 postpartum, piglets were killed, organs were removed and weighed, and tissue and blood samples were collected. The small intestine was divided into 13 segments that were weighed and measured. A sample of each segment was fixed in formalin, and the mucosa was scraped for enzyme analyses. Food intake, body and organ weights, intestinal weight, length, protein, DNA and RNA content did not differ between the treatment groups. Serum IGF-I, IGF-II, and IGF-binding protein profiles and tissue IGF-binding protein mRNA expression were also comparable between the treatment groups. In contrast, intestinal enzymes and villus height were increased by oral IGF-I. Lactase was ≈2-fold higher (p ≤ 0.05) in the jejunum and proximal ileum, and sucrase was ≈50% higher (p ≤ 0.05) in the jejunum of IGF-I-treated animals than in controls. Villus height in the terminal ileum was ≈50% greater in IGF-I-treated animals than in controls (p = 0.03). In conclusion, orally administered IGF-I at 200 µg/kg did not affect whole body or organ growth or serum IGF-I concentrations; however, intestinal disaccharidase activity and ileal villus growth were responsive to orally administered IGF-I, supporting a potential role for milk-borne IGF-I in neonatal intestinal development.
package com.zooplus.fumbliebackend.model.dto.placeOrder;

import io.swagger.annotations.ApiModel;
import lombok.Data;

import java.math.BigDecimal;
import java.util.List;

@Data
@ApiModel
public class PlaceOrderOrderDto {
    private List<PlaceOrderOrderItemDto> orderItems;
    private BigDecimal totalAmount;
    private String currency;
    private PlaceOrderAddressDto address;
}
#!/usr/bin/env python
"""
Convert an Athena Project file to HDF5
"""
import os

import h5py
from silx.io.dictdump import dicttoh5

from larch.io.athena_project import AthenaProject
from larch.utils.logging import getLogger

_logger = getLogger("athena_to_hdf5", level="INFO")


def athena_to_hdf5(
    filename,
    fileout=None,
    overwrite=False,
    match=None,
    do_preedge=True,
    do_bkg=True,
    do_fft=True,
    use_hashkey=False,
    _larch=None,
):
    """Read Athena project file (.prj) and write to HDF5 (.h5)

    Arguments:
        filename (string): name of Athena Project file
        fileout (None or string): name of the output file [None -> filename_root.h5]
        overwrite (boolean): force overwrite if fileout exists [False]
        match (string): pattern to use to limit imported groups (see Note 1)
        do_preedge (bool): whether to do pre-edge subtraction [True]
        do_bkg (bool): whether to do XAFS background subtraction [True]
        do_fft (bool): whether to do XAFS Fast Fourier transform [True]
        use_hashkey (bool): whether to use Athena's hash key as the group
                            name instead of the Athena label [False]

    Returns:
        None, writes HDF5 file.

    Notes:
        1. There is currently a bug in h5py, track_order is ignored for the
           root group: https://github.com/h5py/h5py/issues/1471
    """
    aprj = AthenaProject(_larch=_larch)
    aprj.read(
        filename,
        match=match,
        do_preedge=do_preedge,
        do_bkg=do_bkg,
        do_fft=do_fft,
        use_hashkey=use_hashkey,
    )
    adict = aprj.as_dict()

    if fileout is None:
        froot = filename.split(".")[0]
        fileout = f"{froot}.h5"

    if os.path.isfile(fileout) and os.access(fileout, os.R_OK):
        _logger.info(f"{fileout} exists")
        _fileExists = True
        if overwrite is False:
            _logger.info(f"overwrite is {overwrite} -> nothing to do!")
            return
    else:
        _fileExists = False

    if overwrite and _fileExists:
        os.remove(fileout)

    h5out = h5py.File(fileout, mode="a", track_order=True)
    create_ds_args = {"track_order": True}
    dicttoh5(adict, h5out, create_dataset_args=create_ds_args)
    h5out.close()
    _logger.info(f"Athena project converted to {fileout}")


if __name__ == "__main__":
    # some tests while devel
    _curdir = os.path.dirname(os.path.realpath(__file__))
    _exdir = os.path.join(os.path.dirname(os.path.dirname(_curdir)), "examples", "pca")
    fnroot = "cyanobacteria"
    atpfile = os.path.join(_exdir, f"{fnroot}.prj")

    if 0:
        from larch import Interpreter

        aprj = AthenaProject(_larch=Interpreter())
        aprj.read(atpfile, do_bkg=False)  # there is currently a bug in do_bkg!
        adict = aprj.as_dict()

    if 0:
        athena_to_hdf5(atpfile, fileout=f"{fnroot}.h5", overwrite=True)
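If the script is saved as a module, usage amounts to a single call. A minimal sketch, assuming the script above is saved as athena_to_hdf5.py and a local Athena file exists (both file names here are hypothetical):

from athena_to_hdf5 import athena_to_hdf5

# Writes my_project.h5 next to the input file (fileout derived from the .prj name).
athena_to_hdf5("my_project.prj")

# Explicit output name, overwrite allowed, import only groups matching a pattern.
athena_to_hdf5("my_project.prj", fileout="out.h5", overwrite=True, match="scan*")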
// LoadConfig tries to read the config file at location.
func LoadConfig(location string) error {
	bs, err := ioutil.ReadFile(location)
	if err != nil {
		return err
	}

	fmt.Printf("Init logger from: %s\n", location)

	lines := strings.Split(string(bs), "\n")
	LoadAppenders(lines)

	return nil
}
I often notice people looking for reliable WordPress resources. I’ve answered as much on Quora before, and the newly relaunched Code Poet recently asked people to share resources on Twitter. But I really just need to blog about all of the resources I use most often. I’ve been meaning to make this for a long time, so here I am. Below is a thorough, though not totally complete, list of resources for support, news, tutorials, resources, etc. It’s basically a breakdown of the WordPress oriented websites I keep up with. Duh: The Codex is always the first place for WordPress documentation, and you can often find answers in WP.org forum posts, but there are also other ways. Other avenues for support: WordPress Stack Exchange is an excellent resource, especially for advanced developers. I find myself on this site more and more each month. WPQuestions.com is a good place if you need quick support. It’s a fee model where you set a price you’re willing to pay for an answer, and you choose from the answers people submit. #wordpress on IRC is quite helpful for beginners, and they could always use a hand answering questions if you’re willing. Resources: WordPress on Github. Go straight to the source! WordPress function reference (worth differentiating from the general codex – a must use resource) WordPress template tags (same as above – a must use resource) QueryPosts.com is a new project by Rarst that is an excellent resource. It’s truly a “better WordPress code reference.” It currently boasts a “what’s hot” section and all WP functions. Hooks and classes are on the way, and highly anticipated (by me at least). This is just a big WordPress functions reference. Handy. The Adam Brown hooks database is a great resource for advanced developers. Lorelle’s WordPress Resources can be helpful. I like to go to the new plugins archive periodically. Pretty fun to see what people are cooking up. People / blogs on WP If I listed every blog I read, this list would simply be too long. So here are a couple links to other posts that list all that stuff, and then I’ll share a few of my personal favorites. WPCandy has a list of people and blogs about WordPress to follow. It’s a big one. Here’s another list of blogs by WPMU that you can follow if you are super information hungry. And here are some special shout-outs to my really go-to WordPress oriented blogs. Please don’t be offended if you’re not included in this section. Like I said, it’s not a full list of sites I like: No single individual is doing more awesome stuff in the plugin sphere than Pippin Williamson. No one. Subscribe to his blog. Gravity Wiz is meta blogging at its finest. Only Gravity Forms tutorials, and I think that is just plain awesome. And speaking of Gravity Forms, if you haven’t bookmarked this post by Rocket Genius on targeting elements in CSS, then you’re welcome. Justin Tadlock may only post here and there, but when he does, his posts become the go-to resource on whatever he’s written about. I don’t even know Paul’s last name, but he posts frequently with nice, quick snippets. And they are always practical. Paulund is a great resource you may not have heard of. Tom McFarland is a freaking guru. He often posts on wp.Tuts+, and his are about the only articles on that site I’m not critical of. When he posts, he cross posts the links to his personal blog, where he also shares other nuggets. He’s the primary coder of Standard Theme, in case you needed more reason to trust me. Follow him and be smarter. 
Not really sure where to put this one, but I build all my custom post types and taxonomies over at Themergency. Try it, it’s great. And you get to own your code when you use it, unlike most WYSIWYG post type builders. Brad made these generators with multi-page gravity forms, and they spit out the raw code for use in your plugins and themes. Neat. And you should know all of these people, but I’d feel remiss if I didn’t at least mention them, as they’ve each helped me so much over the years. You should without a doubt be keeping up with Yoast, Nacin, Mark, and Otto. There are also some good blogs for following what’s happening in WordPress development land: WPDevel, Make Themes, Make UI, Make Plugins, Make Accessibility… just Make All The Things. Get your news from WPCandy, WPTavern, WP Mods, WPForce, the new WPRealm, WPLift, WPMU, and of course WordPress.org. Matt likes to make news too. Obviously, I definitely have a preference for WPCandy. And of course I’m unfairly pigeonholing all of these blogs to “news”, but really each does much more than that, and are all valuable to the community. I tentatively recommend some tutorial sites. Please take these for what they’re worth: sometimes the tutorials are great, sometimes.. not so much. But I still subscribe to them, and I think they all have good intentions. So check out the WordPress section of Smashing Magazine, DigWP, WPBeginner, and most cautiously, wp.Tuts+. I debated heavily whether to include these, but I think they need to be here. However, please use what you find on these sites with extreme caution. Forums: Theme Hybrid is a paid, $25 / year forum. Justin is one of the best, and has a general WordPress section if you don’t use his themes. It’s my go to place to “hang out”, but I also use his framework. This is how I learned pretty much everything I know, and I can’t recommend it enough. CSS Tricks often has good WP stuff, and there are good opportunities to help people that are more casual WordPress developers. The WordPress Reddit subgroup (surprisingly good resources pop up here). The WPTavern forum is good sometimes. The WPCandy forums are not bad for non-support, but WordPress oriented things. Code snippets / resource collections I like: Michael Fields’ code snippets are super handy. I often find myself back here. He’s a great theme and plugin developer with a lot of really beautiful code. WP Snippets is good sometimes, but as with any pure snippets site, use them with caution. Bill Erickson’s website. He’s a great dev, and even more helpful if you’re a Genesis fan. He has loads of snippets and little plugins I use all the time. The new Code Poet is awfully promising for ebooks, resources, and periodic news stuff. This is the section I know I’m not putting enough resources that I use. So many people are sharing great work that they do. Help me remember what/who I’m forgetting in the comments : ) Company Blogs There are a few company blogs I particularly like. Check out WooThemes, WPEngine, Page.ly, Sucuri, and StudioPress. But beware that each of these comes laced with product info. They’re still worth subscribing to. Newsletters worth getting: Daily Documentation Newsletter – this one is a lot of fun to take a glance at as I start my day. wpMail.me is an indispensable resource on what’s fresh in WordPress. I really love getting it every week. Here’s a secret of mine: subscribe to WordPress on Google Alerts. I get it once weekly, and usually find at least one interesting link. 
There’s a good bit of trash in it too, but it’s worth it for the random good stuff you find. Misc. The WordPress Twitter community is quite helpful. There are a lot of people I recommend following. I’m pretty much only following WordPress or web people from my primary Twitter account these days, so you could start following the people I follow, and you may as well start with me : ) Twitter is without a doubt my most active space in the WordPress community, and a great place to learn and find great stuff. In summary: This is longer than I thought it would be. Now I’m tired. I’m missing quite a lot I’m sure, but there are perhaps some new ones in here you will find useful. If there are any resources I haven’t listed that especially compel you, please feel free to list them below. However, try not to spam me with your own website unless you’ve really got the goods to back it up : )
package crazypants.enderio.base.gui.handler;

import net.minecraft.client.gui.GuiScreen;
import net.minecraft.entity.player.EntityPlayer;
import net.minecraft.inventory.Container;
import net.minecraft.util.EnumFacing;
import net.minecraft.util.math.BlockPos;
import net.minecraft.world.World;
import net.minecraftforge.fml.relauncher.Side;
import net.minecraftforge.fml.relauncher.SideOnly;

import javax.annotation.Nonnull;
import javax.annotation.Nullable;

public interface IEioGuiHandler {

  @Nullable
  Object getGuiElement(boolean server, @Nonnull EntityPlayer player, @Nonnull World world, @Nonnull BlockPos pos,
      @Nullable EnumFacing facing, int param1, int param2, int param3);

  interface WithPos extends IEioGuiHandler.WithServerComponent {

    @Override
    default @Nullable Object getGuiElement(boolean server, @Nonnull EntityPlayer player, @Nonnull World world,
        @Nonnull BlockPos pos, @Nullable EnumFacing facing, int param1, int param2, int param3) {
      if (world.isBlockLoaded(pos)) {
        if (server) {
          return getServerGuiElement(player, world, pos, facing, param1);
        } else {
          return getClientGuiElement(player, world, pos, facing, param1);
        }
      } else {
        return null;
      }
    }

    @Nullable
    Container getServerGuiElement(@Nonnull EntityPlayer player, @Nonnull World world, @Nonnull BlockPos pos,
        @Nullable EnumFacing facing, int param1);

    @Nullable
    @SideOnly(Side.CLIENT)
    GuiScreen getClientGuiElement(@Nonnull EntityPlayer player, @Nonnull World world, @Nonnull BlockPos pos,
        @Nullable EnumFacing facing, int param1);
  }

  /**
   * This marker interface is needed for GUIs that are opened server-side. It will trigger the proper permissions to be
   * created.
   *
   * @author <NAME>
   */
  interface WithServerComponent extends IEioGuiHandler {
  }
}
package com.qinjun.autotest.tstest.cases;

import com.qinjun.autotest.tstest.annotation.DemoTestData;
import com.qinjun.autotest.tstest.base.BaseCase;
import org.testng.annotations.Test;

// Case-related info, like test data
@DemoTestData(path = "data/testdata.json")
public class DemoCase extends BaseCase {

    @Test
    public void test() {
    }
}
Mixing together video games and collectible toys, Activision? How very Captain Power of you. As many will no doubt remember, the good captain and his soldiers of the future wound up defending clearance aisles from evil until fading into obscurity. Should a similar fate befall Skylanders: Spyro's Adventure? It's time for a Gut Check.

Evan Narcisse, relatively new father: Skylanders smacks too much of an attempt to grab cash from two markets at once. The cross-platform portable save technology might be great, and the game may even be fun, but I feel like this is one instance where voting with your wallet is super-important. The calculus behind this mash-up of collectible toys and video games just seems to be nakedly concerned with setting up another franchise, one that will engender a COD-style, rabidly loyal audience in young kids. The tech's the main story here, and you can get the kind of gaming experience offered in Skylanders from other games that don't ask you to buy a bunch of plastic. No.

Michael Fahey, even newer father twice over: I was right there with Evan as I began playing Skylanders this weekend, smirking at the obvious attempt to transform one revenue stream into two. As I played the game I discovered doors that couldn't be opened without a certain type of Skylander. There were special collectible powers for characters I did not own, complete with a dynamic video commercial aimed at showing kids how amazing this figure they did not own was. These are moments manufactured specifically to get children to beg their parents for more toys. How diabolically evil. Well it won't work on me, Activision. Yesterday I went to Toys"R"Us and purchased $40 worth of additional Skylanders stuff. Damn it. Yes.

Tristan, 10-year-old gamer, fan of Age of Empires Online (which he can play) and Call of Duty (which he can't): It's a fun game. I like that you get to run around with characters that are also toys. I also liked that there were a lot of guys to choose from. I like how when they go up in levels they get new sorts of attacks. Yes.

Brian, father of Tristan: I sort of didn't like the game initially because it felt a bit light, a bit repetitive and had unimpressive graphics, but those physical representations of your characters sure do make a difference. After playing the game for several hours with my son, I was enjoying the expanding gameplay and abilities. Later he walked the rest of the family through one of our adventures using one of the toys, a dinner plate, napkin, fork and salt shaker. This is a video game that can ignite your child's imagination. Yes.

I was almost positive this Gut Check would have ended up a no before I started gathering opinions, but I underestimated the power of fatherhood and (in my case) rabid toy collector mentality. I am a weak man. Skylanders: Spyro's Adventure game bundles and toys are available now for the Xbox 360, PlayStation 3, Wii, PC, and 3DS.
import sys
import logging
import argparse
import decimal
import json
import datetime
import random

import matplotlib.pyplot
import matplotlib.ticker

import prosperpy
import prosperpy.traders

import alpha

LOGGER = logging.getLogger(__name__)


def init_logging(options):
    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(logging.DEBUG)
    fmt = '%(asctime)s|%(levelname)s|%(name)s|%(message)s|%(filename)s|%(lineno)s'
    handler.setFormatter(logging.Formatter(fmt=fmt))

    if options.verbosity:
        level = logging.DEBUG
    elif options.quiet:
        level = logging.ERROR
    else:
        level = logging.INFO

    if options.verbosity >= 2:
        prosperpy.engine.set_debug(True)

    logging.root.setLevel(level)
    logging.root.addHandler(handler)


def get_messages():
    seed = 'b6dcff52-3d0d-469a-9bdc-0b92be517ae7'
    random.seed(seed)

    with open('data.json', 'r') as data_file:
        raw = json.load(data_file)

    # Aggregate raw ticks into 60-second buckets: first open, last close,
    # and the min/max over each bucket.
    data = []
    aux = []
    timestamp = raw[0][0]
    for item in raw:
        aux.append(item)
        if aux[-1][0] > timestamp + 60:
            timestamp = timestamp + 60
            data.append([
                aux[0][0],
                aux[0][1],
                aux[-1][2],
                min([i[3] for i in aux]),
                max([i[4] for i in aux]),
                aux[0][5]])
            aux = []

    messages = []
    for item in data:
        date = datetime.datetime.fromtimestamp(int(item[0]))
        bid = float(item[4])
        ask = bid + random.choice([-1, -0.5, 0, 0.5, 1])
        price = random.choice([bid, ask])
        message = json.dumps(dict(
            time=date.isoformat(),
            best_ask=ask,
            best_bid=bid,
            price=price,
            last_size=item[5]))
        messages.append(message)

    return messages


class Plot:
    curves = {}
    actions = []

    @classmethod
    def add_action(cls, action, x, y):
        cls.actions.append((action, x, y))

    @classmethod
    def plotme(cls):
        fig = matplotlib.pyplot.figure()
        plot = fig.add_subplot(111)

        for x, y, color in cls.curves.values():
            plot.plot(x, y, color=color)

        formatter = matplotlib.ticker.FormatStrFormatter('$%1.2f')
        plot.yaxis.set_major_formatter(formatter)
        for tick in plot.yaxis.get_major_ticks():
            tick.label1On = False
            tick.label2On = True
            tick.label2.set_color('green')

        for action, timestamp, price in cls.actions:
            bbox = dict(boxstyle="round", fc="0.8")
            arrowprops = dict(arrowstyle="->",
                              connectionstyle="angle,angleA=0,angleB=90,rad=10")
            offset = 64
            plot.annotate('{} ({:.2f})'.format(action, price), (timestamp, price),
                          xytext=(-2 * offset, offset), textcoords='offset points',
                          bbox=bbox, arrowprops=arrowprops)

        matplotlib.pyplot.show()


def the_past(feed):
    Plot.curves['price'] = ([], [], 'blue')

    for message in get_messages():
        feed.consume(message)
        Plot.curves['price'][0].append(feed.tick.timestamp)
        Plot.curves['price'][1].append(feed.tick.price)

    Plot.plotme()


def real_time(feed):
    prosperpy.engine.create_task(feed())
    prosperpy.engine.run_forever()
    prosperpy.engine.close()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-v', '--verbose', action='count', dest='verbosity', default=0)
    parser.add_argument('-q', '--quiet', action='store_true', dest='quiet', default=False)
    options = parser.parse_args()
    init_logging(options)

    product = 'ETH-USD'
    auth = prosperpy.gdax.auth.GDAXAuth(
        '727677e8492b36cd13b3b9325d20a5b7',
        'G/EGnZRm5MG+gxZCgw1CIOlLBQcViib78486kJhsvAYkiyojJTI5EsLTEVc0UGw/W1Ko5xhqwmFOUIQGzigJwQ==',
        'hus9I7src8U2')
    api = prosperpy.gdax.api.GDAXAPI(auth)
    run(product, api)


def run(product, api):
    feed = prosperpy.gdax.TickGDAXFeed(product)
    trader = alpha.CoastlineTrader(product, feed, api, decimal.Decimal('0.005'))
    trader.plot = Plot.add_action
    feed.traders.append(trader)
    #feed.traders.append(prosperpy.traders.ADXTrader(product, feed, api))
    #feed.traders.append(prosperpy.traders.RSITrader(product, feed, api))
    #feed.traders.append(prosperpy.traders.HMATrader(product, feed, api))
    #feed.traders.append(prosperpy.traders.SMATrader(product, feed, api))
    #feed.traders.append(prosperpy.traders.PercentageTrader(decimal.Decimal('0.8'), product, feed, api))
    #feed.traders.append(prosperpy.traders.RegressorTrader(sklearn.ensemble.RandomForestRegressor, product, feed, api))
    #feed.traders.append(prosperpy.traders.RegressorTrader(sklearn.ensemble.ExtraTreesRegressor, product, feed, api))
    #feed.traders.append(prosperpy.traders.RegressorTrader(sklearn.ensemble.AdaBoostRegressor, product, feed, api))
    #feed.traders.append(prosperpy.traders.RegressorTrader(sklearn.ensemble.BaggingRegressor, product, feed, api))
    #feed.traders.append(prosperpy.traders.RegressorTrader(sklearn.ensemble.GradientBoostingRegressor, product, feed, api))
    #feed.traders.append(prosperpy.traders.HODLTrader(product, feed, api))
    #feed.traders.append(prosperpy.traders.PerfectTrader(product, feed, api))

    real_time(feed)
    #the_past(feed)


if __name__ == '__main__':
    main()
A pathway for wisdom-focused education ABSTRACT Interest in the topic of wisdom-focused education has so far not resulted in empirically validated programs for teaching wisdom. To start filling this void, we explore the emerging empirical evidence concerning the fundamental elements required for understanding how one can foster wisdom, with a particular focus on wise reasoning. We define wise reasoning through a combination of intellectual humility, recognition of a world in flux/change, open-mindedness to diverse viewpoints, and search for compromise/integration of diverse perspectives. In this article, we review evidence concerning how wise reasoning can be facilitated through experiences, teaching materials, environments and cognitive strategies. We also focus on educators, reviewing emerging evidence on how the process of explaining and guiding others impacts one's wisdom. We conclude by discussing the development of wisdom-focused education, proposing that greater attention to the situational demands and the variability in wisdom-related characteristics across social contexts should play a critical role in its development.
Alexander Payne follows The Descendants with a bittersweet, sparse comedy. Having contrasted vibrant Hawaiian hues with bitter emotional truths in The Descendants, Alexander Payne pulls off the equal-but-opposite trick in this unassuming road movie. Filmed in monochrome and scripted with a sparseness that borders on severity, Nebraska's austere surface belies the warmth and sweetness at its core. Bruce Dern plays Woody, a crotchety ageing alcoholic with the early signs of dementia, in a performance that won the 'Best Actor' gong at this year's Cannes Film Festival. When he receives a letter telling him that he's won a million dollars, he becomes single-minded in his determination to go to Nebraska and collect his winnings. After failing to convince his father that this is plainly a scam, his adult son David (Will Forte, proving the maxim that career comedians can segue into drama with ease) agrees to be his chauffeur for the 800-mile trip, eager for the bonding opportunity and the change of scenery. What initially promises to be a downbeat two-hander about a father and son struggling for connection is complicated by the addition of Woody's equally cantankerous wife (June Squibb) and a lengthy pit stop in his hometown, where an assortment of friends and relatives are waiting all-too-eagerly to congratulate him on his supposed earnings, with hands outstretched. As with The Descendants' battle over ancestral land, there's a shrewdly drawn sense here of how flimsy a thing family loyalty can be when people get dollar signs in their eyes. Bob Nelson's screenplay is more about the gaps between speech than speech, and Dern's performance is the same - he does wonders during a largely silent sequence in which the trio visit Woody's childhood home. Meanwhile David is the latest in a line of quietly disillusioned Payne protagonists, but there's none of the caustic rage or overtly depressive quality we've come to expect, and, one brief scene with an ex aside, we get little of his personal life. There's something intensely moving about watching this doomed journey play out against the barren, curiously evocative landscape of roadside middle America. Nebraska builds to a stirring third act that's comprised so entirely of small, loaded moments that you don't immediately register its emotional impact. It's a beguiling and intimate change of pace for Payne.
I enjoy a drink now and again, but can't tolerate drunkenness. I enjoy doing a bit of DIY around the house and, to really relax, sitting next to a river or dam having a braai with my companion. Life has its ups and downs; I want to make a big change in my life in the near future, though what it is I don't know. Oh yes, I also enjoy cooking. I'm looking for a partner who is relaxed and happy to take life as it comes. She must be a person who enjoys the outdoors and enjoys doing this with her partner. I am the type of guy that will help with the baking, the dishes, and just share everything in life.
/*
 * SOSMCOperators.h
 *
 *  Created on: Nov 9, 2015
 *      Author: <NAME>
 */

#ifndef GPUPROCGENETICS_SRC_SOSMC_OPERATORS_H_
#define GPUPROCGENETICS_SRC_SOSMC_OPERATORS_H_

#include <iostream>

#include "GeneticAlgoConfig.h"
#include "Genome_IF.h"
#include "Mutation_IF.h"
#include "Population_IF.h"
#include "Selection_IF.h"
#include "SymbolManager.h"

namespace PGA {

class SoSmcSelection: public Selection_IF {
private:
    ProceduralAlgorithm* _algo;
    PopulationConf* _population;
    SelectionConf* _conf;
    float _fitness_offset;
    float _sum_fitness;

public:
    SoSmcSelection(ProceduralAlgorithm* base): _algo(base) {}

    const std::string getName() { return "Roulette"; }

    void prepareForSelection() {
        // get the negative fitness of the worst individual
        _fitness_offset = _population->impl()->getGenome(_population->impl()->activeGenerationSize() - 1)->getEvalPoints() * -1;
        // i only need this offset if the worst fitness is negative
        if (_fitness_offset > 0.0) {
            _fitness_offset = 0.0f;
        }

        // normalize the fitness so they sum to 1
        _sum_fitness = 0.0f;
        for (int i = 0; i < _population->impl()->activeGenerationSize(); ++i) {
            _sum_fitness += (_fitness_offset + _population->impl()->getGenome(i)->getEvalPoints());
        }
    }

    void init(ProceduralAlgorithm* base) {
        _population = _algo->get<PopulationConf*>("population");
        _conf = _algo->get<SelectionConf*>("resampling");
    }

    Genome_IF* selectGenome() {
        float random_val = getRandomValue(0.0f, 1.0f);
        float relative_fitness = 0.0f;
        Genome_IF* candidate = nullptr;

        // start summing the fitness values from the lowest until we reach the random value
        for (int i = 0; i < _population->impl()->activeGenerationSize(); ++i) {
            candidate = _population->impl()->getGenome(i);
            float current_fitness = (_fitness_offset + candidate->getEvalPoints()) * 1.0f / _sum_fitness;
            relative_fitness += current_fitness;
            if (random_val <= relative_fitness) {
                break;
            }
        }

        if (candidate == nullptr) {
            std::cout << "nothing selected! " << std::endl;
        }
        return candidate;
    }
};

class SoSmcGrowMutation: public Mutation_IF {
private:
    ProceduralAlgorithm* _algo;
    MutationConf* _conf;

public:
    SoSmcGrowMutation(ProceduralAlgorithm* base): _algo(base) {}
    ~SoSmcGrowMutation() {}

    void init(ProceduralAlgorithm* base);

    bool doMutation() { return true; }

    // Add a single element
    void mutate(GrammarConf& grammar, Genome_IF* genome);

    // do a grow-mutation
    void grow(GrammarConf& conf, Genome_IF* genome) { return mutate(conf, genome); }

    void do_move(GrammarConf& grammar, Genome_IF* genome, Gene_IF* gene) {
        throw(std::runtime_error("do_move not implemented for this Mutation-Operator!"));
    }
};

} // namespace PGA

#endif /* GPUPROCGENETICS_SRC_SOSMC_OPERATORS_H_ */
// Copyright 2018 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// vm_tools/sommelier/sommelier-compositor.c

#include "sommelier.h"

#include <assert.h>
#include <errno.h>
#include <gbm.h>
#include <limits.h>
//#include <linux/virtwl.h>
#include <pixman.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <wayland-client.h>
#include <wayland-util.h>

#include "drm-server-protocol.h"
#include "linux-dmabuf-unstable-v1-client-protocol.h"
#include "viewporter-client-protocol.h"

#define MIN_SIZE (INT_MIN / 10)
#define MAX_SIZE (INT_MAX / 10)

#define DMA_BUF_SYNC_READ (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)
#define DMA_BUF_SYNC_RW (DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE)
#define DMA_BUF_SYNC_START (0 << 2)
#define DMA_BUF_SYNC_END (1 << 2)

#define DMA_BUF_BASE 'b'
#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)

struct sl_host_region {
  struct sl_context* ctx;
  struct wl_resource* resource;
  struct wl_region* proxy;
};

struct sl_host_compositor {
  struct sl_compositor* compositor;
  struct wl_resource* resource;
  struct wl_compositor* proxy;
};

struct sl_output_buffer {
  struct wl_list link;
  uint32_t width;
  uint32_t height;
  uint32_t format;
  struct wl_buffer* internal;
  struct sl_mmap* mmap;
  struct pixman_region32 damage;
  struct sl_host_surface* surface;
};

struct dma_buf_sync {
  uint64_t flags;
};

static void sl_dmabuf_sync(int fd, uint64_t flags) {
  struct dma_buf_sync sync = {0};
  int rv;

  sync.flags = flags;
  do {
    rv = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
  } while (rv == -1 && errno == EINTR);
}

static void sl_dmabuf_begin_write(int fd) {
  sl_dmabuf_sync(fd, DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE);
}

static void sl_dmabuf_end_write(int fd) {
  sl_dmabuf_sync(fd, DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE);
}

#if 0
static void sl_virtwl_dmabuf_sync(int fd, __u32 flags) {
  struct virtwl_ioctl_dmabuf_sync sync = {0};
  int rv;

  sync.flags = flags;
  rv = ioctl(fd, VIRTWL_IOCTL_DMABUF_SYNC, &sync);
  assert(!rv);
  UNUSED(rv);
}

static void sl_virtwl_dmabuf_begin_write(int fd) {
  sl_virtwl_dmabuf_sync(fd, DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE);
}

static void sl_virtwl_dmabuf_end_write(int fd) {
  sl_virtwl_dmabuf_sync(fd, DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE);
}
#endif

static uint32_t sl_gbm_format_for_shm_format(uint32_t format) {
  switch (format) {
    case WL_SHM_FORMAT_NV12:
      return GBM_FORMAT_NV12;
    case WL_SHM_FORMAT_RGB565:
      return GBM_FORMAT_RGB565;
    case WL_SHM_FORMAT_ARGB8888:
      return GBM_FORMAT_ARGB8888;
    case WL_SHM_FORMAT_ABGR8888:
      return GBM_FORMAT_ABGR8888;
    case WL_SHM_FORMAT_XRGB8888:
      return GBM_FORMAT_XRGB8888;
    case WL_SHM_FORMAT_XBGR8888:
      return GBM_FORMAT_XBGR8888;
  }
  assert(0);
  return 0;
}

static uint32_t sl_drm_format_for_shm_format(int format) {
  switch (format) {
    case WL_SHM_FORMAT_NV12:
      return WL_DRM_FORMAT_NV12;
    case WL_SHM_FORMAT_RGB565:
      return WL_DRM_FORMAT_RGB565;
    case WL_SHM_FORMAT_ARGB8888:
      return WL_DRM_FORMAT_ARGB8888;
    case WL_SHM_FORMAT_ABGR8888:
      return WL_DRM_FORMAT_ABGR8888;
    case WL_SHM_FORMAT_XRGB8888:
      return WL_DRM_FORMAT_XRGB8888;
    case WL_SHM_FORMAT_XBGR8888:
      return WL_DRM_FORMAT_XBGR8888;
  }
  assert(0);
  return 0;
}

static void sl_output_buffer_destroy(struct sl_output_buffer* buffer) {
  wl_buffer_destroy(buffer->internal);
  sl_mmap_unref(buffer->mmap);
  pixman_region32_fini(&buffer->damage);
  wl_list_remove(&buffer->link);
  free(buffer);
}

static void sl_output_buffer_release(void* data, struct wl_buffer* buffer) {
struct sl_output_buffer* output_buffer = wl_buffer_get_user_data(buffer); struct sl_host_surface* host_surface = output_buffer->surface; wl_list_remove(&output_buffer->link); wl_list_insert(&host_surface->released_buffers, &output_buffer->link); } static const struct wl_buffer_listener sl_output_buffer_listener = { sl_output_buffer_release}; static void sl_host_surface_destroy(struct wl_client* client, struct wl_resource* resource) { wl_resource_destroy(resource); } static void sl_host_surface_attach(struct wl_client* client, struct wl_resource* resource, struct wl_resource* buffer_resource, int32_t x, int32_t y) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_host_buffer* host_buffer = buffer_resource ? wl_resource_get_user_data(buffer_resource) : NULL; struct wl_buffer* buffer_proxy = NULL; struct sl_window* window; double scale = host->ctx->scale; host->current_buffer = NULL; if (host->contents_shm_mmap) { sl_mmap_unref(host->contents_shm_mmap); host->contents_shm_mmap = NULL; } if (host_buffer) { host->contents_width = host_buffer->width; host->contents_height = host_buffer->height; buffer_proxy = host_buffer->proxy; if (host_buffer->shm_mmap) host->contents_shm_mmap = sl_mmap_ref(host_buffer->shm_mmap); } if (host->contents_shm_mmap) { while (!wl_list_empty(&host->released_buffers)) { host->current_buffer = wl_container_of(host->released_buffers.next, host->current_buffer, link); if (host->current_buffer->width == host_buffer->width && host->current_buffer->height == host_buffer->height && host->current_buffer->format == host_buffer->shm_format) { break; } sl_output_buffer_destroy(host->current_buffer); host->current_buffer = NULL; } // Allocate new output buffer. if (!host->current_buffer) { size_t width = host_buffer->width; size_t height = host_buffer->height; uint32_t shm_format = host_buffer->shm_format; size_t bpp = sl_shm_bpp_for_shm_format(shm_format); size_t num_planes = sl_shm_num_planes_for_shm_format(shm_format); host->current_buffer = malloc(sizeof(struct sl_output_buffer)); assert(host->current_buffer); wl_list_insert(&host->released_buffers, &host->current_buffer->link); host->current_buffer->width = width; host->current_buffer->height = height; host->current_buffer->format = shm_format; host->current_buffer->surface = host; pixman_region32_init_rect(&host->current_buffer->damage, 0, 0, MAX_SIZE, MAX_SIZE); switch (host->ctx->shm_driver) { case SHM_DRIVER_DMABUF: { struct zwp_linux_buffer_params_v1* buffer_params; struct gbm_bo* bo; int stride0; int fd; bo = gbm_bo_create(host->ctx->gbm, width, height, sl_gbm_format_for_shm_format(shm_format), GBM_BO_USE_SCANOUT | GBM_BO_USE_LINEAR); stride0 = gbm_bo_get_stride(bo); fd = gbm_bo_get_fd(bo); buffer_params = zwp_linux_dmabuf_v1_create_params( host->ctx->linux_dmabuf->internal); zwp_linux_buffer_params_v1_add(buffer_params, fd, 0, 0, stride0, 0, 0); host->current_buffer->internal = zwp_linux_buffer_params_v1_create_immed( buffer_params, width, height, sl_drm_format_for_shm_format(shm_format), 0); zwp_linux_buffer_params_v1_destroy(buffer_params); host->current_buffer->mmap = sl_mmap_create( fd, height * stride0, bpp, 1, 0, stride0, 0, 0, 1, 0); host->current_buffer->mmap->begin_write = sl_dmabuf_begin_write; host->current_buffer->mmap->end_write = sl_dmabuf_end_write; gbm_bo_destroy(bo); } break; #if 0 case SHM_DRIVER_VIRTWL: { size_t size = host_buffer->shm_mmap->size; struct virtwl_ioctl_new ioctl_new = {.type = VIRTWL_IOCTL_NEW_ALLOC, .fd = -1, .flags = 0, .size = size}; struct 
wl_shm_pool* pool; int rv; rv = ioctl(host->ctx->virtwl_fd, VIRTWL_IOCTL_NEW, &ioctl_new); assert(rv == 0); UNUSED(rv); pool = wl_shm_create_pool(host->ctx->shm->internal, ioctl_new.fd, size); host->current_buffer->internal = wl_shm_pool_create_buffer( pool, 0, width, height, host_buffer->shm_mmap->stride[0], shm_format); wl_shm_pool_destroy(pool); host->current_buffer->mmap = sl_mmap_create( ioctl_new.fd, size, bpp, num_planes, 0, host_buffer->shm_mmap->stride[0], host_buffer->shm_mmap->offset[1] - host_buffer->shm_mmap->offset[0], host_buffer->shm_mmap->stride[1], host_buffer->shm_mmap->y_ss[0], host_buffer->shm_mmap->y_ss[1]); } break; case SHM_DRIVER_VIRTWL_DMABUF: { uint32_t drm_format = sl_drm_format_for_shm_format(shm_format); struct virtwl_ioctl_new ioctl_new = { .type = VIRTWL_IOCTL_NEW_DMABUF, .fd = -1, .flags = 0, .dmabuf = { .width = width, .height = height, .format = drm_format}}; struct zwp_linux_buffer_params_v1* buffer_params; size_t size; int rv; rv = ioctl(host->ctx->virtwl_fd, VIRTWL_IOCTL_NEW, &ioctl_new); if (rv) { fprintf(stderr, "error: virtwl dmabuf allocation failed: %s\n", strerror(errno)); _exit(EXIT_FAILURE); } size = ioctl_new.dmabuf.stride0 * height; buffer_params = zwp_linux_dmabuf_v1_create_params( host->ctx->linux_dmabuf->internal); zwp_linux_buffer_params_v1_add(buffer_params, ioctl_new.fd, 0, ioctl_new.dmabuf.offset0, ioctl_new.dmabuf.stride0, 0, 0); if (num_planes > 1) { zwp_linux_buffer_params_v1_add(buffer_params, ioctl_new.fd, 1, ioctl_new.dmabuf.offset1, ioctl_new.dmabuf.stride1, 0, 0); size = MAX(size, ioctl_new.dmabuf.offset1 + ioctl_new.dmabuf.stride1 * height / host_buffer->shm_mmap->y_ss[1]); } host->current_buffer->internal = zwp_linux_buffer_params_v1_create_immed(buffer_params, width, height, drm_format, 0); zwp_linux_buffer_params_v1_destroy(buffer_params); host->current_buffer->mmap = sl_mmap_create( ioctl_new.fd, size, bpp, num_planes, ioctl_new.dmabuf.offset0, ioctl_new.dmabuf.stride0, ioctl_new.dmabuf.offset1, ioctl_new.dmabuf.stride1, host_buffer->shm_mmap->y_ss[0], host_buffer->shm_mmap->y_ss[1]); host->current_buffer->mmap->begin_write = sl_virtwl_dmabuf_begin_write; host->current_buffer->mmap->end_write = sl_virtwl_dmabuf_end_write; } break; #endif } assert(host->current_buffer->internal); assert(host->current_buffer->mmap); wl_buffer_set_user_data(host->current_buffer->internal, host->current_buffer); wl_buffer_add_listener(host->current_buffer->internal, &sl_output_buffer_listener, host->current_buffer); } } x /= scale; y /= scale; if (host->current_buffer) { assert(host->current_buffer->internal); wl_surface_attach(host->proxy, host->current_buffer->internal, x, y); } else { wl_surface_attach(host->proxy, buffer_proxy, x, y); } wl_list_for_each(window, &host->ctx->windows, link) { if (window->host_surface_id == wl_resource_get_id(resource)) { while (sl_process_pending_configure_acks(window, host)) continue; break; } } } static void sl_host_surface_damage(struct wl_client* client, struct wl_resource* resource, int32_t x, int32_t y, int32_t width, int32_t height) { struct sl_host_surface* host = wl_resource_get_user_data(resource); double scale = host->ctx->scale; struct sl_output_buffer* buffer; int64_t x1, y1, x2, y2; wl_list_for_each(buffer, &host->busy_buffers, link) { pixman_region32_union_rect(&buffer->damage, &buffer->damage, x, y, width, height); } wl_list_for_each(buffer, &host->released_buffers, link) { pixman_region32_union_rect(&buffer->damage, &buffer->damage, x, y, width, height); } x1 = x; y1 = y; x2 = x1 + width; y2 
= y1 + height; // Enclosing rect after scaling and outset by one pixel to account for // potential filtering. x1 = MAX(MIN_SIZE, x1 - 1) / scale; y1 = MAX(MIN_SIZE, y1 - 1) / scale; x2 = ceil(MIN(x2 + 1, MAX_SIZE) / scale); y2 = ceil(MIN(y2 + 1, MAX_SIZE) / scale); wl_surface_damage(host->proxy, x1, y1, x2 - x1, y2 - y1); } static void sl_frame_callback_done(void* data, struct wl_callback* callback, uint32_t time) { struct sl_host_callback* host = wl_callback_get_user_data(callback); wl_callback_send_done(host->resource, time); wl_resource_destroy(host->resource); } static const struct wl_callback_listener sl_frame_callback_listener = { sl_frame_callback_done}; static void sl_host_callback_destroy(struct wl_resource* resource) { struct sl_host_callback* host = wl_resource_get_user_data(resource); wl_callback_destroy(host->proxy); wl_resource_set_user_data(resource, NULL); free(host); } static void sl_host_surface_frame(struct wl_client* client, struct wl_resource* resource, uint32_t callback) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_host_callback* host_callback; host_callback = malloc(sizeof(*host_callback)); assert(host_callback); host_callback->resource = wl_resource_create(client, &wl_callback_interface, 1, callback); wl_resource_set_implementation(host_callback->resource, NULL, host_callback, sl_host_callback_destroy); host_callback->proxy = wl_surface_frame(host->proxy); wl_callback_set_user_data(host_callback->proxy, host_callback); wl_callback_add_listener(host_callback->proxy, &sl_frame_callback_listener, host_callback); } static void sl_host_surface_set_opaque_region( struct wl_client* client, struct wl_resource* resource, struct wl_resource* region_resource) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_host_region* host_region = region_resource ? wl_resource_get_user_data(region_resource) : NULL; wl_surface_set_opaque_region(host->proxy, host_region ? host_region->proxy : NULL); } static void sl_host_surface_set_input_region( struct wl_client* client, struct wl_resource* resource, struct wl_resource* region_resource) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_host_region* host_region = region_resource ? wl_resource_get_user_data(region_resource) : NULL; wl_surface_set_input_region(host->proxy, host_region ? host_region->proxy : NULL); } static void sl_host_surface_commit(struct wl_client* client, struct wl_resource* resource) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_viewport* viewport = NULL; struct sl_window* window; if (!wl_list_empty(&host->contents_viewport)) viewport = wl_container_of(host->contents_viewport.next, viewport, link); if (host->contents_shm_mmap) { uint8_t* src_addr = host->contents_shm_mmap->addr; uint8_t* dst_addr = host->current_buffer->mmap->addr; size_t* src_offset = host->contents_shm_mmap->offset; size_t* dst_offset = host->current_buffer->mmap->offset; size_t* src_stride = host->contents_shm_mmap->stride; size_t* dst_stride = host->current_buffer->mmap->stride; size_t* y_ss = host->contents_shm_mmap->y_ss; size_t bpp = host->contents_shm_mmap->bpp; size_t num_planes = host->contents_shm_mmap->num_planes; double contents_scale_x = host->contents_scale; double contents_scale_y = host->contents_scale; double contents_offset_x = 0.0; double contents_offset_y = 0.0; pixman_box32_t* rect; int n; // Determine scale and offset for damage based on current viewport. 
if (viewport) { double contents_width = host->contents_width; double contents_height = host->contents_height; if (viewport->src_x >= 0 && viewport->src_y >= 0) { contents_offset_x = wl_fixed_to_double(viewport->src_x); contents_offset_y = wl_fixed_to_double(viewport->src_y); } if (viewport->dst_width > 0 && viewport->dst_height > 0) { contents_scale_x *= contents_width / viewport->dst_width; contents_scale_y *= contents_height / viewport->dst_height; // Take source rectangle into account when both destionation size and // source rectangle are set. If only source rectangle is set, then // it determines the surface size so it can be ignored. if (viewport->src_width >= 0 && viewport->src_height >= 0) { contents_scale_x *= wl_fixed_to_double(viewport->src_width) / contents_width; contents_scale_y *= wl_fixed_to_double(viewport->src_height) / contents_height; } } } if (host->current_buffer->mmap->begin_write) host->current_buffer->mmap->begin_write(host->current_buffer->mmap->fd); rect = pixman_region32_rectangles(&host->current_buffer->damage, &n); while (n--) { int32_t x1, y1, x2, y2; // Enclosing rect after applying scale and offset. x1 = rect->x1 * contents_scale_x + contents_offset_x; y1 = rect->y1 * contents_scale_y + contents_offset_y; x2 = rect->x2 * contents_scale_x + contents_offset_x + 0.5; y2 = rect->y2 * contents_scale_y + contents_offset_y + 0.5; x1 = MAX(0, x1); y1 = MAX(0, y1); x2 = MIN(host->contents_width, x2); y2 = MIN(host->contents_height, y2); if (x1 < x2 && y1 < y2) { size_t i; for (i = 0; i < num_planes; ++i) { uint8_t* src_base = src_addr + src_offset[i]; uint8_t* dst_base = dst_addr + dst_offset[i]; uint8_t* src = src_base + y1 * src_stride[i] + x1 * bpp; uint8_t* dst = dst_base + y1 * dst_stride[i] + x1 * bpp; int32_t width = x2 - x1; int32_t height = (y2 - y1) / y_ss[i]; size_t bytes = width * bpp; while (height--) { memcpy(dst, src, bytes); dst += dst_stride[i]; src += src_stride[i]; } } } ++rect; } if (host->current_buffer->mmap->end_write) host->current_buffer->mmap->end_write(host->current_buffer->mmap->fd); pixman_region32_clear(&host->current_buffer->damage); wl_list_remove(&host->current_buffer->link); wl_list_insert(&host->busy_buffers, &host->current_buffer->link); } if (host->contents_width && host->contents_height) { double scale = host->ctx->scale * host->contents_scale; if (host->viewport) { int width = host->contents_width; int height = host->contents_height; // We need to take the client's viewport into account while still // making sure our scale is accounted for. if (viewport) { if (viewport->src_x >= 0 && viewport->src_y >= 0 && viewport->src_width >= 0 && viewport->src_height >= 0) { wp_viewport_set_source(host->viewport, viewport->src_x, viewport->src_y, viewport->src_width, viewport->src_height); // If the source rectangle is set and the destination size is not // set, then src_width and src_height should be integers, and the // surface size becomes the source rectangle size. width = wl_fixed_to_int(viewport->src_width); height = wl_fixed_to_int(viewport->src_height); } // Use destination size as surface size when set. if (viewport->dst_width >= 0 && viewport->dst_height >= 0) { width = viewport->dst_width; height = viewport->dst_height; } } wp_viewport_set_destination(host->viewport, ceil(width / scale), ceil(height / scale)); } else { wl_surface_set_buffer_scale(host->proxy, scale); } } // No need to defer client commits if surface has a role. E.g. is a cursor // or shell surface. 
if (host->has_role) { wl_surface_commit(host->proxy); // GTK determines the scale based on the output the surface has entered. // If the surface has not entered any output, then have it enter the // internal output. TODO(reveman): Remove this when surface-output tracking // has been implemented in Chrome. if (!host->has_output) { struct sl_host_output* output; wl_list_for_each(output, &host->ctx->host_outputs, link) { if (output->internal) { wl_surface_send_enter(host->resource, output->resource); host->has_output = 1; break; } } } } else { // Commit if surface is associated with a window. Otherwise, defer // commit until window is created. wl_list_for_each(window, &host->ctx->windows, link) { if (window->host_surface_id == wl_resource_get_id(resource)) { if (window->xdg_surface) { wl_surface_commit(host->proxy); if (host->contents_width && host->contents_height) window->realized = 1; } break; } } } if (host->contents_shm_mmap) { if (host->contents_shm_mmap->buffer_resource) wl_buffer_send_release(host->contents_shm_mmap->buffer_resource); sl_mmap_unref(host->contents_shm_mmap); host->contents_shm_mmap = NULL; } } static void sl_host_surface_set_buffer_transform(struct wl_client* client, struct wl_resource* resource, int32_t transform) { struct sl_host_surface* host = wl_resource_get_user_data(resource); wl_surface_set_buffer_transform(host->proxy, transform); } static void sl_host_surface_set_buffer_scale(struct wl_client* client, struct wl_resource* resource, int32_t scale) { struct sl_host_surface* host = wl_resource_get_user_data(resource); host->contents_scale = scale; } static void sl_host_surface_damage_buffer(struct wl_client* client, struct wl_resource* resource, int32_t x, int32_t y, int32_t width, int32_t height) { assert(0); } static const struct wl_surface_interface sl_surface_implementation = { sl_host_surface_destroy, sl_host_surface_attach, sl_host_surface_damage, sl_host_surface_frame, sl_host_surface_set_opaque_region, sl_host_surface_set_input_region, sl_host_surface_commit, sl_host_surface_set_buffer_transform, sl_host_surface_set_buffer_scale, sl_host_surface_damage_buffer}; static void sl_destroy_host_surface(struct wl_resource* resource) { struct sl_host_surface* host = wl_resource_get_user_data(resource); struct sl_window *window, *surface_window = NULL; struct sl_output_buffer* buffer; wl_list_for_each(window, &host->ctx->windows, link) { if (window->host_surface_id == wl_resource_get_id(resource)) { surface_window = window; break; } } if (surface_window) { surface_window->host_surface_id = 0; sl_window_update(surface_window); } if (host->contents_shm_mmap) sl_mmap_unref(host->contents_shm_mmap); while (!wl_list_empty(&host->released_buffers)) { buffer = wl_container_of(host->released_buffers.next, buffer, link); sl_output_buffer_destroy(buffer); } while (!wl_list_empty(&host->busy_buffers)) { buffer = wl_container_of(host->busy_buffers.next, buffer, link); sl_output_buffer_destroy(buffer); } while (!wl_list_empty(&host->contents_viewport)) wl_list_remove(host->contents_viewport.next); if (host->viewport) wp_viewport_destroy(host->viewport); wl_surface_destroy(host->proxy); wl_resource_set_user_data(resource, NULL); free(host); } static void sl_surface_enter(void* data, struct wl_surface* surface, struct wl_output* output) { struct sl_host_surface* host = wl_surface_get_user_data(surface); struct sl_host_output* host_output = wl_output_get_user_data(output); wl_surface_send_enter(host->resource, host_output->resource); host->has_output = 1; } static void 
sl_surface_leave(void* data,
                 struct wl_surface* surface,
                 struct wl_output* output) {
  struct sl_host_surface* host = wl_surface_get_user_data(surface);
  struct sl_host_output* host_output = wl_output_get_user_data(output);

  wl_surface_send_leave(host->resource, host_output->resource);
}

static const struct wl_surface_listener sl_surface_listener = {
    sl_surface_enter, sl_surface_leave};

static void sl_region_destroy(struct wl_client* client,
                              struct wl_resource* resource) {
  wl_resource_destroy(resource);
}

static void sl_region_add(struct wl_client* client,
                          struct wl_resource* resource,
                          int32_t x,
                          int32_t y,
                          int32_t width,
                          int32_t height) {
  struct sl_host_region* host = wl_resource_get_user_data(resource);
  double scale = host->ctx->scale;
  int32_t x1, y1, x2, y2;

  x1 = x / scale;
  y1 = y / scale;
  x2 = (x + width) / scale;
  y2 = (y + height) / scale;

  wl_region_add(host->proxy, x1, y1, x2 - x1, y2 - y1);
}

static void sl_region_subtract(struct wl_client* client,
                               struct wl_resource* resource,
                               int32_t x,
                               int32_t y,
                               int32_t width,
                               int32_t height) {
  struct sl_host_region* host = wl_resource_get_user_data(resource);
  double scale = host->ctx->scale;
  int32_t x1, y1, x2, y2;

  x1 = x / scale;
  y1 = y / scale;
  x2 = (x + width) / scale;
  y2 = (y + height) / scale;

  wl_region_subtract(host->proxy, x1, y1, x2 - x1, y2 - y1);
}

static const struct wl_region_interface sl_region_implementation = {
    sl_region_destroy, sl_region_add, sl_region_subtract};

static void sl_destroy_host_region(struct wl_resource* resource) {
  struct sl_host_region* host = wl_resource_get_user_data(resource);

  wl_region_destroy(host->proxy);
  wl_resource_set_user_data(resource, NULL);
  free(host);
}

static void sl_compositor_create_host_surface(struct wl_client* client,
                                              struct wl_resource* resource,
                                              uint32_t id) {
  struct sl_host_compositor* host = wl_resource_get_user_data(resource);
  struct sl_host_surface* host_surface;
  struct sl_window *window, *unpaired_window = NULL;

  host_surface = malloc(sizeof(*host_surface));
  assert(host_surface);

  host_surface->ctx = host->compositor->ctx;
  host_surface->contents_width = 0;
  host_surface->contents_height = 0;
  host_surface->contents_scale = 1;
  wl_list_init(&host_surface->contents_viewport);
  host_surface->contents_shm_mmap = NULL;
  host_surface->has_role = 0;
  host_surface->has_output = 0;
  host_surface->last_event_serial = 0;
  host_surface->current_buffer = NULL;
  wl_list_init(&host_surface->released_buffers);
  wl_list_init(&host_surface->busy_buffers);
  host_surface->resource = wl_resource_create(
      client, &wl_surface_interface, wl_resource_get_version(resource), id);
  wl_resource_set_implementation(host_surface->resource,
                                 &sl_surface_implementation, host_surface,
                                 sl_destroy_host_surface);
  host_surface->proxy = wl_compositor_create_surface(host->proxy);
  wl_surface_set_user_data(host_surface->proxy, host_surface);
  wl_surface_add_listener(host_surface->proxy, &sl_surface_listener,
                          host_surface);
  host_surface->viewport = NULL;
  if (host_surface->ctx->viewporter) {
    host_surface->viewport = wp_viewporter_get_viewport(
        host_surface->ctx->viewporter->internal, host_surface->proxy);
  }

  wl_list_for_each(window, &host->compositor->ctx->unpaired_windows, link) {
    if (window->host_surface_id == id) {
      unpaired_window = window;
      break;
    }
  }

  if (unpaired_window)
    sl_window_update(unpaired_window);
}

static void sl_compositor_create_host_region(struct wl_client* client,
                                             struct wl_resource* resource,
                                             uint32_t id) {
  struct sl_host_compositor* host = wl_resource_get_user_data(resource);
  struct sl_host_region* host_region;

  host_region =
malloc(sizeof(*host_region)); assert(host_region); host_region->ctx = host->compositor->ctx; host_region->resource = wl_resource_create( client, &wl_region_interface, wl_resource_get_version(resource), id); wl_resource_set_implementation(host_region->resource, &sl_region_implementation, host_region, sl_destroy_host_region); host_region->proxy = wl_compositor_create_region(host->proxy); wl_region_set_user_data(host_region->proxy, host_region); } static const struct wl_compositor_interface sl_compositor_implementation = { sl_compositor_create_host_surface, sl_compositor_create_host_region}; static void sl_destroy_host_compositor(struct wl_resource* resource) { struct sl_host_compositor* host = wl_resource_get_user_data(resource); wl_compositor_destroy(host->proxy); wl_resource_set_user_data(resource, NULL); free(host); } static void sl_bind_host_compositor(struct wl_client* client, void* data, uint32_t version, uint32_t id) { struct sl_context* ctx = (struct sl_context*)data; struct sl_host_compositor* host; host = malloc(sizeof(*host)); assert(host); host->compositor = ctx->compositor; host->resource = wl_resource_create(client, &wl_compositor_interface, MIN(version, ctx->compositor->version), id); wl_resource_set_implementation(host->resource, &sl_compositor_implementation, host, sl_destroy_host_compositor); host->proxy = wl_registry_bind(wl_display_get_registry(ctx->display), ctx->compositor->id, &wl_compositor_interface, ctx->compositor->version); wl_compositor_set_user_data(host->proxy, host); } struct sl_global* sl_compositor_global_create(struct sl_context* ctx) { return sl_global_create(ctx, &wl_compositor_interface, ctx->compositor->version, ctx, sl_bind_host_compositor); }
import { GlobalCssKeyword, PropType, ValueOrFunc } from '../shared'

type FontOpticalSizingPropValue = 'auto' | 'none' | GlobalCssKeyword

export const serializeFontOpticalSizing = (type: PropType) => (
  x: FontOpticalSizingPropValue
) => ({
  [type === 'inline' ? 'fontOpticalSizing' : 'font-optical-sizing']: x,
})

/**
 * @category RBDeclarationTypeAlias
 */
export type FontOpticalSizingDeclaration = {
  /**
   * Maps to CSS's **`font-optical-sizing`** property
   * @initial auto
   * @definition https://www.w3.org/TR/2020/WD-css-fonts-4-20201117/#propdef-font-optical-sizing
   * @specification {@link https://www.w3.org/TR/2020/WD-css-fonts-4-20201117/ CSS Fonts Module Level 4}
   */
  fontOpticalSizing: FontOpticalSizingPropValue
}

export type FontOpticalSizingDeclarationJSS = {
  fontOpticalSizing: ValueOrFunc<FontOpticalSizingPropValue>
}
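// A quick usage sketch (hedged: only the 'inline' variant of PropType is
// visible in this file, so the 'css' literal below is an assumed example of
// the non-inline variant):
//
//   const jss = serializeFontOpticalSizing('inline')('none')
//   // => { fontOpticalSizing: 'none' }
//
//   const css = serializeFontOpticalSizing('css' as PropType)('auto')
//   // => { 'font-optical-sizing': 'auto' }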
Osteosarcoma of bone and its important recognizable varieties

Osteosarcoma of bone is a recognizable entity if the histopathologist designates tumors as such when their malignant cells produce osteoid substance, even if only in small foci. Such a definition distinguishes this lesion from other sarcomas that arise in bone, especially chondrosarcoma and fibrosarcoma. There is a general tendency to consider that osteosarcomas represent a stereotyped form of disease to which new modalities of treatment can be applied and assessed. The question of whether a given osseous lesion is actually malignant, and not a benign neoplasm or even a reactive non-neoplastic condition simulating a malignant tumor, may be difficult for the histopathologist. Pathologists without considerable experience in the diagnosis of bone tumors find this question especially vexing. The establishment of a valid diagnosis of osteosarcoma introduces the additional problem that the 11 varieties considered in this paper may pose significant recognizable variations in the clinical behavior of the disease. It is apparent that the physician must recognize the known clinicopathologic and prognostic factors of these subtypes in his assessment of the overall problem.
India, home to more than one billion people, has been a land of religious diversity for thousands of years. It is the birthplace of four religions, Hinduism, Buddhism, Jainism, and Sikhism, and has also assimilated two major faiths that were imported to its shores, Islam and Christianity. It is also home to the Parsees (Zoroastrians) who came from Persia a thousand years ago, and a small Jewish community has lived in Kerala since Roman times. Today, the majority of India's population is Hindu, but with 156 million Muslims, it is also the second-largest Muslim country in the world. In addition, 24 million Christians, 19 million Sikhs, 8 million Buddhists, and 4 million Jains, along with members of many other lesser-known faiths and sects, are a vital part of the nation's multicultural fabric. The complexities of maintaining cohesion within such a pluralist society have been grappled with throughout India's history, from the Buddhist Mauryan emperor Ashoka to the Muslim rulers of the Mughal Empire, the Christian viceroys of the British Empire and today's democratically elected leaders. Since independence, India's commitment to secularism has remained resolute; its constitution does not recognize a specific religion, but faith remains a crucial part of everyday life, as evidenced by the abundance of flourishing temples, mosques, churches, shrines, and pilgrimage sites found all over the country.

Derived from the Sanskrit word "jina," meaning "to conquer," Jainism teaches that all life forms have an eternal soul bound by karma in a never-ending cycle of rebirth. Through nonviolence or ahimsa, the soul can break free of this cycle and achieve kaivalya. Traditions and ideas central to Jainism can be traced to the 7th century BCE, but Mahavira, the last of Jainism's 24 great spiritual teachers, formalized them into the Jain religion in the 6th century BCE. Some scholars see the roots of the faith as far back as the Indus civilization in Gujarat. Central to Jainism are five vows: nonviolence (ahimsa), truthfulness (satya), non-stealing (asteya), chastity (brahmacharya), and non-possession or non-attachment (aparigraha). As a manifestation of ahimsa, Jain monks wear nets over their mouths and sweep the street with their clothing so as to avoid harming insects, thereby avoiding the karma that would come from injuring even the smallest life forms. Mahavira, whose teachings are recorded in the Agamas texts, taught liberation through the three principles of right faith (samyak darshana), right knowledge (samyak jnana), and right conduct (samyak charitra). Between the first and second centuries BCE, the Jains divided into an orthodox sect, the Digambara ("sky-clad"), whose followers claimed adherence to Mahavira's philosophy by going without clothes, and the Shvetambara ("white-clad") sect. Approximately four million Jains practice the religion worldwide, and important places of pilgrimage include Mt. Abu in Rajasthan, site of five ornate Jain temples, and Sravanabelagola, site of a 57.5-foot statue of Gomateshvara (Bahubali), son of Jainism's first spiritual leader, or tirthankara. Today Sravanabelagola is the site of the Mahamastak Abhishek, the biggest Jain religious festival, which takes place every 12 years; the last one was held in 2007.

Although a tiny minority in modern India, Jews have a long history on the subcontinent, and in fact, it is home to several distinct Jewish communities. The first to arrive, possibly in the last centuries BCE, were the Jews who settled in Cochin (now called Kochi), in south India.
They remain a small but important presence in Kochi, a trading hub on the Kerala coast since ancient times. Also present are the Bene Israel, believed to have arrived some 2,100 years ago; they settled in and around Mumbai and in present-day Pakistan. More recent arrivals were the Baghdadi Jews, so called because they are chiefly descended from Iraqi Jews who migrated to India during the British Raj, between 150 and 250 years ago. India's most prominent Jewish community—considered one of the oldest in the world east of Iran—remains the one in Kochi. Although very few members of the community remain, most having long since emigrated to Israel, the Kochin Jews were and are an important part of the Kerala coast's spice trade, with huge warehouses containing mountains of turmeric, chillies, and pepper located directly below their family living quarters. India's largest Jewish community, however, is the Bene Israel in Mumbai. Although their arrival in India is something of a mystery (some claim to have arrived in India in the 2nd century BCE), members of this community adopted the occupation of oil pressing and became known as "shanwar telis" or "Sabbath-observing oilmen" because they didn't work on the Sabbath. To outsiders they were physically and linguistically indistinguishable from the local population, but they had their own traditions, observed the Sabbath, circumcised their sons, and performed other rituals associated with Judaism. Without exception, all Jewish communities have been accepted and assimilated into Indian society. In fact, Indians tend to take pride in the fact that Jews in India have rarely had to deal with anti-Semitism from either Hindus or Muslims. When anti-Semitism did rear its head, it was perpetrated by Dutch colonialists. The recent attack (November 2008) on the Mumbai Chabad House Jewish Centre is believed to have been perpetrated by Islamic extremists from outside India. India is also the only place in the world where Jews are comfortable with using swastikas in their signs—because the swastika is an ancient Hindu symbol and has none of the negative connotations it carries in the West.

The eighth incarnation of the Hindu god Vishnu, the preserver of the universe, Krishna is one of the most important and widely worshipped gods in India. In addition to being venerated as an avatar (human manifestation) of Vishnu, some traditions within Hinduism also acknowledge Krishna as the supreme being. Among the common representations of him are as a young man playing the flute and as a baby stealing butter. These iconic images derive from the stories of his early life included in the ancient religious text the Bhagavata Purana. Krishna grew up among the cowherds and milkmaids (gopis) of a village in the kingdom of Mathura and became the darling of the gopis as a young man, seducing them with his flute playing and dancing with them in the moonlit woods. From these events, Krishna is commonly depicted as a handsome, dark- or blue-skinned youth standing with one leg bent in front of the other, holding a flute to his lips. The lower half of his body is covered in a dhoti, often yellow in color, and he is adorned with jewels and a peacock feather. By Krishna's side is his favorite gopi, Radha, who is typically shown only in conjunction with him. In the Middle Ages the love between Krishna and the cowgirl Radha inspired a rich devotional literature still treasured by people of all communities in all walks of life.
In a tale which glorifies the ideal of love between the sexes, Radha for many symbolizes the individual's surrender to the love of God.

The monkey god, Hanuman, is the son of the Vedic wind god, Vayu, and the supreme embodiment of fealty. He has the head of a monkey and the body of a human, along with the power to fly and change size and shape. Representations of Hanuman often show him flying through the air while supporting a mountain in his left hand, a reference to one of his daring feats in the Hindu epic the Ramayana. After locating and rescuing Sita, the wife of the Hindu deity Rama, from Ravana, the demon king who had abducted her, Hanuman and his army fight alongside Rama and his brother, Lakshman, in the great battle against Ravana in Lanka. When Lakshman is wounded in the battle, Hanuman is tasked with finding the herb that will save him. Hanuman flies to the Himalayas, but when he can't identify the correct herb, he returns to Lanka with an entire mountain and helps to save Lakshman's life. For his numerous services and loyalty to Rama, Hanuman has come to be revered as a symbol of strength and devotion.

The Hindu goddess of knowledge, music, and the arts, Saraswati first appears in the Rig Veda as the celestial river of the same name, but over time, she has come to be inextricably linked to learning and the creative arts, most notably music. Although regarded as the consort of Brahma, the god of creation, Saraswati is worshipped independently of him as the deity who can bestow wisdom and drive out ignorance. Even today, students pray to her to achieve success in their studies and exams. Saraswati is traditionally shown as a fair young woman dressed in a white sari, seated on a lotus, with four arms. In her front two arms, Saraswati holds or plays a veena, a multi-stringed Indian musical instrument, while in her back two hands, she may carry other objects, such as prayer beads, a manuscript, a vessel of water, or a lotus blossom, which are symbols of meditation, knowledge, purification, and purity respectively. Her mount, a swan, is often situated near her feet.

The Hindu goddess of beauty, wealth, and prosperity, Lakshmi is the consort of Vishnu, the preserver of the universe. She is often represented as a beautiful young woman with four arms sitting or standing on a lotus bud. Her four arms symbolize the four goals of human life: artha (worldly wealth and success), kama (pleasure and desire), dharma (righteousness), and moksha (knowledge and liberation from the cycle of birth and death). She is usually shown clasping a lotus flower, a symbol of purity and fertility, in her two back hands, while gold coins, signs of wealth, tumble from one or both of her front hands. Lakshmi's association with prosperity is also emphasized by her dress, an elaborate sari often red with gold embroidery, and the fine jewelry that adorns her. Since Hindus believe Lakshmi can bestow good fortune and well-being on the family, she is a common household deity and the focus of worship during the festival of Diwali.

The elephant-headed Hindu god, also known as Ganapati, Ganesha is considered to be the lord of beginnings and the remover of obstacles. He is one of the most beloved deities of the Hindu pantheon, and individuals pray to him before embarking on a new endeavor or journey to ensure its success. The son of the god Shiva and his wife, the goddess Parvati, Ganesha is represented with the head of an elephant (a symbol of strength and wisdom) over a plump, potbellied human body with four arms.
Typically, he is shown holding a goad or ax and a noose in his two back hands. The goad, in his back-right hand, helps to push humans toward the righteous path and can also strike and remove obstacles, while the noose in his back-left hand harnesses impediments. Ganesha is said to have a sweet tooth and is often shown holding a tray of laddu (Indian sweets) in one of his front hands or with a bowl of it near his feet, where his mount, a rat, is found. His front right hand may be presented in the abhaya pose, with the palm facing out and the fingers pointing up, a gesture that confers protection on the devotee.

Among the popular depictions of the Hindu god Krishna is one of him as a baby purloining butter from a pot. His name means the 'dark one,' and some scholars think he was a pre-Aryan aboriginal deity worshipped by the people of the ancient city of Mathura, south of Delhi. His birth and childhood exploits in a village of cowherds are described in the ancient Sanskrit text the Bhagavata Purana. It was prophesied that Krishna, the eighth child of Devaki and Vasudeva and the eighth avatar (human manifestation) of the god Vishnu, would grow up to kill King Kansa, the tyrannical ruler of the kingdom where he was born. His father, Vasudeva, was able to save Krishna by switching him with a baby born to one of the cowherds in a local village, Vrindavan. It was here that he gained a reputation as a lovable mischief-maker by stealing butter, his favorite food, from the gopis (milkmaids) and playing pranks on them. These playful episodes from his life have made Krishna an endearing figure, who is fondly referred to as the "butter thief" and has been immortalized in representations of him as a smiling young child with his hands in a pot of butter.

Why is a secular constitution important to the stability of India? Do you think that Indian stability would have been threatened with a religion-based government? Hinduism includes thousands of different gods, each playing a different role. What does this tell us about the nature of Hinduism? Which Hindu god fascinates you the most? Why? Why are the stories of the gods so important? How do these stories compare to stories from the New and Old Testaments?
Designing a reporting tool for space mission simulations for use in elementary schools

This paper presents the design recommendation of a dashboard to accompany a weeklong computer mission simulation for use in fifth-grade classrooms. The mission is a hands-on learning experience developed by the Challenger Center, a non-profit organization committed to exciting students about science, technology, engineering, and math through innovative learning tools. The Challenger Center has a valuable opportunity to gather data on student performance and experience throughout the mission. The dashboard is a user interface that will share this data with educators, allowing them to guide classroom instruction both during and after a class simulation. To design the dashboard, the team used an iterative process that continuously engaged key partners, including the Challenger Center, educators, and the software developers. This iterative and inclusive approach was particularly important because the new mission was developed concurrently with the dashboard, giving the team a unique opportunity to influence the development of the mission as well. The key findings include specifications on what features and information need to be included in the dashboard design, as well as prototypes and guidance on how it should be displayed. The specifications identify the need for two types of educator feedback: live feedback during the mission to ensure the class's successful execution of the mission, and more detailed summary feedback following the mission that focuses on performance against learning standards and planning for future classroom instruction.
(Reuters) - Warren Buffett takes pride in naming his price to buy a company, and not paying a nickel more. But the largest U.S. natural gas distribution utility, an unyielding hedge fund, and a Delaware bankruptcy judge now present one of the biggest challenges to the billionaire’s legendary discipline. FILE PHOTO: Berkshire Hathaway CEO Warren Buffett visits the BNSF booth before the Berkshire Hathaway annual meeting in Omaha, Nebraska, U.S. May 6, 2017. REUTERS/Rick Wilking/File Photo The board of bankrupt Texas utility Energy Future Holdings will meet later on Sunday to decide whether to sell its crown jewel, power transmission company Oncor, to Buffett’s Berkshire Hathaway Inc or accept an opposing bid from Sempra Energy, a person familiar with the confidential deliberations said on condition of anonymity. The rival bid for Oncor was disclosed on Friday by Energy Future’s biggest creditor, billionaire Paul Singer’s hedge fund Elliott Management Corp. The identity of the bidder was not publicly announced, but Bloomberg News first reported on Saturday that Sempra was the mystery bidder, citing anonymous sources. Berkshire Hathaway Energy, Buffett’s energy unit, has offered $9 billion in cash for Oncor, while the rival bid is for $9.3 billion, a lawyer for Elliott said on Friday. The gap is pocket change for Berkshire, but Buffett pledged last Wednesday not to raise his offer. “Paying extra is not the way he does business,” said Jim Shanahan, a senior analyst at Edward Jones & Co with a “buy” rating on Berkshire. “He is willing to be patient and wait for opportunities. That’s what analysts expect, and that’s what investors expect.” Berkshire did not respond to requests for comment, while Sempra and Elliott declined to comment. Berkshire said on Friday that its bid had won support from key stakeholders, including the staff of the Public Utility Commission of Texas, the regulator that has to approve the sale of Oncor. The commission’s executive director, Brian Lloyd, has also praised Berkshire’s bid. Berkshire has told the regulator it will accept “ringfencing” on its acquisition of Oncor, restricting its ability to extract cash from the company or add more debt to it. It is unclear whether Sempra could offer the same assurances. “Berkshire Hathaway Energy has offered a positive, simple, straightforward deal that benefits Oncor and its customers,” Oncor CEO Bob Shapard said in a statement on Saturday. Even if regulatory concerns trump price considerations, and Berkshire’s bid for Oncor prevails on Sunday over that of Sempra, a San Diego-based utility, the sale has to be approved on Monday by U.S. Bankruptcy Judge Christopher Sontchi in Wilmington, Delaware. Elliott has said it opposes the sale to Berkshire because it believes it undervalues Oncor, and has argued it owns enough of Energy Future’s debt to veto the deal. Elliott has also been trying to put together its own bid for $9.3 billion to buy Oncor. TAKEOVER LULL Buffett, 86, is trying to end the two-year lull since announcing his last major acquisition, a $32.1 billion takeover of aircraft parts maker Precision Castparts Corp. Many analysts at the time said that price looked relatively costly by Berkshire’s standards. Complicating Buffett’s hunt for bargains are soaring stock market valuations and competition from private equity firms with a lot of funds to spend, as well as from companies with anemic earnings growth that are turning to acquisitions for a recovery in their fortunes. 
Buffett’s Omaha, Nebraska-based conglomerate, whose more than 90 businesses include auto insurer Geico and railroad BNSF, ended June with close to $100 billion of cash and equivalents. That is up $27 billion in the last year, and five times Buffett's $20 billion stated target. (tmsnrt.rs/2fTsMnt) Some analysts say idle cash is also weighing on Berkshire’s operating profit, which has dropped for three straight quarters. “Buffett likes to achieve 11 percent returns,” and low-yielding cash may deprive Berkshire of billions of dollars annually, said Robert Miles, an author of books about Buffett. But George Morgan, a finance professor at the University of Nebraska Omaha, said that “as far as cash being a drag on performance, Warren would say no. Just because you’re in a store doesn’t mean you need to buy something.” Buffett has in 2017 used some Berkshire cash to become one of Apple Inc’s largest shareholders, and shore up Canadian lender Home Capital Group Inc’s finances. But he never faced the need to revisit his original deal once he clinched it. ONE-PRICE GUY In its annual reports, Berkshire tells shareholders “we don’t participate in auctions,” using italics for emphasis. “I’m a ‘one-price’ guy,” Buffett wrote in a February 2008 shareholder letter. At Berkshire’s annual meeting in May, Buffett said he would gladly do a “very, very big deal,” including with Brazilian investment firm 3G Capital, which together with Berkshire controls the food company Kraft Heinz Co. Earlier this year, European food and consumer goods conglomerate Unilever Plc snubbed a takeover approach by Kraft Heinz. Buffett, true to form, backed down. If Berkshire lost Oncor, it could receive a consolation prize, a $270 million breakup fee. Berkshire accepted a $175 million fee when Constellation Energy Group Inc terminated its $4.7 billion takeover in 2008, and took an investment from Electricite de France SA. “Walking away from Oncor would reinforce Buffett’s reputation that once he determines a fair price, that’s the price, or there’s no deal,” Shanahan said. Deviations are rare. When Berkshire in 2000 bought three-fourths of MidAmerican Energy, as Berkshire Hathaway Energy was then known, Buffett upped his original $35 per share offer to $35.05. MidAmerican’s bankers “caught me in a moment of weakness,” Buffett wrote in the 2008 letter. “I explained, they could tell their client they had wrung the last nickel out of me.”
//
//  IVVStartUpManagerImplementation.h
//  AwesomeCurrencyConverter
//
//  Created by <NAME> on 04/03/2017.
//  Copyright © 2017 Ignatov inc. All rights reserved.
//

#import <Foundation/Foundation.h>
#import "IVVStartUpManager.h"

@class IVVWelocmeAssembly, IVVStoryboardAssembly;

@interface IVVStartUpManagerImplementation : NSObject <IVVStartUpManager>

@property (nonatomic, strong) IVVStoryboardAssembly *storyboardAssembly;

@end
/** Calculates recursively the storage requirements for a sub-branch of the
P-tree.
@param storage the required storage so far.
@param linkRef the reference to the current location in the P-tree branch.
@return total required storage in bytes for the sub-branch of the P-tree. */

private int calculateStorage(int storage, PtreeNode linkRef) {
    if (linkRef != null) {
        // 12 bytes of fixed overhead per node plus 2 bytes for each
        // element of the node's itemSet.
        storage = storage + 12 + (linkRef.itemSet.length * 2);
        storage = calculateStorage(storage, linkRef.childRef);
        storage = calculateStorage(storage, linkRef.siblingRef);
    }
    return (storage);
}
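/*
 * Minimal driver sketch (hypothetical: only the node fields itemSet,
 * childRef and siblingRef appear above, so the "rootRef" name is assumed):
 *
 *     int totalBytes = calculateStorage(0, rootRef);
 *     System.out.println("P-tree storage: " + totalBytes + " bytes");
 *
 * Per the formula above, each node contributes 12 bytes of overhead plus
 * 2 bytes per itemSet element, so a tree of 1,000 nodes with 3 items per
 * node needs 1,000 * (12 + 6) = 18,000 bytes.
 */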
He said at the time that he hoped that the next administration would "really break its spear on the question of can we get a sensible climate policy with the Congress and the public behind it" in time to go to the final round of negotiations on a new international treaty late next year "and finally have a voice that is respected by other countries."

Harvard issued a news release Friday saying that Obama is expected to announce Holdren's appointment Saturday. The science adviser heads the Office of Science and Technology Policy in the Executive Office of the President.

Holdren has gone to China half a dozen times a year since 1984 to meet with Chinese climate and energy officials. Tsinghua University, one of China's most prestigious schools, named him this year as a three-year nonresident guest professor.

Holdren has said he thinks that if the United States leads with emission reduction requirements, China and the rest of the world will follow, because those countries are already suffering from water and agricultural problems.

Holdren has also said that scientists need to get better at explaining what's happening with more urgency. The term "global warming" could be part of the problem, he argued, because it implies something uniform, gradual and benign, and it's none of those.

"It is rapid compared to the capacity of ecosystems to adjust and, alas, rapid compared to our capacity as a society to adjust," he said. "We should be calling it global climatic disruption."

Holdren is a professor of environmental policy at Harvard's John F. Kennedy School of Government and the director of the science, technology and public policy program at the school's Belfer Center for Science and International Affairs. He also is the director of the Woods Hole Research Center. He earned Master of Science and Ph.D. degrees in aerospace engineering and plasma physics from the Massachusetts Institute of Technology and Stanford University. He's a specialist in nuclear arms control and nonproliferation as well as global climate change and energy policy.

"John is the very model of a policy-relevant scientist," Belfer Center Director Graham Allison said in a statement Friday. "He has a deep understanding of the dynamics of science and technology as drivers of the challenges society faces — from climate disruption to nuclear danger — and new opportunities for feasible solutions."
<filename>places-search/src/main/java/com/jdoneill/placessearch/MainActivity.java package com.jdoneill.placessearch; import android.content.Intent; import android.graphics.Color; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.support.v7.widget.Toolbar; import android.view.Menu; import android.view.MenuInflater; import android.view.MenuItem; import com.esri.arcgisruntime.geometry.Point; import com.esri.arcgisruntime.geometry.SpatialReferences; import com.esri.arcgisruntime.mapping.ArcGISMap; import com.esri.arcgisruntime.mapping.Basemap; import com.esri.arcgisruntime.mapping.view.Graphic; import com.esri.arcgisruntime.mapping.view.GraphicsOverlay; import com.esri.arcgisruntime.mapping.view.MapView; import com.esri.arcgisruntime.symbology.SimpleMarkerSymbol; public class MainActivity extends AppCompatActivity { public static final String EXTRA_LATLNG = "com.jdoneill.placesearch.LATLNG"; private MapView mMapView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Toolbar toolbar = findViewById(R.id.toolbar); setSupportActionBar(toolbar); double lat = Double.NaN; double lon = Double.NaN; Bundle extras = getIntent().getExtras(); if(extras != null){ lat = extras.getDouble(PlaceSearchActivity.EXTRA_PLACE_LATITUDE, Double.NaN); lon = extras.getDouble(PlaceSearchActivity.EXTRA_PLACE_LONGITUDE, Double.NaN); } // create MapView from layout mMapView = findViewById(R.id.mapView); // create a map with the BasemapType topographic ArcGISMap map = new ArcGISMap(Basemap.Type.TOPOGRAPHIC, 47.498277, -121.783975, 12); // set the map to be displayed in this view mMapView.setMap(map); // graphics overlay for place search location marker GraphicsOverlay graphicsOverlay = addGraphicsOverlay(mMapView); // check if activity is returned from place search if(!Double.isNaN(lat) && !Double.isNaN(lon)){ graphicsOverlay.getGraphics().clear(); // create a point from returned place search Point point = new Point(lon, lat, SpatialReferences.getWgs84()); // create a marker at search location SimpleMarkerSymbol sms = new SimpleMarkerSymbol(SimpleMarkerSymbol.Style.CROSS, Color.BLACK, 15.0f); Graphic graphic = new Graphic(point, sms); graphicsOverlay.getGraphics().add(graphic); // zoom in to point location mMapView.setViewpointCenterAsync(point, 50000.0); } } @Override public boolean onCreateOptionsMenu(Menu menu){ MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.map_dashboard, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { switch ( item.getItemId() ) { case R.id.menu_search: { openPlaceSearchActivity(); return true; } } return super.onOptionsItemSelected(item); } @Override protected void onPause() { super.onPause(); mMapView.pause(); } @Override protected void onResume() { super.onResume(); mMapView.resume(); } @Override protected void onDestroy() { super.onDestroy(); mMapView.dispose(); } /** * Create a Graphics Overlay * * @param mapView MapView to add the graphics overlay to */ private GraphicsOverlay addGraphicsOverlay(MapView mapView){ //create graphics overlay GraphicsOverlay graphicsOverlay = new GraphicsOverlay(); // add overlay to mapview mapView.getGraphicsOverlays().add(graphicsOverlay); return graphicsOverlay; } /** * Notification on selected place */ private void openPlaceSearchActivity() { Intent intent = new Intent(this, PlaceSearchActivity.class); intent.putExtra(EXTRA_LATLNG, "47.498277,-121.783975"); startActivity(intent); 
} }
/*
** EPITECH PROJECT, 2019
** cpp_d14m_2019
** File description:
** Fruit.hpp
*/

#ifndef FRUIT_HPP_
#define FRUIT_HPP_

#include <string>
#include <iostream>

class Fruit {
public:
    Fruit() : _vitamins(0) {}
    Fruit(const Fruit &other);
    Fruit(int vitamins, const std::string &name);
    virtual ~Fruit() {}

    Fruit &operator=(const Fruit &);

    virtual std::string getName() const = 0;
    virtual int getVitamins() const = 0;

protected:
    int _vitamins;
    std::string _name;
};

#endif /* FRUIT_HPP_ */
<reponame>kurone-kito/jsonresume-to-rirekisho-docx<filename>webpack.config.ts import ESLintPlugin from 'eslint-webpack-plugin'; import path from 'path'; import type webpack from 'webpack'; import { dependencies, name } from './package.json'; import { compilerOptions } from './tsconfig.json'; // eslint-disable-next-line @typescript-eslint/no-var-requires const DtsBundleWebpack = require('dts-bundle-webpack'); const createAliases = () => { const baseUrl = path.resolve(__dirname, compilerOptions.baseUrl || '.'); return Object.fromEntries( Object.entries(compilerOptions.paths || {}).map(([key, [value]]) => [ key.replace('*', ''), path.resolve(baseUrl, value.replace('*', '')), ]) ); }; export default <webpack.Configuration>{ cache: true, devtool: 'source-map', externals: Object.keys(dependencies || {}), mode: 'production', module: { rules: [{ test: /\.ts$/, use: 'ts-loader' }] }, output: { filename: 'index.js', path: `${__dirname}/dist`, library: name, libraryTarget: 'umd', }, plugins: [ new DtsBundleWebpack({ indent: ' ', main: `${__dirname}/dist/src/index.d.ts`, name, out: `${__dirname}/dist/index.d.ts`, }), new ESLintPlugin({}), ], resolve: { alias: createAliases(), extensions: ['.js', '.json', '.ts'] }, target: 'node', };
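// Sketch of what createAliases() produces (the "paths" entry below is
// illustrative, not taken from this project's tsconfig.json). Given
// compilerOptions like:
//
//   "baseUrl": ".", "paths": { "@lib/*": ["src/lib/*"] }
//
// the function strips the "*" wildcards and resolves against baseUrl,
// yielding roughly:
//
//   { "@lib/": path.resolve(__dirname, "./src/lib/") }
//
// which webpack's resolve.alias then applies as a prefix mapping.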
// run starts the listening goroutine.
func (x *SACN) run() {
	log.Info().Msg("Started E1.31 goroutine")
	defer log.Info().Msg("Exit goroutine")

	b := make([]byte, maxDatagramSize)
	for {
		n, addr, err := x.socket.ReadFrom(b)
		if err != nil {
			log.Error().Err(err).Msg("Error")
			break
		}
		// In an E1.31 datagram the DMX start code sits at offset 0x7d and
		// the channel data begins at offset 0x7e; drop packets too short
		// to carry any slot data.
		if n < 0x7e {
			continue
		}
		// A non-zero start code means the payload is not standard DMX
		// dimmer data, so skip it.
		if b[0x7d] > 0 {
			continue
		}
		x.OnFrame(addr, b[0x7e:n])
	}
	x.socket.Close()
}
#include <bits/stdc++.h>
using namespace std;
typedef long long int LL;

// Tree problem: vertex u carries a letter s[u] and a bonus c[u]. For every
// vertex we build a trie of all distinct strings that start at u and run
// down into its subtree, by merging the children's tries and then hanging
// the result under a fresh node for s[u]. sz[r] is the number of distinct
// non-empty strings stored in trie r. The answer is the maximum of
// sz[rt[u]] + c[u] over all vertices, and ansc counts how many vertices
// attain it.
#define N 3123456
#define M 312345

int nxt[N][26]; // trie edges, -1 when absent
int sz[N];      // distinct strings below a trie node
int rt[M];      // trie root for each tree vertex
int ctr;        // number of allocated trie nodes
int ans = 0;
int c[M];
string s;
vector<int> adj[M];
bool vis[M];
int ansc = 0;

inline int getsz(int i){
    return (i >= 0 ? sz[i] : -1);
}

// Merge trie r2 into trie r1 and return the root of the result.
int merge(int r1, int r2){
    if(r1 == -1){
        return r2;
    }
    if(r2 == -1){
        return r1;
    }
    sz[r1] = 0;
    for(int i = 0; i < 26; ++i){
        nxt[r1][i] = merge(nxt[r1][i], nxt[r2][i]);
        sz[r1] += getsz(nxt[r1][i]) + 1;
    }
    return r1;
}

void dfs(int u){
    vis[u] = true;
    for(int i = 0; i < adj[u].size(); ++i){
        int v = adj[u][i];
        if(vis[v] == false){
            dfs(v);
            merge(rt[u], rt[v]);
        }
    }
    // Allocate a fresh node for u's letter and hang the merged children
    // trie below it; u's trie then holds s[u] plus every extension of it.
    ctr++;
    nxt[ctr][s[u] - 'a'] = rt[u];
    sz[ctr] = sz[rt[u]] + 1;
    rt[u] = ctr;
    if(ans < sz[rt[u]] + c[u]){
        ans = sz[rt[u]] + c[u];
        ansc = 1;
    }
    else if(ans == sz[rt[u]] + c[u]){
        ansc++;
    }
}

int main()
{
    int n, i, u, v;
    memset(nxt, -1, sizeof nxt);
    cin>>n;
    for(i = 0; i < n; ++i){
        rt[i] = i;
        cin>>c[i];
    }
    cin>>s;
    ctr = n; // nodes 0..n-1 are pre-assigned as the initial per-vertex roots
    for(i = 1; i < n; ++i){
        cin>>u>>v;
        u--;
        v--;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    ans = 0;
    dfs(0);
    cout<<ans<<endl;
    cout<<ansc<<endl;
    return 0;
}
<filename>tests/profiling/ostringstream.cpp #include <iostream> #include <sstream> #include "conditional_ostream.h" #include "profile.h" int main() { std::ostringstream stream; time_streaming(stream); return 0; }
<filename>src/main/java/org/nalecz/vksaver/entities/friends/Main.java<gh_stars>0 package org.nalecz.vksaver.entities.friends; import com.google.common.base.Functions; import com.google.common.collect.Lists; import com.vk.api.sdk.client.Lang; import com.vk.api.sdk.client.VkApiClient; import com.vk.api.sdk.client.actors.UserActor; import com.vk.api.sdk.exceptions.ApiException; import com.vk.api.sdk.exceptions.ClientException; import com.vk.api.sdk.objects.friends.responses.GetResponse; import com.vk.api.sdk.objects.users.Fields; import com.vk.api.sdk.objects.users.UserXtrCounters; import org.dozer.DozerBeanMapper; import org.nalecz.vksaver.entities.AbstractEntity; import java.util.ArrayList; import java.util.List; import java.util.stream.Collectors; public class Main extends AbstractEntity { public Main(DozerBeanMapper mapper, VkApiClient apiClient, UserActor actor, Integer userId) { super(mapper, apiClient, actor, userId); } private List<Fields> getFields() { return new ArrayList<>() {{ add(Fields.BDATE); add(Fields.CITY); add(Fields.COUNTRY); add(Fields.EXPORTS); add(Fields.PHOTO_MAX_ORIG); add(Fields.SCREEN_NAME); add(Fields.NICKNAME); add(Fields.SITE); }}; } public String proceed() throws ClientException, ApiException { GetResponse friendsResponse = apiClient.friends() .get(actor) .userId(userId) .execute(); List<Fields> friendFields = getFields(); List<String> friendsIds = friendsResponse .getItems() .stream() .map(Functions.toStringFunction()::apply) .collect(Collectors.toList()); List<UserXtrCounters> getUsersResponse = apiClient.users() .get(actor) .userIds(friendsIds) .fields(friendFields) .lang(Lang.RU) .execute(); List<Friend> friends = new ArrayList<>(); for (UserXtrCounters user : getUsersResponse) { friends.add(mapper.map(user, Friend.class)); } return toJSON(friends); } }
Islamic State has acknowledged the presence of Indians in the group for the very first time, releasing a 22-minute-long propaganda video promising to avenge the killing of Muslims in "Kashmir, Godhra and Mumbai". Released early on Friday, the video features individual jihadists who have joined the cause, and features propaganda meant to convince their "brothers" in India and South Asia to stop mingling and trading and living among Hindus. In the video, stretches of which are in Arabic, one English-speaking jihadist says Muslims in India have three options: "To accept Islam, to pay Jizya, or prepare to be slaughtered". The video also talks about horrific acts committed against Muslims, and asks them to stop following the ways of the West, to leave their professions as doctors or engineers, and to join and support the cause of the Caliphate. The video also features several still-to-be-identified members, suspected to have once been part of the Indian Mujahideen, whose members are known to have been serving with IS forces after breaking from their Pakistan-based leadership. As reported by The Indian Express, the video features several people who are yet to be identified but suspected to be members of the Indian Mujahideen (IM). Among these are militants who had left India earlier and were suspected to have joined the extremist militant organisation. According to a report in DNA, this is the first time the IS has confirmed the presence of Indian fighters among its ranks and addressed the Muslim population directly for "mingling with polytheistic Hindus, who are trying to convert them to Hinduism". The report adds that according to Indian intelligence agencies, 23 Indian nationals are enrolled with an organisation affiliated to al-Qaeda in Syria and IS. The Indian Express report goes on to say the only one conclusively identified in the video is Fahad Tanvir Sheikh, an engineering student from Thane who travelled to Syria in 2014. He vows to avenge acts committed against Muslims in India — and asks if New Delhi has forgotten incidents like the Mumbai train bombings, Godhra riots, etc. It also asked Muslims in the country to stop mingling with the infidels (Hindus) and to stop believing that Islam is a religion of peace, saying that it has always been a religion of war.
class Instituto:
    """Institute data structure."""

    def __init__(self, nombre, codigo):
        """Constructor: initializes instance attributes."""
        self.nombre = nombre
        self.codigo = codigo

    def __str__(self):
        """String representation."""
        return "{}: {}".format(self.nombre, self.codigo)
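# A short usage sketch (the instance values are illustrative; __str__ joins
# the two fields with ": "):
#
#     fisica = Instituto("Instituto de Física", "IF-01")
#     print(fisica)  # -> Instituto de Física: IF-01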
#ifndef __PROPERTY_EDITOR_STATE_HELPER__H__ #define __PROPERTY_EDITOR_STATE_HELPER__H__ #include "Main/QTreeViewStateHelper.h" #include "Tools/QtPropertyEditor/QtPropertyModel.h" #include "Tools/QtPropertyEditor/QtPropertyData.h" class PropertyEditorStateHelper : public DAVA::QTreeViewStateHelper<QString> { public: virtual void SaveTreeViewState(bool needCleanupStorage); PropertyEditorStateHelper(QTreeView* treeView, QtPropertyModel* model); protected: virtual QString GetPersistentDataForModelIndex(const QModelIndex& modelIndex); private: QtPropertyModel* model; }; #endif /* defined(__PROPERTY_EDITOR_STATE_HELPER__H__) */
from os import system
from time import sleep

# Simple terminal animation: move the ball through four positions, redrawing
# the screen between frames, and stop after 100 frames.
frames = 0
while frames < 100:
    for pos in range(4):
        print()
        print()
        print()
        if pos == 0:
            print(f'{"●":>7}')
        elif pos == 1:
            print()
            print(f'{"●":>9}')
        elif pos == 3:
            print()
            print(f'{"●":>5}')
        elif pos == 2:
            print()
            print()
            print(f'{"●":>7}')
        frames += 1
        sleep(0.2)
        system('cls')  # 'cls' is Windows-only; use 'clear' on Unix-like systems

# 1 2 4 6 8 16 32 64 128 256 512 1024 2048 4096 8192
# 16 32 64 128 256 512 1024 2048 4096 8192 16384
# 1 2 3 4 5 6 7 8 9 A B C D E F
# 22 = 16
# 642 = 282
# ● ● ● ●
# ● ●
# ● ●
# ● ●
# ● ●
# ● ●
# ● ● ● ●
Energy-Efficiency with Domestic Water Heating in Commercial Buildings

This paper presents measured energy savings data from implementation of one simple control strategy: turning off hot water re-circulating pumps during unoccupied hours in commonly used central storage type DHW systems in a commercial office building. Though turning off water heaters and/or recirculation pumps during unoccupied hours sounds simple and straightforward, it is not a common practice. Costs for retrofit controls have to be weighed against potential energy cost savings for economic decisions. The estimation of energy savings with any good degree of confidence can be time-consuming and difficult. This paper presents field-monitored data of such savings from a large 267,000 sq. ft. commercial office building with no-cost/low-cost measures. Water heating energy savings of 27% are demonstrated in gas-fired water heating systems, and another potential 23% is identified if the heating burners could be cost-effectively turned off during unoccupied periods. An excellent payback period of about 3 months was recorded from the simple recirculation pump cycling control alone.
/// f(x) = x^2 - sin(x) - 0.5
#include <iostream>
#include <cmath>
using namespace std;

double fx(double x);

int main()
{
    double x0, x1, xnew, soln, tol;
    int itcount = 0;

    x0 = 0;
    x1 = 2;
    tol = .001;
    xnew = x1;
    soln = fx(xnew);

    cout << "checking end points for opposite sign" << endl;
    cout << fx(x0) << " and " << fx(x1) << endl;

    while (fabs(soln) > tol) {
        xnew = (x0 + x1) / 2.0;
        soln = fx(xnew);
        // Keep the half-interval whose endpoints still bracket the root:
        // if f(xnew) has the same sign as f(x0), the root lies in [xnew, x1].
        if (soln * fx(x0) > 0) {
            x0 = xnew;
        } else {
            x1 = xnew;
        }
        itcount++;
        cout << x0 << " " << xnew << " " << x1 << endl;
    }
    cout << "Solution = " << xnew << " has fx= " << soln << " in " << itcount << " iterations" << endl;
    return 0;
}

double fx(double x)
{
    return pow(x, 2) - sin(x) - .5;
}
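// For context, the standard bisection bound (note the loop above stops on
// the residual |f(xnew)| < tol rather than on the interval width, so the
// iteration counts differ slightly):
//
//     |x_n - r| <= (b - a) / 2^n,  so  n >= log2((b - a) / eps)
//                                        = log2(2 / 0.001) ~ 11
//
// i.e. roughly eleven halvings pin the root in [0, 2] to within 1e-3.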
Political parties and the policy process

This chapter examines the role of political parties in the policy process. The chapter employs a model of the policy process stages to examine how Irish political parties operate in each stage. This constitutes an exploration of the extent to which so-called new politics might have impacted on recent political party roles and performance. However, 'new politics' (governments without a clear majority seeking consensual support for their policies in the Dáil) is nothing new, with no single-party majority Government since 1977. Programmatic Government has been normalised and consensus-seeking has become the modus operandi for parties. What is new is that long-established parties are now joined by an increasing number of smaller parties in the Dáil, raising the potential to shift the balance of power away from the larger parties, with consequences for the style of, and capacity for, policy analysis. However, the chapter shows that this tendency has been less marked than might have been expected.
<filename>export_fpga_test_data.py<gh_stars>1-10 #%% # import os import numpy as np import sys sys.path.append("..") from soil_classifier.dataset import Landsat from sklearn.preprocessing import OneHotEncoder SEED = 0 np.random.seed(SEED) # cwd = os.getcwd() DATA_FOLDER = '../data/' DATASET_NAME = 'Landsat' #%% def main(): dataset = Landsat(data_folder=DATA_FOLDER) x_train, y_train, x_test, y_test = dataset.load(shuffle=True, seed=SEED) num_classes = dataset.num_classes num_bands = dataset.num_bands # dataset standarization x_mean = np.mean(x_train,axis=0) x_std = np.std(x_train,axis=0) x_mean = np.transpose(x_mean[:,np.newaxis]) x_std = np.transpose(x_std[:,np.newaxis]) # Replace zero sigma values with 1 x_std[x_std == 0] = 1 x_train_norm = np.divide( (x_train-x_mean), x_std) x_test_norm = np.divide( (x_test-x_mean), x_std) # labels to one hot encoding onehotencoder = OneHotEncoder(categories='auto') y_train = onehotencoder.fit_transform(y_train[:,np.newaxis]).toarray() y_test = onehotencoder.fit_transform(y_test[:,np.newaxis]).toarray() print('\nSaving data in text (dat) file ...') np.savetxt(DATA_FOLDER+DATASET_NAME+'_x_train.dat', x_train_norm, fmt='%.6f') np.savetxt(DATA_FOLDER+DATASET_NAME+'_y_train.dat', y_train, fmt='%.6f') np.savetxt(DATA_FOLDER+DATASET_NAME+'_x_test.dat', x_test_norm, fmt='%.6f') np.savetxt(DATA_FOLDER+DATASET_NAME+'_y_test.dat', y_test, fmt='%.6f') print('done!') if __name__ == "__main__": main()
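# Quick sanity check for the exported files (a follow-up sketch, not part of
# the original script; it assumes the np.savetxt calls above have run):
#
#     import numpy as np
#     x = np.loadtxt('../data/Landsat_x_test.dat')
#     y = np.loadtxt('../data/Landsat_y_test.dat')
#     print(x.shape, y.shape)                 # row counts must match
#     print(np.allclose(y.sum(axis=1), 1.0))  # one-hot label rows sum to 1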
FINANCIAL COMPENSATION FOR WILDLIFE DAMAGE: A REVIEW OF PROGRAMS IN NORTH AMERICA

Financial compensation is 1 of several management options proposed as alternatives to traditional wildlife damage management techniques. However, little is known about compensation programs currently in place. I surveyed United States and Canadian fish and wildlife programs to obtain information on the species causing damage, type of damage, extent of reimbursement, and budget for wildlife damage compensation programs. Of the 58 respondents, 36% have a compensation program, and 64% loan equipment and/or provide supplies for wildlife damage management. Programs compensating landowners for damage caused by deer (Odocoileus spp.), black bear (Ursus americanus), elk (Cervus elaphus), and moose (Alces alces) were the most common. Information was also provided on the 12 programs that have been canceled to help identify situations where reimbursement may not be appropriate. Page 90 in R.E. Masters and J.G. Huggins, eds. Twelfth Great Plains Wildl. Damage Control Workshop Proc., Published by Noble Foundation, Ardmore, Okla.
Effect of insulin on fatty acid uptake and esterification in L-cell fibroblasts.

We examined the effects of insulin on fatty acid uptake in L-cell fibroblasts, using cis-parinaric acid to measure uptake rates in the absence of esterification and oleic acid to measure uptake rates in the presence of esterification. L-cells exhibited both high and low affinity insulin binding sites with Kd of 23 nM and 220 nM and a cellular density of 1.4 and 6.8 x 10 sites/cell, respectively. Insulin in the range 10^-9 to 10^-7 M significantly decreased both the initial rate and maximal extent of cis-parinaric acid uptake by 24 to 30%. Insulin also reduced oleic acid uptake up to 35%, depending on insulin concentration, and decreased the amount of fatty acid esterified into the phospholipids and neutral lipids by 28 and 70%, respectively. In contrast, glucagon or epinephrine stimulated both the initial rate and extent of cis-parinaric acid uptake, by 18 and 25%, respectively. Because L-cells lack β-adrenergic receptors, the epinephrine effect was not the result of β-receptor stimulation. Hence, insulin altered not only fatty acid uptake, as determined by cis-parinaric and oleic acid uptake, but also the intracellular esterification of oleic acid.
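Note on reading the binding numbers together (standard single-site equilibrium binding theory, not a result of this study): fractional receptor occupancy at insulin concentration [I] is theta = [I] / (Kd + [I]), so at [I] = 10^-8 M the high-affinity sites (Kd = 23 nM) are roughly 30% occupied while the low-affinity sites (Kd = 220 nM) are only about 4% occupied.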
Nathaniel Givens’ latest article on John Dehlin just went live on Meridian Magazine’s website. While it’s unlikely responding to Givens will do much to alter the opinions of those already decided on this matter, I realized as I read his article that I wasn’t reading it the way most Meridian Magazine readers might read it. So, here are my thoughts as I read his article.

First, his opening line is a winner: “Excommunication is always evidence of deep spiritual tragedy.”

Givens asserts a truth that is both impossible to prove and ambiguous enough that he could weasel out of the claim if someone tries to pin him down on it. Whenever someone writes, “X is always Y,” I immediately want to come up with an exception. In this case, I think there are likely lots of exceptions: Many people who are excommunicated on the grounds of apostasy probably don’t consider themselves “spiritual tragedies”. Many of these people changed their views about Mormonism or believe Mormonism has changed. Thus, from their perspective, their excommunication wasn’t a spiritual tragedy. If there is a spiritual tragedy involved, it’s the Church that is experiencing one, not them.

Of course, Givens would likely weasel out of this interpretation of his statement and argue that the spiritual tragedy is the loss of that person’s spiritual blessings. In other words, from the perspective of the devout, faithful Mormon, everyone who is excommunicated is losing the chance for godhood, so the devout, faithful Mormon sees this as a tragedy, regardless of how the person who is excommunicated sees it. Which, therefore, makes this statement true, because, like the entire article, it’s all about maintaining a faithful perspective with absolutely no regard for what anyone else might think or believe.

Most importantly, I think an enterprising former Mormon needs to create a “spiritual tragedy” badge (both analog and digital) that people who have been excommunicated from the LDS Church can proudly display. Something like this would work:

Givens follows up his opening line with an excuse for why he is going to hatchet John Dehlin: “For this and many other sound reasons, formal charges of apostasy should never be treated lightly or tried in the court of public opinion. John Dehlin’s decision to make his own disciplinary council public has moved the issue onto a national stage, however. It is still not appropriate for us to speculate or advocate about the outcome of the disciplinary council, but it would be unfair for Dehlin to take the story national—with implications for the Church to which we all belong—and then expect every other Mormon to acquiesce to his version.”

Translation: This is supposed to be a private matter, but I need an excuse to attack John Dehlin. So, since he made it public, I have every right to go public, too. Forget that whole “Turn the other cheek” or “Do unto others” crap Jesus taught. John’s a public figure, so I’m going to attack him publicly and I don’t feel guilty doing so because I don’t actually have a moral conscience. I have a Mormon conscience, which is way, way better.

The next section says John is making money off of Mormon stories, as though that is somehow immoral (like, maybe, charging people to enter temples or get baptized). Givens then notes that John Dehlin has interviewed faithful Mormons and “hostile critics,” like this person:

Givens then labels John a “critic” of the Church.
I’ll give him credit for at least not calling John an anti-Mormon, which is probably the term Givens would prefer to use (and uses for some of the people John associates with), but hopefully we’re past claiming that anyone who disagrees with the LDS Church wants to kill Mormons. Calling John a critic is an effort to discredit John by poisoning the well. But, you know what, I don’t really care on this point. John is a critic of the LDS Church, as is anyone else who has ever said, “Hey, wait a minute. The LDS Church leaders just did what?” The second you question anything in the LDS Church, you’re now a critic, because that’s all it takes to be critical of an institution – questioning it. So, props to John for being publicly labeled a critic – he joins the ranks of many other well-respected critics of the LDS Church! Givens then moves to the real meat of his argument: John Dehlin has claimed that among the reasons why he is being excommunicated is his support for Ordain Women and same-sex marriage. In fact, it’s this part that really gets Givens, because the NYTimes picked this part up and ran with it, sending a breaking news text to millions of people that said, “Prominent Mormon Faces Excommunication for Backing Gay Marriage.” Why is it that Nathaniel Givens doesn’t want to admit that John Dehlin’s excommunication may be rooted in his support of same-sex marriage and Ordain Women? Oh, right, because if it is true, then it makes the LDS Church look bigoted, archaic, and hateful. Givens tries to argue that these issues aren’t core to the excommunication: “Among the concerns King felt were most important, gay rights and same sex marriage are not mentioned in any way, and female ordination is at most implied tangentially by point #3 (although that is far from certain).” Givens then points out that John Dehlin has been inconsistent in emphasizing how central his support for same-sex marriage and Ordain Women has been to his disciplinary council. Yeah, that’s kind of true. It’s not entirely clear how central they are. Is his support for same-sex marriage and Ordain Women 10% of the reason for the disciplinary council? 50%? 80%? 100%? The actual answer is: the only people who know what percentage of the disciplinary council is based on John Dehlin’s public support for same-sex marriage and Ordain Women are those who called or arranged the disciplinary council (i.e., his Stake President and, in all likelihood, some people at Church headquarters – no one believes they aren’t involved since this is too high-profile a case). John Dehlin doesn’t even know, because disciplinary councils are basically opaque. Disciplinary councils are basically like military tribunals or federal criminal court cases where the evidence against the accused is classified as confidential. The only people who get to see it are the prosecutors and the judge (or panel of judges). Everyone else basically just hears, “He engaged in espionage” or “He stole state secrets,” but we don’t ever get to find out exactly what happened, because the government doesn’t have to reveal it. And that is what happens in disciplinary courts: We have no idea (1) who started the process against John, (2) what the actual reasons for it are, and (3) who is actually running the show. Why? Because the Church doesn’t have to do that and its members are too sheepish to call them out on this. If the process were confidential to protect John’s interests, that would make sense.
I can understand this in the case of someone who had an affair or did something else that might tarnish their reputation.  But, in John’s case, what he did isn’t tarnishing his reputation; it’s tarnishing the Church’s reputation.  That’s also why the Church has a caveat in the Church Handbook of Instructions that says no one is supposed to record disciplinary councils – not to protect the accused, but to protect the Church.  They don’t want a record of what the actual reasons are for the disciplinary council, because that would make them look bad.  Really bad.  Instead, they keep it all hush hush and then let their apologetic minions do all the necessary work to attack critics of the Church for them.  This way, the Church’s hands look clean, even though, when you make them take off their gloves, they are nasty. So, we don’t know whether or not John’s support for same-sex marriage or Ordain Women is 10% or 90% of the reason for the disciplinary council.  But Givens then turns to John’s stated beliefs and practices, noting that John has admitted he isn’t sure he accepts everything.  Givens then claims that the key here is that John is a public figure.  So, John isn’t really allowed to state his doubts or raise concerns because he has a public following.  He even notes that this is why he thinks John has crossed the line, “It is one thing to disbelieve privately. It is another thing to disbelieve publicly, and with a very large following. And it is yet another to act openly in accord with this disbelief, and to evangelize others to share that rejection of Church teachings. It is in that last instance in particular that Church leaders may have considered that Dehlin crossed a crucial line.” In other words, Givens thinks it’s wrong to openly disbelieve in teachings of the Church, particularly if you are well-known and have lots of followers on social media. Wait! Hold on! Givens thinks it’s wrong to openly express disbelief!  Isn’t this the equivalent of saying, “If you have questions, keep them to yourself!”  Or, “It’s fine if you don’t believe everything so long as you never tell anyone.”  What is Givens really saying?  He’s saying that anyone who questions Mormon teachings is a threat to those who don’t.  Publicly expressing questions or “disbelief” threatens to pop Givens’s Mormon bubble.  Popping the Mormon bubble might just make other Mormons question.  And questioning is bad! Believing is good! Doubting is bad! Letting the prophet and apostles think for you is good! Thinking for yourself is bad! For Givens, then, it’s much more acceptable to kick someone out of the LDS Church because they have admitted they don’t believe everything and that may allow other people to think for themselves than it is to kick them out for supporting equality publicly.  Hmm…  From my perspective, any institution that would consider kicking someone out for doing either of those isn’t an institution worthy of respect.  It’s an institution that doesn’t allow freedom of speech, freedom of conscience, or freedom of association.  It’s an authoritarian dictatorship. Givens concludes that he is clearly right, “An objective observer would reasonably infer that King is concerned that Dehlin is using his position of prominence in order to undermine the Church and its mission and in so doing has placed his affiliation with the Church in jeopardy. That, at least, seems a plain and reasonable interpretation of the public record.” The irony, of course, is that Givens is anything but an “objective observer”.  
Givens has skin in this game – he’s a believer and an apologist, trying to defend his religion’s efforts to control what its members say and do publicly. Givens isn’t objective and is about as far away from being able to think objectively about this as is possible. Finally, Givens ends with this absurd statement: “There may be personal motives and considerations that further amplify or ameliorate the alleged offenses. They are—and should be—beyond the purview of a treatment like this one. But the details outlined above based on publicly available sources are sufficient to correct media reports that an individual is being sanctioned for following his conscience, or for holding particular personal beliefs.” To Givens, John Dehlin isn’t being sanctioned for what he believes or does. He’s being sanctioned for hurting the Church. Never mind that those may be (though arguably are not) the same thing. John is following his conscience, both in his support for Ordain Women and same-sex marriage, and in his disbelief. Givens just thinks that it’s dangerous to ask questions and publicly express disbelief, because it might pop someone’s protective bubble. So, he prioritizes protecting blind faith over thinking: Dehlin is effectively being sanctioned for making it easier for people to ask questions, but Givens frames it as John being a threat to the Church. John IS a threat, because people need to threaten authoritarian dictatorships. Don’t get me wrong. I think Nathaniel Givens genuinely believes that the Church is going after John because he is a threat. But Givens also can’t see the reality behind why John is a threat – because John might make people question. Because Givens likely believes that the Church can do no wrong (it can’t, or why would he do so much for it?), this is about what John Dehlin did that was wrong and the Church defending itself, not the Church victimizing John Dehlin for expressing his conscience. Givens has to defend his religion to defend himself. As a result, he can’t see what the Church is really doing. He can only see what he wants the Church to be doing. At the end of the day, Givens is putting the continued existence of an institution ahead of what may be best for its members. And that is sad…
Towards a "Knowledge-Based Marketplace" Model (KBM) for Cooperation between Agents E-Marketplaces are not usually studied as places for cooperative work between suppliers and buyers, involving knowledge and creation of new knowledge. The present work starts from a theoretical point of view, in order to build a model of cooperation that we have entitled Knowledge-Based Marketplace (KBM). e-Marketplaces catalogues proceed from a twofold problem of modelling information and knowledge from multiple points of view and from multiple experts. Buyers and sellers each bring complementary expertises to co-construct the catalogues. As a first step, in this paper, we justify and start to develop that model, by studying it through an experiments program of a concrete case of KBM, with a particular interest in the possibilities for new knowledge to emerge from the collaboration processes.
Sinclair & Carroll Co. v. Interchemical Corp. Background Interchemical Corporation asserted that inks made by Sinclair & Carroll Co. infringed U.S. Patent No. 2,087,190 to Albert E. Gessler (assigned to Interchemical Corporation). The District Court held the Gessler patent invalid as anticipated by the prior art. The Circuit Court reversed, holding the patent valid and infringed. The Gessler patent claims an ink that does not dry at room temperature but which will dry almost instantly upon the application of heat. The ink has utility in the printing of magazines and other materials that use smooth non-absorbent paper, and it or similar inks were claimed to have been used in The New Yorker, Collier's and The Saturday Evening Post. Such publications previously required more time for printing, since the reverse side of the paper could not be printed until the first side was dry. Prior to Gessler, many efforts were made to eliminate this delay. The problem was complicated by the fact that the printing presses included a long series of rollers to spread out the ink. Hence, when inks with volatile components were used, they would dry on the rollers before they got to the type. And, if nonvolatile inks were used, they would not dry except by slow oxidation. Gessler's ink combined the qualities of an ink which does not dry on the rollers and one which dries quickly after printing when heat is applied. This characteristic of the ink resulted from the solvent being relatively non-volatile at ordinary room temperature but highly volatile at a temperature of 150°C. The inks containing these solvents enabled magazines to be printed on high-speed rotary presses furnished with heating devices, without interruption for drying. In 1930, Gessler was asked to make an odorless ink, and he selected from a catalog of a chemical manufacturer three solvents which the catalog indicated to be relatively odorless. He tried inks made with each of the compounds as a solvent and decided that butyl carbitol was the most satisfactory, since it did not dry while on the rollers at ordinary temperatures. The company which had requested the odorless ink, however, found that it was unsatisfactory for other reasons and, after some further effort, Gessler stopped trying to solve that problem. Sometime in 1932, however, the same company asked Gessler whether he could supply them with an ink that would be dry after being put over a heating device. Gessler answered that one of the inks he had previously made would do that. Once Gessler learned that steam-heated rollers were used on printing presses, he knew that the inks would dry almost instantaneously. Opinion of the Court According to the Court, an essential requirement for validity of a patent is that the "subject matter display invention, more ingenuity than the work of a mechanic skilled in the art". The Court held that the Gessler patent did not display invention. The Court found that the Gessler patent was "not the product of long and difficult experimentation," and that "reading a list and selecting a known compound to meet known requirements is no more ingenious than selecting the last piece to put into the last opening in a jig-saw puzzle."
/**
 * Composes a space-separated string from a byte array.
 * Each byte is rendered as its signed decimal value; note that the
 * result carries a trailing space.
 */
private String composeString(byte[] bytes) {
    StringBuilder builder = new StringBuilder();
    for (byte b : bytes) {
        builder.append(b);   // the byte widens to int and is appended as a decimal
        builder.append(" ");
    }
    return builder.toString();
}
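A minimal usage sketch (hypothetical caller; assumes the method is reachable, e.g. from a test in the same class):

// Hypothetical snippet; byte values are illustrative.
byte[] payload = {10, -1, 127};
String s = composeString(payload);
// Bytes appear as signed decimals with a trailing space:
// s.equals("10 -1 127 ")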
<reponame>harveycas/project_spring_2020 import setuptools with open("README.md", "r") as fh: long_description = fh.read() setuptools.setup( name="tetrad_analysis", version="0.0.1", author="Catherine_Harvey", author_email="<EMAIL>", description="A package to help analyze tetrads", long_description= "A package to facilitate analyzing the antibiotic markers of yeast tetrads", long_description_content_type="text/markdown", url="https://github.com/harveycas/project_spring_2020", packages=setuptools.find_packages(), install_requires =['xlsxwriter'], classifiers=[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ], python_requires='>=3.6', )
Screening of in silico-identified inhibitors of breast cancer resistance protein (BCRP) transporter in BCRP-overexpressing MCF7 M100 cancer cells The overexpression of breast cancer resistance protein (BCRP) is associated with multidrug resistance (MDR) in specific cancers. BCRP is an ATP-binding cassette (ABC) transporter that is also functionally important in the blood–brain barrier. The ability of BCRP to export cancer chemotherapeutics to subtherapeutic concentrations renders chemotherapy treatment ineffective in cancers that overexpress BCRP. BCRP is also problematic because it can inhibit the delivery of drugs to the brain. There are currently no clinically approved inhibitors of BCRP. The identification of inhibitors of BCRP with potential for clinical use would therefore greatly improve drug delivery to both cancers and the brain.
Looking ahead: Where does Mississippi State women's basketball go from here? PORTLAND, Oregon — Mississippi State women’s basketball knows what heartbreak feels like. This year, though, it’s different. The Bulldogs don’t get to call themselves national finalists. They don’t get to call themselves national semifinalists, either. Head coach Vic Schaefer’s team lost before the Final Four for the first time since 2016. The season ended with an 88-84 loss to No. 2 seed Oregon in the Elite Eight. Schaefer said all year this could have been his best Bulldog bunch ever. This could have been the team that accomplished what no other team in Mississippi State history has. In a way, it was – MSU won the SEC Tournament for the first time ever. But the ultimate prize went uncaptured. Schaefer is still searching for that elusive national championship. Like last year, he’s losing nearly a handful of players who almost had the mettle to bring it home. Also like last year, he has to regroup, reload and try to put together a team capable of maintaining its status as one of the preeminent powers in women’s college basketball. A record-setting senior class is on the way out. Centers Teaira McCowan and Zion Campbell and point guard Jazzmun Holmes won more games at Mississippi State than any other players in program history. Guard Jordan Danberry and forward Anriel Howard joined McCowan and Holmes as transfers to give State four parts of its starting five. Now all of them are leaving. Schaefer said next year’s team will be young. That’s an understatement. As it stands, rising redshirt sophomore Myah Taylor will be the starting point guard. Taylor is talented, but very unpolished. She’s not as dynamic on either end of the floor as Holmes, and she’s taking over for a point guard who finished her career second in program history in assists (476). Since the start of the SEC Tournament, Taylor only averaged roughly four and a half minutes of floor time per game. Rising junior Andra Espinoza-Hunter will likely start at shooting guard. She surely came into her own as a shooter down the stretch this season, but her defense is still a bit shaky. Schaefer isn’t sold on Espinoza-Hunter as a defender; he had to take her out of numerous games because of lapses on that end of the floor. The shot isn’t going anywhere, though, and with another year of experience in Schaefer’s system, Espinoza-Hunter should see improvement defensively. Rising junior Bre'Amber Scott will be available whenever Espinoza-Hunter gets yanked, but Schaefer thinks the latter is improving mightily anyway. Incoming freshman Rickea Jackson, one of the 10 best players in the 2019 recruiting class and the first McDonald’s All-American to ever sign with Mississippi State, should start on the wing opposite Espinoza-Hunter. Jackson, a 6-foot-3 do-it-all player from Detroit, scored a team-high 11 points in the All-American game in Atlanta last week. She led her high school in Michigan to three straight state championships, too. Joining Jackson in the best recruiting class Schaefer has ever signed are Jayla Hemingway, JaMya Mingo-Young, Esmery Martinez and Aliyah Martharu. None of those four crack the starting lineup, though. Neither will rising sophomore Xaria Wiggins, who Schaefer thinks will be a great player at some point in her career. Rising junior Chloe Bibby will probably start at forward, and rising sophomore Jessika Carter will most likely replace McCowan at center.
Former Ole Miss center Promise Taylor, who sat out this season due to transfer rules, will also be a big post presence. With that projected starting five, Mississippi State's offense could look a whole lot different next season. Jackson will probably become the Bulldogs' dominant ball-handler no matter who ends up starting at point guard. She's that good, but she'll also be a true freshman. Schaefer has to find a delicate balance in terms of how much pressure to put on his budding star. Gone are Holmes and Danberry, the two key cogs in Schaefer's usual dribble-drive, weaving offense. Espinoza-Hunter and Bibby should spread the floor with their sharpshooting ability, which could make Mississippi State resemble more of the team that lost in the 2018 national title game. That team, with seniors Blair Schaefer, Roshunda Johnson and Victoria Vivians leading the way, made 278 threes. This year's team made 185. For the first few months of the 2018-19 season, Schaefer constantly said this team was just "different" from teams he's had at State in years past. He'll probably be saying the same when basketball season comes around again this fall. One thing that won't be different, though, is the way Schaefer prepares his players for the grind of another grueling SEC season, one in which the Bulldogs would love to repeat as undisputed conference champions. "We got a locker room in there with some kids who are ready to go to work and get better," Schaefer said immediately after his team's loss to Oregon.
(i) Field to which the invention relates The present invention is with respect to a system for working leads of electrical components, having an inlet unit with a guideway and fixedly supported on a base, two tools designed for operation with said inlet unit and powered by a driving motor, the tools each being positioned on a different one of two tool supports so that they may be moved towards each other for bending and kinking the leads of the said components between them, the leads running into the working path of the tools, which are supported on a carriage designed for a backward and forward working motion, parallel to the inlet unit's guideway and which are able to be moved towards each other for shutting in a direction normal to the inlet direction of the components and in timed relation to the working motion of said carriage. In connection with this apparatus, the wording "supply path" is taken to be the path of the components as supplied by the inlet unit into the range of motion of the tools. (ii) The prior art Electrical components are generally produced with more or less straight leads which have to be bent and kinked for wiring electrical circuits and for being electrically joined. Generally, such leads have to be cut to a desired length by the user. Such operations are generally simple in the case of cylindrical components with leads sticking out of their opposite end faces, because the leads are generally bent in a single direction and are kinked in the same way. For this purpose, apparatus has been put forward in the past, see the German Offenlegungsschrift No. 2,400,307, in which the components are transported by having their two leads taken up in spaces between the teeth of one or more pairs of toothed transport wheels, used for presenting the components to the bending and cutting tool. On the other hand, the present invention is with respect to the processing or working of components whose leads are not placed sticking out from opposite ends of the body of the component but come out of it side-by-side on one side of the component or have been bent to one side of the component body in an earlier stage of processing. Because of their unsymmetrical form, such components may not readily be smoothly and continuously transported and presented to the processing stations. Furthermore, the leads have to be bent and kinked in different directions. In one apparatus of the sort noted at the start of the present specification (see German Offenlegungsschrift No. 2,920,059), the components are moved out of a supply box, for example by way of a normal vibratory conveyer, to an inlet unit having a chute, by which they are then presented to the tools, the chute ending at the position at which the tools come into operation. A compound motion of the tools is produced, such motion taking the form of a shutting motion normal to the length of the opening of the chute and, at the same time, a working motion taking place in the direction of, or towards, the chute. The shutting and working parts of the compound motion take place at the same time, in timed relation to each other, as a backward and forward periodic motion. At the time of the shutting motion, the leads of the components are gripped by the tools, worked or processed and then freed again, while the working motion is responsible for taking the components from the chute and handing them over to an output point, the working of the leads taking place at the time of this forwardly-directed part of the compound working motion. 
Driving of the carriage for producing the working motion takes place by way of an eccentric and pitman, or by a driving cam and a follower roller or the like on the carriage, the necessary power coming from the driving motor. The shutting motion of the tools is produced, in the case of this apparatus (forming the starting point of the present invention), from the working motion, because the driving fingers, joined with the tool support, are guided, in each case, in a double-acting cam, that is to say a cam in which the follower is run in a groove. While it is possible to say that this form of apparatus, forming the starting point of the present invention, has gone down well in the industry, it has turned out that, in general use, producing the shutting motion from the working motion as noted is responsible for serious shortcomings. On the one hand, very exact adjustment is necessary to make certain that the tools are controlled and powered in the desired way smoothly, while, on the other hand, on use in the trade, it is necessary, in the case of such apparatus, for the shutting motion to undergo exact adjustment with respect to the length of the motion and the middle position of the tools in the shut position. Such adjustment is necessary, for example, for exactly producing the desired kink form, or, on changing the tools, for retooling, and other purposes. Adjustment of the shutting motion is still, generally speaking, complex in the case of the apparatus noted, taken as the starting point by my present invention.
Refugees in Bosnia and Herzegovina Growing numbers of Bosnian refugees are returning to their former homes at the scenes of wartime atrocities committed against their communities. The trend is changing the face of post-war Bosnia and slowly reversing the effects of ethnic cleansing. Property-law changes, the capture of war criminals and a more assertive approach on the part of the international community have all encouraged the returnees. But with considerable funding still required to build new houses in war-ravaged areas, donor fatigue could stall the process.
import Checkbox from './checkbox'; export { CheckboxProps } from './checkbox'; export default Checkbox;
<gh_stars>1-10 package com.exasol; import java.math.BigDecimal; import java.sql.Date; import java.sql.Timestamp; /** * This interface enables UDF scripts to iterate over input data and to emit * output. */ public interface ExaIterator { /** * @return number of input rows for for this script. * If this is a "SET" UDF script, it will return the number of rows for the current group. * If this is a "SCALAR" UDF script, it will return the total number of rows to be processed by this JVM instance. */ public long size() throws ExaIterationException; /** * Increases the iterator to the next row of the current group, if there is a new row. * It only applies for "SET" UDF scripts. The iterator initially points to the first row, * so call this method after processing a row. * * The following code can be used to process all rows of a group: * <blockquote><pre> * public static void run(ExaMetadata meta, ExaIterator iter) throws Exception { * do { * // access data here, e.g. with iter.getString("MY_COLUMN"); * } while (iter.next()); * } * </pre></blockquote> * * @return true, if the is a next row and the iterator was increased to it, false, * if there is no more row for this group */ public boolean next() throws ExaIterationException; /** * Resets the iterator to the first input row. This is only allowed for "SET" UDF scripts. */ public void reset() throws ExaIterationException; /** * Emit an output row. This is only allowed for "SET" UDF scripts. * Note that you can emit using multiple function arguments or an object array: * <blockquote><pre> * iter.emit(1, "a"); * iter.emit(new Object[] {1, "a"}); * </pre></blockquote> */ public void emit(Object... values) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as an Integer object. * This can be used for the data type DECIMAL(p,0). * * @param column index of the column, starting with 0 */ public Integer getInteger(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as an Integer object. * This can be used for the data type DECIMAL(p,0). * * @param name name of the column */ public Integer getInteger(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Long object. * This can be used for the data type DECIMAL(p,0). * * @param column index of the column, starting with 0 */ public Long getLong(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Long object. * This can be used for the data type DECIMAL(p,0). * * @param name name of the column */ public Long getLong(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a BigDecimal object. * This can be used for the data type DECIMAL(p,0) and DECIMAL(p,s). * * @param column index of the column, starting with 0 */ public BigDecimal getBigDecimal(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a BigDecimal object. * This can be used for the data type DECIMAL(p,0) and DECIMAL(p,s). * * @param name name of the column */ public BigDecimal getBigDecimal(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Double object. * This can be used for the data type DOUBLE. 
* * @param column index of the column, starting with 0 */ public Double getDouble(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Double object. * This can be used for the data type DOUBLE. * * @param name name of the column, starting with 0 */ public Double getDouble(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a String object. * This can be used for the data type VARCHAR and CHAR. * * @param column index of the column, starting with 0 */ public String getString(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a String object. * This can be used for the data type VARCHAR and CHAR. * * @param name name of the column */ public String getString(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Boolean object. * This can be used for the data type BOOLEAN. * * @param column index of the column, starting with 0 */ public Boolean getBoolean(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a Boolean object. * This can be used for the data type BOOLEAN. * * @param name name of the column */ public Boolean getBoolean(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a {@link java.sql.Date} object. * This can be used for the data type DATE. * * @param column index of the column, starting with 0 */ public Date getDate(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a {@link java.sql.Date} object. * This can be used for the data type DATE. * * @param name name of the column */ public Date getDate(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a {@link java.sql.Timestamp} object. * This can be used for the data type TIMESTAMP. * * @param column index of the column, starting with 0 */ public Timestamp getTimestamp(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row as a {@link java.sql.Timestamp} object. * This can be used for the data type TIMESTAMP. * * @param name name of the column */ public Timestamp getTimestamp(String name) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row. * This can be used for all data types. You have to cast the value appropriately. * * @param column index of the column, starting with 0 */ public Object getObject(int column) throws ExaIterationException, ExaDataTypeException; /** * @return value of the specified column of the current row. * This can be used for all data types. You have to cast the value appropriately. * * @param name name of the column */ public Object getObject(String name) throws ExaIterationException, ExaDataTypeException; }
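A minimal sketch of a "SET" UDF built on this interface, following the iteration pattern shown in the next() documentation; the class name and column name are illustrative, not part of the interface:

// Hypothetical UDF: concatenates the VARCHAR column "MY_COLUMN" for each group.
class GroupConcat {
    public static void run(ExaMetadata meta, ExaIterator iter) throws Exception {
        StringBuilder sb = new StringBuilder();
        do {
            sb.append(iter.getString("MY_COLUMN")); // read the current row
        } while (iter.next());                      // advance within the group
        iter.emit(sb.toString());                   // one output row per group
    }
}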
import { ExpressionType } from '../expression'; import { TemplateType, ExpressionTemplate } from '../template'; export function expr( expression: ExpressionTemplate['expression'] ): ExpressionTemplate { return { type: TemplateType.Expression, expression, }; } export function useContext<T = unknown>() { return (nameOrGetter: keyof T | Function, ...deps: (keyof T)[]) => { if (typeof nameOrGetter === 'function') return expr({ type: ExpressionType.Function, func: nameOrGetter, deps }); else return expr({ type: ExpressionType.Property, name: nameOrGetter, }); }; }
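A usage sketch for the two branches of useContext (the State shape and field names are hypothetical):

interface State {
  firstName: string;
  lastName: string;
}

const prop = useContext<State>();

// Property expression template:
const first = prop('firstName');

// Function expression template with explicit dependencies:
const fullName = prop(
  (s: State) => `${s.firstName} ${s.lastName}`,
  'firstName',
  'lastName'
);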
/** * cairo_in_stroke: * @cr: a cairo context * @x: X coordinate of the point to test * @y: Y coordinate of the point to test * * Tests whether the given point is inside the area that would be * affected by a cairo_stroke() operation given the current path and * stroking parameters. Surface dimensions and clipping are not taken * into account. * * See cairo_stroke(), cairo_set_line_width(), cairo_set_line_join(), * cairo_set_line_cap(), cairo_set_dash(), and * cairo_stroke_preserve(). * * Return value: A non-zero value if the point is inside, or zero if * outside. **/ cairo_bool_t cairo_in_stroke (cairo_t *cr, double x, double y) { cairo_status_t status; cairo_bool_t inside = FALSE; if (cr->status) return 0; status = _cairo_gstate_in_stroke (cr->gstate, cr->path, x, y, &inside); if (status) _cairo_set_error (cr, status); return inside; }
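A short sketch of the intended call pattern (assumes an existing cairo_t *cr; the path and coordinates are illustrative):

/* Build a path, then hit-test a point against its would-be stroke. */
cairo_set_line_width (cr, 4.0);
cairo_rectangle (cr, 10.0, 10.0, 80.0, 60.0);

if (cairo_in_stroke (cr, 10.0, 40.0)) {
    /* (10, 40) lies on the rectangle's left edge, within the stroke width. */
}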
/** * Derive macro for index zome code generator. * * Generates a complete, self-contained zome def. * * @package hdk_semantic_indexes * @author pospi <<EMAIL>> * @since 2021-10-10 */ extern crate proc_macro; use self::proc_macro::TokenStream; use quote::{quote, format_ident}; use syn::{ parse_macro_input, AttributeArgs, Data, DataStruct, DeriveInput, Fields, Type, TypePath, PathSegment, PathArguments::AngleBracketed, AngleBracketedGenericArguments, GenericArgument, punctuated::Punctuated, token::Comma, }; use darling::FromMeta; use convert_case::{Case, Casing}; #[derive(Debug, FromMeta)] struct MacroArgs { #[darling(default)] query_fn_name: Option<String>, } #[proc_macro_attribute] pub fn index_zome(attribs: TokenStream, input: TokenStream) -> TokenStream { let raw_args = parse_macro_input!(attribs as AttributeArgs); let args = match MacroArgs::from_list(&raw_args) { Ok(v) => v, Err(e) => { return TokenStream::from(e.write_errors()); } }; let input = parse_macro_input!(input as DeriveInput); let fields = match &input.data { Data::Struct(DataStruct { fields: Fields::Named(fields), .. }) => &fields.named, _ => panic!("expected a struct with named fields"), }; // build toplevel variables for generated code let record_type = &input.ident; let record_type_str_attribute = record_type.to_string().to_case(Case::Snake); let record_type_index_attribute = format_ident!("{}_index", record_type_str_attribute); let record_read_api_method_name = format_ident!("get_{}", record_type_str_attribute); let exposed_query_api_method_name = match &args.query_fn_name { None => format_ident!("query_{}s", record_type_str_attribute), Some(query_fn) => format_ident!("{}", query_fn), }; let record_index_field_type = format_ident!("{}Address", record_type.to_string().to_case(Case::UpperCamel)); // build iterators for generating index update methods and query conditions let all_indexes = fields.iter() .map(|field| { let relationship_name = field.ident.as_ref().unwrap().to_string().to_case(Case::Snake); let path = match &field.ty { Type::Path(TypePath { path, .. }) => path, _ => panic!("expected index type of Local or Remote"), }; let (index_type, args) = match path.segments.first() { Some(PathSegment { arguments: AngleBracketed(AngleBracketedGenericArguments { args, .. }), ident, .. }) => (ident, args), _ => panic!("expected parameterised index with <related_record_type, relationship_name>"), }; assert_eq!(args.len(), 2, "expected 2 args to index defs"); let mut these_args = args.to_owned(); let related_relationship_name: String = next_generic_type_as_string(&mut these_args).to_case(Case::Snake); let related_record_type: String = next_generic_type_as_string(&mut these_args); let related_index_field_type = format_ident!("{}Address", related_record_type.to_case(Case::UpperCamel)); let related_index_name = format_ident!("{}_{}", record_type_str_attribute, relationship_name); let related_record_type_str_attribute = related_record_type.to_case(Case::Snake); let reciprocal_index_name = format_ident!("{}_{}", related_record_type_str_attribute, related_relationship_name); ( index_type, relationship_name, related_record_type_str_attribute, related_index_field_type, related_index_name, reciprocal_index_name, ) }); let index_accessors = all_indexes.clone() .map(|( _index_type, relationship_name, _related_record_type_str_attribute, related_index_field_type, related_index_name, _reciprocal_index_name, )| { let local_dna_read_method_name = format_ident!("_internal_read_{}_{}", record_type_str_attribute, relationship_name); quote! 
{ #[hdk_extern] fn #local_dna_read_method_name(ByAddress { address }: ByAddress<#record_index_field_type>) -> ExternResult<Vec<#related_index_field_type>> { Ok(read_index(&stringify!(#record_type_str_attribute), &address, &stringify!(#related_index_name))?) } } }); let index_mutators = all_indexes.clone() .map(|( index_type, relationship_name, related_record_type_str_attribute, related_index_field_type, related_index_name, reciprocal_index_name, )| { // :TODO: differentiate Local/Remote indexes as necessitated by final HC core APIs let dna_update_method_name = match index_type.to_string().as_ref() { "Local" => format_ident!("_internal_index_{}_{}", record_type_str_attribute, relationship_name), "Remote" => format_ident!("index_{}_{}", record_type_str_attribute, relationship_name), _ => panic!("expected index type of Local or Remote"), }; quote! { #[hdk_extern] fn #dna_update_method_name(indexes: RemoteEntryLinkRequest<#related_index_field_type, #record_index_field_type>) -> ExternResult<RemoteEntryLinkResponse> { let RemoteEntryLinkRequest { remote_entry, target_entries, removed_entries } = indexes; Ok(sync_index( &stringify!(#related_record_type_str_attribute), &remote_entry, &stringify!(#record_type_str_attribute), target_entries.as_slice(), removed_entries.as_slice(), &stringify!(#reciprocal_index_name), &stringify!(#related_index_name), )?) } } }); let query_handlers = all_indexes .map(|( _index_type, relationship_name, related_record_type_str_attribute, _related_index_field_type, _related_index_name, reciprocal_index_name, )| { let query_field_ident = format_ident!("{}", relationship_name); quote! { match &params.#query_field_ident { Some(#query_field_ident) => { entries_result = query_index::<ResponseData, #record_index_field_type, _,_,_,_,_,_>( &stringify!(#related_record_type_str_attribute), #query_field_ident, &stringify!(#reciprocal_index_name), &read_index_target_zome, &READ_FN_NAME, ); }, _ => (), }; } }); TokenStream::from(quote! { use hdk::prelude::*; use hdk_semantic_indexes_zome_lib::*; // unrelated toplevel zome boilerplate entry_defs![PathEntry::entry_def()]; // :TODO: obviate this with zome-specific configs #[derive(Clone, Serialize, Deserialize, SerializedBytes, PartialEq, Debug)] pub struct DnaConfigSlice { pub #record_type_index_attribute: IndexingZomeConfig, } // zome properties access helper fn read_index_target_zome(conf: DnaConfigSlice) -> Option<String> { Some(conf.#record_type_index_attribute.record_storage_zome) } // define struct to wrap query parameter inputs, so that other meta-args (eg. 
pagination) can be added later #[derive(Debug, Serialize, Deserialize)] struct SearchInputs { pub params: QueryParams, } // define zome API function name to read indexed records const READ_FN_NAME: &str = stringify!(#record_read_api_method_name); // public zome API for reading indexes to determine related record IDs #( #index_accessors )* // public zome API for updating indexes when associated records change #( #index_mutators )* // define query results structure as a flat array which separates errors into own list #[derive(Debug, Serialize, Deserialize)] struct QueryResults { #[serde(default)] pub results: Vec<ResponseData>, // :TODO: pagination metadata #[serde(default)] #[serde(skip_serializing_if = "Vec::is_empty")] pub errors: Vec<WasmError>, } // declare public query method with injected handler logic #[hdk_extern] fn #exposed_query_api_method_name(SearchInputs { params }: SearchInputs) -> ExternResult<QueryResults> { let mut entries_result: RecordAPIResult<Vec<RecordAPIResult<ResponseData>>> = Err(DataIntegrityError::EmptyQuery); // :TODO: proper search combinator logic, this just does exclusive boolean ops #( #query_handlers )* let entries = entries_result?; Ok(QueryResults { results: entries.iter() .cloned() .filter_map(Result::ok) .collect(), errors: entries.iter() .cloned() .filter_map(Result::err) .map(|err| { WasmError::from(err) }) .collect(), }) } }) } fn next_generic_type_as_string(args: &mut Punctuated<GenericArgument, Comma>) -> String { match args.pop().unwrap().value() { GenericArgument::Type(Type::Path(TypePath { path, .. })) => path.get_ident().unwrap().to_string(), _ => panic!("expecting a Type argument of length 1"), } }
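Judging from how the macro parses its input (a struct whose named fields have types of the form Local<RelatedRecordType, reciprocal_relationship_name> or Remote<...>), an invocation might look like the sketch below. The record and relationship names are purely illustrative, and the surrounding crate is assumed to provide QueryParams and ResponseData in scope:

// Hypothetical index zome definition; names are illustrative only.
#[index_zome(query_fn_name = "query_observations")]
struct Observation {
    fulfills: Local<fulfillment, fulfilled_by>,
    realized_by: Remote<commitment, realizes>,
}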
/**
 * Class to maintain the mapping configuration for the PATCH operation needed by the Submission process
 *
 * @author Luigi Andrea Pascarelli (luigiandrea.pascarelli at 4science.it)
 */
public class PatchConfigurationService {

    private Map<String, Map<String, PatchOperation>> map;

    public Map<String, Map<String, PatchOperation>> getMap() {
        return map;
    }

    public void setMap(Map<String, Map<String, PatchOperation>> map) {
        this.map = map;
    }
}
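A wiring sketch for the two-level map. Only the map shape comes from the class itself; the operation keys, the path key, and the AddPatchOperation/RemovePatchOperation handlers are hypothetical:

import java.util.HashMap;
import java.util.Map;

// Hypothetical configuration: path -> (operation name -> handler).
Map<String, PatchOperation> operations = new HashMap<>();
operations.put("add", new AddPatchOperation());
operations.put("remove", new RemovePatchOperation());

Map<String, Map<String, PatchOperation>> byPath = new HashMap<>();
byPath.put("/sections/traditionalpageone", operations);

PatchConfigurationService service = new PatchConfigurationService();
service.setMap(byPath);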
Last week’s “first of the ISIS war” combat death for a US soldier in Iraq gave way to admissions, over the past two days, that the Pentagon is engaged in ground combat as a fairly regular matter during what officials have presented as an exclusively “advisory” deployment. It’s also apparently not new. Apparently determined to rebut the charge of “mission creep” in the war, officials are now conceding that they’ve been engaged in secret ground combat for months, and that this therefore isn’t mission creep but rather a transition to public admission of what they’ve been doing all along. Officials also made reference to a US special operations office being run out of the Kurdish capital of Irbil, saying the matter was kept so highly classified that even the name of the office itself is considered a state secret that won’t be released. Sen. Bob Corker (R – TN), head of the Foreign Relations Committee, downplayed the seriousness of the White House carrying out a secret ground war even as it was publicly telling the American people that no ground combat would ever happen in Iraq, saying “it’s the way our government is set up.” Corker did, however, express concern about the lack of information given to Congress about the scope of the special operations ground combat, saying that Congress isn’t “even close to fully knowledgeable as to what is happening.” That apparently even leaves open the question of whether last week’s death was the first “combat casualty” of the war, as officials are now suggesting that at least five American ground soldiers have been wounded in Iraq over the course of the war, and the details of all of those incidents are being kept secret. Sgt. Joshua Wheeler’s death last week appears to have been the first actual death of the conflict, and covering that up appears to have been a step too far for the Pentagon leadership. This, at least, is the public explanation for why the Pentagon went from “ruling out” combat to insisting a ground war was self-evident in the matter of about 48 hours. It may be too soon to rule out mission creep as well, however: even if the US has been in secret ground combat for months, that doesn’t mean the sudden admission of limited ground combat might not signal that the “secret” part of the war is going to transition into something even more aggressive.
#include<cstdio> int vc[27]; int main () {int n,i,s=0; char a; scanf("%d%c",&n,&a); for(i=1;i<=2*n-2;i++) {scanf("%c",&a); if(a<='z'&&a>='a') vc[a-'a'+1]++; else if(vc[a-'A'+1]>0) vc[a-'A'+1]--; else s++; } printf("%d",s); return 0; }
Angus cattle

Scotland
Aberdeen Angus cattle have been recorded in Scotland since at least the 16th century in the country's northeast. For some time before the 1800s, the hornless cattle in Aberdeenshire and Angus were called Angus doddies. In 1824, William McCombie of Tillyfour, MP for South Aberdeenshire, began to improve the stock and is regarded today as the father of the breed. Many local names emerged, including doddies or hummlies. The first herd book was created in 1862, and the society was formed in 1879. This is considered late, given that the cattle gained mainstream acceptance in the middle of the eighteenth century. The cattle became commonplace throughout the British Isles in the middle of the 20th century.

Argentina
As stated in the fourth volume of the Herd Book of the UK's Angus, this breed was introduced to Argentina in 1879 when "Don Carlos Ortega" imported one bull and two cows for his Estancia "Charles", located in Juancho, Partido de General Madariaga, Provincia de Buenos Aires. The bull, born on 19 April 1878, was named "Virtuoso 1626" and raised by Colonel Ferguson. The cows were named "Aunt Lee 4697", raised by J. James, and "Cinderela 4968", raised by R. Walker; both were born in 1878, on 31 January and 23 April respectively.

Australia
Angus cattle were first introduced to Tasmania (then known as Van Diemen's Land) in the 1820s and to the southern mainland in 1840. The breed is now found in all Australian states and territories, with 62,000 calves registered with Angus Australia in 2010.

Canada
In 1876 William Brown, a professor of agriculture and then superintendent of the experimental farm at Guelph, Ontario, was granted permission by the government of Ontario to purchase Aberdeen Angus cattle for the Ontario Agricultural College. The herd comprised a yearling bull, Gladiolus, a cow, Eyebright, bred by the Earl of Fife, and a cow, Leochel Lass 4th, bred by R.O. Farquharson. On 12 January 1877, Eyebright gave birth to a calf, sired by Sir Wilfrid. It was the first to be born outside of Scotland. The OAC went on to import additional bulls and cows, and eventually began selling Aberdeen Angus cattle in 1881.

United States
On 17 May 1873, George Grant brought four Angus bulls, without any cows, to Victoria, Kansas. These were seen as unusual, as American cattle at the time consisted of Shorthorns and Longhorns, and the bulls were used only in crossbreeding. However, the farmers noticed the good qualities of these bulls, and afterwards many more cattle of both sexes were imported. On 21 November 1883, the American Angus Association was founded in Chicago, Illinois. The first herd book was published in March 1885. At this time both red and black animals were registered without distinction. However, in 1917 the Association barred the registering of red and other coloured animals in an effort to promote a solid black breed. The Red Angus Association of America was founded in 1954 by breeders of Red Angus cattle. It was formed because the breeders had had their cattle struck off the herd book for not conforming to the changed breed standard regarding colour.

Germany
A separate breed, called the German Angus, was crossbred in Germany. It is a cross between the Angus and several different breeds, such as German Black Pied Cattle, Gelbvieh, and Fleckvieh. The cattle are usually larger than the Angus and appear in black and red colours.
Characteristics
Because of their native environment, the cattle are very hardy and can survive the Scottish winters, which are typically harsh, with snowfall and storms. Cows typically weigh 550 kilograms (1,210 lb) and bulls weigh 850 kilograms (1,870 lb). Calves are usually born smaller than is acceptable for the market, so crossbreeding with dairy cattle is needed for veal production. The cattle are naturally polled and black in colour. They typically mature earlier than other native British breeds such as the Hereford or North Devon. However, in the middle of the 20th century a new strain of cattle called the Red Angus emerged. The United States does not accept Red Angus cattle into herd books, while the UK and Canada do. Except for their colour genes, there is no genetic difference between black and red Angus, but they are regarded as different breeds in the US. There have also been claims, so far unconfirmed, that Black Angus are more tolerant of cold weather. The cattle have a large muscle content and are regarded as medium-sized. The meat is very popular in Japan for its marbling qualities.

Genetic disorders
There are four recessive defects that can affect calves worldwide. A recessive defect occurs when both parents carry a recessive gene that affects the calf. When both parents carry the defective gene, on average one in four calves will show the defect, since each parent passes on the defective copy with a probability of 1/2 (1/2 × 1/2 = 1/4). The four recessive defects in the Black Angus breed that are currently managed with DNA tests are arthrogryposis multiplex (AM), referred to as curly calf, which lowers the mobility of joints; neuropathic hydrocephalus (NH), sometimes known as water head, which causes an enlarged malformed skull; contractural arachnodactyly (CA), formerly referred to by the name of "fawn calf syndrome", which reduces mobility in the hips; and dwarfism, which affects the size of calves. Both parents need to carry the genes for a calf to be affected with one of these disorders. Because of this, the American Angus Association will remove carrier cattle from the breed in an effort to reduce the number of cases. Between 2008 and 2010, the American Angus Association reported worldwide recessive genetic disorders in Angus cattle. It has been shown that a small minority of Angus cattle can carry osteoporosis. A further defect called notomelia, a form of polymelia ("many legs"), was reported in the Angus breed in 2010.

Uses
The main use of Angus cattle is for beef production and consumption. The beef can be marketed as superior due to its marbled appearance. This has led many markets, including Australia, Japan and the United Kingdom, to adopt it into the mainstream. Angus cattle can also be used in crossbreeding to reduce the likelihood of dystocia (difficult calving), and because of their dominant polled gene, they can be used in crossbreeding to create polled calves.
Wireless communication is a virtual necessity in today's society. People use cordless phones, cellular phones, wireless data communication devices, etc. on a daily basis. The ability to communicate wirelessly has become pervasive in homes, businesses, retail establishments, and in the outdoors generally. Consequently, people can now communicate while in transit and in almost any environment. Wireless communication involves the use of a limited resource: the electromagnetic spectrum. Different wireless communication schemes involve using different bands or segments of the electromagnetic spectrum in different manners. Typically, each particular segment of the electromagnetic spectrum is utilized in accordance with a wireless standard that has been created by a government entity and/or an industry consortium. There are many wireless standards under which wireless devices operate today. Example wireless standards include, but are not limited to, Bluetooth, Digital Enhanced Cordless Telecommunications (DECT), Code Division Multiple Access (CDMA)-2000, Wideband-CDMA (WCDMA), Wi-Fi, WiMAX, and so forth. Wireless standards that have a marketing-oriented name typically also have a corresponding more technical name for the standard. For example, the term “Wi-Fi” is usually considered to correspond to at least the IEEE 802.11(a), (b), and (g) standards. Similarly, the term “WiMAX” is usually considered to correspond to at least a subset of the IEEE 802.16 standard. Devices that operate in accordance with any of these or many other standards can generally receive and transmit electromagnetic signal waves. The power involved in the transmission and reception of the signals is usually regulated to avoid wasting power at the device and to avoid unnecessary interference between competing electromagnetic signal waves that are simultaneously traveling through the same airspace. Consequently, measuring the energy of a received signal wave is a common aspect of the signal wave reception process.
// -*- C++ -*- /*! \file Node_OneIncCell.h \brief Class for a vertex in a mesh. */ #if !defined(__geom_mesh_simplicial_Node_OneIncCell_h__) #define __geom_mesh_simplicial_Node_OneIncCell_h__ #include "Node_VertSelfId.h" namespace geom { //! Vertex in a simplicial mesh that stores one cell incidence. /*! \param Mesh is the simplicial mesh. The base class has the Cartesian point and a vertex iterator. This class stores an iterator to one incident cell. */ template <class Mesh> class Node_OneIncCell : public Node_VertSelfId<Mesh> { // // Private types. // private: typedef Node_VertSelfId<Mesh> base_type; // // Enumerations. // public: //! The space dimension. enum { N = base_type::N }; // // Public types. // public: //! The simplicial mesh. typedef Mesh mesh_type; //! An iterator to a cell. typedef typename mesh_type::cell_iterator cell_iterator; //! An iterator to a const cell. typedef typename mesh_type::cell_const_iterator cell_const_iterator; //! An iterator to a node. typedef typename base_type::node_iterator node_iterator; //! An iterator to a const node. typedef typename base_type::node_const_iterator node_const_iterator; //! The number type. typedef typename base_type::number_type number_type; //! A vertex (a Cartesian point). typedef typename base_type::vertex_type vertex_type; // // Data // private: //! An incident cell. cell_iterator _cell; public: //-------------------------------------------------------------------------- //! \name Constructors and Destructor. //! @{ //! Default constructor. Uninitialized point, and null identifier and iterators. Node_OneIncCell() : base_type(), _cell(0) {} //! Construct from a point, an identifier, and a cell iterator. Node_OneIncCell(const vertex_type& point, const std::size_t identifier = -1, const cell_iterator cell = 0) : base_type(point, identifier), _cell(cell) {} //! Build from a point, an identifier, and a cell iterator. void build(const vertex_type& point, const std::size_t identifier = -1, const cell_iterator cell = 0) { base_type::build(point, identifier); _cell = cell; } //! Copy constructor. Node_OneIncCell(const Node_OneIncCell& x) : base_type(x), _cell(x._cell) {} //! Destructor. ~Node_OneIncCell() {} //! @} //-------------------------------------------------------------------------- //! \name Assignment operators. //! @{ //! Assignment operator. Node_OneIncCell& operator=(const Node_OneIncCell& x) { if (&x != this) { base_type::operator=(x); _cell = x._cell; } return *this; } //! @} //-------------------------------------------------------------------------- //! \name Accessors. //! @{ //! Return the vertex. const vertex_type& vertex() const { return base_type::vertex(); } //! Return the identifier of this node. /*! Typically, the identifier is in the range [0...num_vertices). and a value of -1 indicates that the identifier has not been calculated. */ std::size_t identifier() const { return base_type::identifier(); } //! Return a const iterator to itself. node_const_iterator self() const { return base_type::self(); } //! Return a const iterator to an incident cell. cell_const_iterator cell() const { return _cell; } //! @} //-------------------------------------------------------------------------- //! \name Manipulators. //! @{ //! Set the vertex. void set_vertex(const vertex_type& vertex) { base_type::set_vertex(vertex); } //! Set the identifier. /*! \note This member function is const because the identifier is mutable. It is intended to be modified as the mesh changes. 
*/ void set_identifier(const std::size_t identifier) const { base_type::set_identifier(identifier); } //! Return an iterator to itself. node_iterator self() { return base_type::self(); } //! Set the iterator to the derived node. void set_self(const node_iterator v) { base_type::set_self(v); } //! Return an iterator to an incident cell. cell_iterator cell() { return _cell; } //! Add an adjacent cell. void add_cell(const cell_iterator c) { _cell = c; } //! Remove an adjacent cell. void remove_cell(const cell_iterator c) { // Look for another adjacent cell. // CONTINUE; assert(false); } //! @} //-------------------------------------------------------------------------- //! \name Equality. //! @{ //! Return true if this is equal to \c x. bool operator==(const Node_OneIncCell& x) const { return base_type::operator==(x) && _cell == x._cell; } //! Return true if this is not equal to \c x. bool operator!=(const Node_OneIncCell& x) const { return ! operator==(x); } //! @} //-------------------------------------------------------------------------- //! \name File I/O. //! @{ //! Write the vertex, identifier, and the incident cell identifier. void put(std::ostream& out) const { // Write the point and the vertex identifier. base_type::put(out); // Write the incident cell identifier. out << " " << _cell->identifier(); } //! @} }; } // namespace geom #endif
# Build a parsed METAR from a fixed sample report plus a runway-state group,
# using the python-metar package.
from metar import Metar


def report(runway_state):
    """Return a parsed METAR with `runway_state` appended to a sample report."""
    sample_metar = (
        "EGNX 191250Z VRB03KT 9999 -RASN FEW008 SCT024 "
        "BKN046 M01/M03 Q0989"
    )
    return Metar.Metar(sample_metar + " " + runway_state)
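A parsed report can then be inspected through python-metar's accessors; the runway-state group below is just an illustrative value, not taken from the original snippet.

m = report("8849//91")        # illustrative runway-state group
print(m.station_id)           # -> EGNX
print(m.temp, m.dewpt)        # parsed temperature and dew point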
Titrated oral misoprostol solution versus vaginal misoprostol for labor induction in primigravid women

It is now generally accepted that the uterine cervix plays an important role during pregnancy and labor, and that successful labor induction depends on an active ripening process within the cervix. The aim of this study was to test the safety and efficacy of titrated oral misoprostol compared with vaginal misoprostol for labor induction in primigravid women at term. This prospective, single-blinded, randomized controlled clinical trial was conducted at Ain-Shams University Maternity Hospital between August 2017 and August 2018. A total of 120 pregnant women planned for induction of labor were recruited according to the inclusion and exclusion criteria and randomized into two groups: the first group received 200 µg of oral misoprostol in 200 ml of water, titrated over 24 hours, together with placebo vaginal tablets resembling Vagiprost tablets (25 µg misoprostol); the second group received 25 µg of vaginal misoprostol (a maximum of four doses) together with a placebo solution of 200 ml of tap water. Titrated oral misoprostol was as effective as intravaginal misoprostol in promoting cervical ripening and inducing labor, and had a similar maternal and perinatal safety profile. This new approach to oral misoprostol administration minimized the risk of uterine hyperstimulation, which has been a feature of misoprostol use for induction of labor, at the expense of a somewhat slower response.
Q: How could the government not prevent a medical company from helping terrorists?

Recently, the government has found out that the biggest medical corporation (responsible for plenty of technological progress) has been financing and providing material aid to an organization responsible for multiple attacks and thousands of deaths.

This is a futuristic world, so thanks to this company people live longer, a lot of diseases are taken care of, prosthetics work like natural limbs, and nerve-related problems like paraplegia can be fixed... Needless to say, the company is viewed as a saviour and a philanthropist. Also, this government rules a sort of interplanetary federation, so there is room for secret activities and secret facilities on remote planets.

Due to the large amount of money this company has, it maintains a private security force, but one that cannot rival a full-size army. This security would be enough to delay an assault on a facility, but the outcome of an assault would always be in favour of the government. The terrorism funding needs to be possible, whatever the means. After finding this out, the government will be watching the company's payments closely.

The terrorist group in question is a small group of very competent fighters looking to expose government corruption and bad actions. They are extremely good at keeping themselves concealed. The funding decision is known only to the CEO and a very few highly ranked executives, who are secretly part of the terrorist organization. Most of the company does not know about this. In fact, there is a whole branch dedicated to helping these terrorists, but it is kept very secret and highly protected by the security force.

The connection between the company and the terrorist group has been discovered by a highly placed governor whose shady plans have been thwarted by the terrorists. He is therefore trying to use his power to destroy the terrorists by any means, hence his focus on the funding. He has also found out that the CEO is part of the organization (in fact, he leads it, though this remains unknown), but he has not found out anything about the complicit executives. Therefore, even if the CEO is removed from power, he can still manage things from the outside thanks to the help of these executives.

The government is a corrupt democracy, but the corruption remains unknown to the public. In fact, it is the terrorists' goal to expose this corruption and the use of dirty tactics. Since the government wants to keep a good image, if the public were to find out about the governor's shady plans, he would definitely lose his job, hence his determination to bring down the terrorists and their funding.

How could the government not prevent a medical company from helping terrorists?

A: Because the crime was committed by one or a few individuals. Or...

A company is not an agent. A company is not a person with a plan and a will. Crimes are committed by persons, not companies. Only persons can be found guilty of a crime; hence a company cannot commit a crime. When you hear of a company being dragged before a court, it is always a civil proceeding and never a criminal one.

If one or a few persons are found to have funnelled company assets into terrorist causes, then those persons will be charged with that crime. These persons will then, of course, be fired from the company, and the company's lawyers will seek to get the stolen money back from them.

The point is: a company does not act. People act, and thus people get charged with committing crimes, not companies.
"But what if the CEO was in on it?" Then the board of directors fire the CEO and seek damages from them. "But what if all of the board of directors were in on it?" Then there will be a shareholder's meeting where they appoint a new board of directors, and seek damages from the board of directors. "But what if everyone was in on it?!" Then all of the assets of the company will be confiscated by the government, since the government can confiscate assets of convicted criminals if those assets have been used in criminal activity. These assets will then be auctioned out. Someone can buy the company assets and restructure the company, with a new board of directors. "But I really want a criminal company, and that the government knows it!" Ok fine... if your really want to disable the boring legal stuff that prevents you from making a spicy story, then the answer is: jurisdiction. As I explained in this answer: jurisdiction is a hairy issue when you are out planet-hopping. If you thought that catching criminals and getting them extradited was tricky on Earth — with hostile nations granting immunity to wanted criminals (*) — it is going to be a right tangled mess out in space when things are spread out over different planets. Or — even better — what if CrimeInSpace Crop. does not even reside on a planet but their entire operation is mobile, on large factory ships. There simply is no jurisdiction that can touch them. One final note... If the company is out in truly lawless territory... then so is the government in question, and it is only accountability towards the constituents ("That has to be approved by Congress!") that prevents them from sending a hit-squad towards the company. If the company is sheltered in another jurisdiction however, then it becomes trickier because the government has to respect the sovereignty of other nations. (*) Leon Klinghoffer was murdered during the Achille Lauro hijacking in 1985 A: It cannot be shutdown or reigned in simply because it is too big to fail. Sure, the medical company is providing aid to an organization that has caused thousands of deaths, but the company is also providing aid that is preventing millions of deaths. If it were to suddenly close, who would take over care of the patients? Make the prescription medicine? Run the hospitals? Maintain the prosthetic limbs? Finish the research on groundbreaking cures that were just around the corner? And of course, if the government suddenly shut them down or tried disrupting their moneymaking operations the company's public relations team would have a field day running attack ads against whoever was responsible. No honest politician would want to be seen as the evil person who took away grandma's medicine, and no corrupt politician would want their dirty secrets broadcast across the world. A: Q: Why would you not blow up a road terrorists are using? A: The road will still be useful after terrorists are gone. The same applies here. The medical devices produced by this company are not terror devices. They are helpful. Ordinary people have jobs here. The company and its income are being used as a front / financing by terrorists. The terrorists need to go, not the company or income. Just as when the government takes over a failing school it does not blow up the school, when the government decides to oust the terrorists from leadership of this company it will do so gently and retain all functional and useful structures. Many managers and certainly the rank and file working here are not terrorists. 
The board of directors (if it is a public company) are probably not terrorists either. The third-in-command will serve as interim chief while a new outside CEO is recruited. Business as usual for big business.

The company will have a gentle leadership change under the direction of the government. CEOs get ousted all the time, with some plausible reason offered up to the public. Probably the real reason is political, or the CEO is a drunk, or some major stockholders got pissed, or the CEO is a terrorist.
Q: Are chicken gizzards okay to eat if still pink inside?

I sautéed chicken gizzards for quite a while, then added water and boiled them for another 15 minutes, but they are still pink inside. Are they safe to eat even though they are pink inside?

A: After boiling for that period of time (especially after sautéing), the gizzards have certainly reached a "safe" temperature. They are probably not very good eating, though. Gizzards are a tough piece of meat; they benefit from a low-and-slow cooking process. Here's a pretty good article from Livestrong. Among other advice, they suggest braising (or boiling) first for 15 or so minutes, then searing at a very high heat. Here's a pretty typical recipe for Southern Fried Chicken Gizzards. It starts with simmering the gizzards for 45 minutes or longer, then cooling, then breading.

Safety really isn't the issue here, nor is color; after 15 minutes of boiling, you have well surpassed the USDA-recommended temperature of 165 F (74 C). For good flavor, though, they need more time.

Traditionally, chicken or turkey giblets are cooked by simmering in water for use in flavoring soups, gravies or poultry stuffing. Once cooked, the liver will become crumbly and the heart and gizzard will soften and become easy to chop. Cooked giblets should have a firm texture. Casseroles containing giblets should be cooked to 165 °F. Stuffing should also be cooked to 165 °F. Chicken giblets are commonly fried or broiled. Leftovers should be refrigerated within 2 hours.
Q: $\nu$-SVM parameter selection

For the $\nu$-SVM (in both the classification and regression cases), a value of $\nu \in (0;1)$ has to be selected. The LIBSVM guide suggests using a grid search to identify the optimal value of the $C$ parameter for the $C$-SVM, and recommends trying the following values: $C = 2^{-5}, 2^{-3}, \dots, 2^{15}$. So the question is: are there any recommendations for values of the $\nu$ parameter in the case of $\nu$-SVMs?

A: Rather than use a grid search, you can just optimise the hyper-parameters using standard numerical optimisation techniques (e.g. gradient descent). If you don't have estimates of the gradients, you can use the Nelder-Mead simplex method, which doesn't require gradient information and is vastly more efficient than grid-search methods. I would use the logit function to map the $(0;1)$ range of $\nu$ onto $(-\infty;+\infty)$ to get an unconstrained optimisation problem. If you really want to use a grid search, then just spacing the evaluation points linearly in the range $(0;1)$ should be fine.
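To make this concrete, here is a minimal sketch of the Nelder-Mead approach with the logit reparameterisation, assuming scikit-learn's NuSVC and SciPy; the synthetic dataset and the cross-validated objective are illustrative choices, not part of the original answer.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function, the inverse of logit
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=300, random_state=0)

def objective(z):
    nu = expit(z[0])  # map the unconstrained variable onto (0;1)
    try:
        scores = cross_val_score(NuSVC(nu=nu), X, y, cv=5)
    except ValueError:
        # nu can be infeasible for the class proportions; penalise it.
        return 1.0
    return 1.0 - scores.mean()  # minimise the cross-validation error

res = minimize(objective, x0=[0.0], method="Nelder-Mead")
print("best nu:", expit(res.x[0]))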
Exposure to radiation from single or combined radio frequencies provokes macrophage dysfunction in the RAW 264.7 cell line

Abstract

Purpose: The aim of this study was to determine whether exposure to radiation from single or multiple radio-frequency (RF) signals at 900 and 2450 MHz would induce effects in the RAW 264.7 cell line.

Materials and methods: Cell cultures were exposed to single or combined RF for 4, 24, 48, or 72 h in a GTEM electromagnetic test chamber. At the end of the radiation exposure time, viability and cell growth were analyzed by flow cytometry, nitric oxide (NO) production was measured by colorimetry, the expression of HSP70 and TNF-α was ascertained by qPCR, and phagocytic activity was observed by microscopy.

Results: NO production increased after 48 h of exposure at 2450 MHz, compared with controls. The group subjected to the combined interaction of the two RFs showed an increase of HSP70 after 48 h of exposure and a significant increase of NO and TNF-α after 72 h. The phagocytic activity of macrophages decreased in all groups as exposure time increased.

Conclusions: Our results indicate a decrease in phagocytic activity and an increase in inflammatory, cytoprotective, and cytotoxic responses in macrophages after continuous and combined exposure to multiple RF signals. Multiple RF signals interact in everyday life; the resulting immune response in humans is unknown.
Pharmacokinetics and tolerability of inhaled levodopa from a new dry-powder inhaler in patients with Parkinson's disease

Background: Inhaled levodopa may quickly resolve off periods in Parkinson's disease. Our aim was to determine the pharmacokinetics and tolerability of a new levodopa dry-powder inhaler.

Methods: A single-centre, single-ascending-dose, dose-response study was performed. Over three visits, eight Parkinson's disease patients (not in the off state) received by inhalation 30 mg or 60 mg levodopa, or their regular oral levodopa. The maximum levodopa plasma concentration (Cmax), the time to maximum plasma concentration (Tmax) and the area under the concentration-time curve from 0 to 180 min were determined. Spirometry was performed three times at each visit.

Results: After inhalation, levodopa Tmax occurred within 15 min in all participants, whereas after oral administration, Tmax ranged from 20 min to 90 min. The bioavailability of inhaled levodopa without a decarboxylase inhibitor was 53% relative to oral levodopa with a decarboxylase inhibitor. No change in lung-function parameters was observed, and none of the patients experienced cough or dyspnoea. No correlation was observed between inhalation parameters and levodopa pharmacokinetic parameters.

Conclusion: Inhaled levodopa is well tolerated, is absorbed faster than oral levodopa, and can be robustly administered over a range of inhalation flow profiles. It therefore appears suitable for the treatment of off periods in Parkinson's disease.

Introduction

Parkinson's disease is characterized by the degeneration of dopaminergic neurons in the substantia nigra in the brain. 1 The resulting lack of dopamine in the brain causes disruption of brain circuits, thereby provoking the core motor features of bradykinesia plus rest tremor or rigidity. 2 Levodopa, non-ergot dopamine agonists and monoamine oxidase (MAO) inhibitors are effective in relieving the motor symptoms and signs of the disease. 3 Levodopa is administered via the oral or duodenal route, and dopamine agonists are administered via the oral, transdermal or subcutaneous route.

Several years after being diagnosed with Parkinson's disease, many patients develop motor fluctuations as a result of a narrowing therapeutic window of levodopa 4 in combination with a delayed onset of effect after orally administered levodopa due to irregular gastrointestinal absorption. 5 This may lead to 'off periods', in which Parkinson's symptoms are poorly controlled 6 and patients suffer from a variety of complaints such as bradykinesia, decreased mobility, tremor and autonomic or sensory symptoms. 7

For patients with severe motor fluctuations on oral levodopa therapy, the only registered drug for termination of the off periods is subcutaneous apomorphine. After injection, an onset of effect of apomorphine is generally not seen within a 20-min lag time. 8 Since being in an off period places a great burden on the patient, a rapid onset of effect is desired. Unfortunately, apomorphine is a strong emetic, causing nausea and vomiting on a regular basis. Patients using apomorphine therefore frequently require antiemetic drugs. 9 Another disadvantage of apomorphine is its administration via (self-)injection. In spite of improved needle technology, injection is considered burdensome by many patients.

An alternative strategy in daily practice for ending an off episode is the oral administration of levodopa/benserazide dispersible tablets, from which a faster effect than with standard levodopa formulations is expected.
For dispersible levodopa to be effective, the levodopa needs to be absorbed via the small intestine. It is known that most off symptoms improve within about 30-60 min after administration of dispersible levodopa, but in some patients, improvement of symptoms is delayed or does not occur at all. 10

After oral administration, the absolute bioavailability of immediate-release levodopa is 40-60%; combined with a decarboxylase inhibitor, it rises to approximately 85%. 11 The Cmax is reached after 1 h on average. However, it is known that food, especially protein, decreases the absorption rate of levodopa and increases the time to Cmax. Levodopa is metabolized mainly via decarboxylation by the aromatic amino acid decarboxylase to dopamine, adrenaline and noradrenaline, and via O-methylation by catechol-O-methyltransferase (COMT) and MAO. 12,13,14 Levodopa used in combination with a decarboxylase inhibitor has a relatively short plasma half-life of approximately 90 min. 15

Delivery of systemically acting drugs by inhalation can offer various advantages compared with oral administration. 16,17 After correct pulmonary administration, a major portion of the drug is immediately deposited on the absorbing membrane, which results in rapid absorption compared with oral administration. The drug is not subjected to the drug-metabolizing enzymes and efflux-transporter activity of the gut, nor to the first-pass metabolism of the liver, that act after oral administration. 16 Moreover, administration of levodopa by inhalation bypasses the irregular gastrointestinal absorption of levodopa in off periods, which is caused by irregular gastrointestinal motility. In contrast, after inhalation, small molecules can be absorbed within seconds to minutes, 16 which has been confirmed for levodopa by Lipp et al. 18 They showed that after inhalation of their levodopa formulation CVT-301, the drug was rapidly absorbed into the bloodstream: already 5 min after receiving 50 mg of CVT-301, 67% of the participants showed a levodopa plasma concentration over 400 ng/ml. This rapid absorption is clearly an advantage when a quick drug response is desired, as in off periods in Parkinson's disease. Pulmonary administration of levodopa is therefore considered a promising alternative to injected apomorphine or to dispersible levodopa tablets. This promise is further strengthened by the fact that Parkinson's disease patients are generally capable of performing a correct inhalation manoeuvre during an off period. 19

So far, the only described dry-powder inhalation system for levodopa is CVT-301. 18 The CVT-301 system is based on a spray-dried powder containing only 50% levodopa, with 25% dipalmitoyl phosphatidylcholine, 15% sodium citrate, and 10% calcium chloride as excipients. The high load of excipients increases the amount of powder to be inhaled and may lead to side effects, such as cough. Motivated by the positive results from our study on inhalation manoeuvres, we developed a new dry-powder inhalation system for levodopa that contains 98% pure crystalline drug and only a minor amount (2%) of an endogenous excipient. 20 Furthermore, the fact that this formulation contains only crystalline levodopa is expected to improve the stability of the final product. Being a preloaded, disposable inhaler, the Cyclops® (PureIMS, Roden, the Netherlands) does not require the loading of capsules, in contrast to CVT-301, which makes it easier to use.
Furthermore, the high resistance to airflow of the Cyclops may minimize the chance of cough reactions during inhalation.

In this article, we present the pharmacokinetic data of an unblinded, single-centre, single-ascending-dose, dose-response study of pulmonary administered 30 mg and 60 mg levodopa doses with 2% L-leucine dry powder in Parkinson's disease patients. Besides the pharmacokinetic evaluation of inhaled levodopa, the tolerability of the airways for inhaled dry-powder levodopa was assessed using spirometry data. Furthermore, by recording the inhalation flow profiles during dose administration, we examine the relationship between inhalation and pharmacokinetic parameters.

The sample size calculation was based on the expected levodopa plasma concentration after 10 min, since a rapid onset of effect is desired.

Materials

Inhalation of 30 mg of CVT-301 results in a plasma levodopa concentration of 425 ng/ml, with a standard deviation of 95 ng/ml, after 10 min. After administration of 100 mg of oral levodopa in the fasted state, this value is around 150 ng/ml. With a power of 0.8 and a type I error rate of 0.05, the required sample size would be two study participants. Because of expected differences between CVT-301 and the inhaler used in this study, a larger sample size of eight participants was assumed to be sufficient.

Patients were eligible if they were at least 18 years of age, diagnosed with Parkinson's disease, currently on a stable levodopa regime with a maximum of four administrations per day, and able to perform spirometry. Participants were excluded if they met one or more of the following exclusion criteria: cognitive dysfunction that precludes a good understanding of the instructions; being pregnant or breastfeeding; known active pulmonary disease; symptomatic orthostatic hypotension; or use of a COMT or MAO-B inhibitor.

Dosing

Over three visits, at least 7 days apart, the participants received a 30 mg inhalation-powder levodopa dose (visit one), a 60 mg (2 × 30 mg) inhalation-powder levodopa dose (visit two) and their regular oral levodopa dose (visit three). The inhaled levodopa doses were chosen such that they would remain well below the acceptable single oral dose of 250 mg, assuming a fourfold dose advantage by inhalation. The oral levodopa/decarboxylase-inhibitor dose varied between 100/25 mg and 250/62.5 mg. All visits and study-drug administrations took place in the morning. All patients took their regular breakfast at home, of which the composition details were not collected. The participants were not allowed to take any food or drink (except water) from 60 min before until 60 min after administration of the levodopa. All levodopa doses administered in this study were given at least 7.5 h after the last levodopa administration at home, which is five times the half-life of levodopa plus decarboxylase inhibitor. The time of the last levodopa administration at home, the time of inhalation of the levodopa powder and the time of oral administration of the levodopa in this study were recorded in case report forms.

Levodopa inhalation

The levodopa inhalation powder was administered by inhalation through the mouth. Prior to levodopa inhalation, a lung technician trained participants in the correct handling of the inhaler device, including the different steps of a correct inhalation manoeuvre. For this training, an empty, instrumented inhaler was used.
For measuring the flow curves through the inhaler, a differential pressure gauge (PD1 with MC2A measuring converter; Hottinger Baldwin Messtechnik, Darmstadt, Germany) was used. The pressure drop across the inhaler was converted into a flow rate using a laptop loaded with LabVIEW software (National Instruments BV, The Netherlands). The inhaler used for the levodopa administration was instrumented in the same manner. The generated flow curves were shown to the patient on the computer screen during training, as well as during inhalation of the levodopa, to enable the patient to adjust the inhalation effort as desired. On the screen, the minimal required and maximal desired flow rates were indicated. The flow curves obtained during inhalation of the levodopa were also used to compute the inspiratory peak flow rates and inhaled volumes. For the 60 mg dose, the inhalation parameters of the first and second inhalation were averaged to enable further calculations. Because these parameters affect the dose emission from the inhaler and the aerosol generation process, as well as the lung deposition, they are a potential source of variation in the inhaled dose and its lung deposition. Hence, their evaluation potentially allows for the explanation of unexpected pharmacokinetic results. After inhalation of the levodopa, the inhaler residue was determined by ultraviolet spectrophotometric analysis (UV-1800 spectrophotometer; Shimadzu Benelux, The Netherlands). The delivered dose was subsequently calculated as the blister dose minus the inhaler residue. The fine particle dose (<5 µm), being 45% of the delivered dose at 4 kPa, was determined with a Sympatec HELOS BF laser diffractometer (Sympatec GmbH, Clausthal-Zellerfeld, Germany).

Blood sampling

Blood samples were collected before administration of the levodopa (T = 0) and at T = 5, 10, 15, 20, 30, 45, 60, 90 and 180 min after administration of the levodopa. The exact time of blood sampling was noted in the case report forms. Sampling was performed using an intravenous (IV) line filled with saline to avoid blood clotting of the system. To avoid dilution of the blood samples with saline, every first tube was rejected and every second tube was used for analysis. In case of problems with the IV line, a blood sample was drawn by venepuncture. The samples were collected in an ethylenediaminetetra-acetic acid tube. A research nurse took the blood samples.

Analytical methods

After collection, the samples were centrifuged for 12 min at 2500 rpm. The plasma was then transferred to a Sarstedt cup with a screw cap. For each ml of plasma, 10 mg of reduced glutathione was added to prevent the degradation of levodopa. The samples were stored at −80°C until analysis. Levodopa concentrations were determined using liquid chromatography-tandem mass spectrometry (XLC-MS/MS). The limit of quantification was 1.0 nmol/l.

Spirometry

Spirometry was used to assess the tolerability of the airways for the inhaled dry-powder levodopa formulation. Spirometry was performed before administration of levodopa and approximately 35 and 100 min after administration of levodopa. Forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and maximum expiratory flow after 50% of the expired FVC (MEF50) were measured using a pneumotachograph.

Pharmacokinetic evaluation

The linear trapezoidal method was used to calculate the area under the concentration-time curve (AUC) from T = 0 to T = 180 min (AUC0-180). GraphPad Prism 7.0 (La Jolla, California, USA) was used to calculate the AUCs.
The maximum levodopa plasma concentration (Cmax) and the time to maximum plasma concentration (Tmax) were taken from the obtained concentration-time curves. The terminal elimination half-life (T1/2) was computed from the following equation:

T1/2 = ln(2) × ΔT / ln(Cmax / Ct)

where ΔT is the last timepoint of blood sampling minus Tmax, Ct is the last measured concentration in the concentration-time curve, and Cmax is the maximum plasma concentration from the concentration-time curve. The relative bioavailability from inhalation compared with that from oral administration was calculated as:

relative bioavailability (%) = 100 × (AUC_A / dose_A) / (AUC_B / dose_B)

where A refers to inhaled levodopa and B to oral levodopa, respectively.

Study objectives, design and study site

The primary objective of this study was the pharmacokinetic evaluation of inhaled levodopa dry powder in comparison with oral levodopa. The secondary objective was to assess the tolerability of the airways for inhaled dry-powder levodopa using spirometry data as a measure. The study was performed in the Martini Hospital Groningen, The Netherlands.

Patients and data

A total of eight patients were included in the study. Patient characteristics are presented in Table 1. From these patients, 232 blood samples were collected and analysed for levodopa plasma concentrations. Eight samples were missed due to issues with the IV line. The time spans between the last levodopa administration at home and administration of the study drug ranged from 14 to 18 h. The delivered doses from the inhaler (all doses, all patients) were quite consistent, averaging 85.3% of the nominal dose (relative standard deviation = 5.6%; Table 2).

Levodopa pharmacokinetic data

Figures 1(a) and 1(b) show the levodopa plasma concentrations after pulmonary administration of 30 mg (a) and 60 mg (b) of levodopa; the Cmax values approximately doubled, from 229.2 ± 62.1 ng/ml to 476.2 ± 132.7 ng/ml, respectively. Plasma concentrations after oral administration of levodopa are shown in Figure 1(c). For easy comparison of the plasma levodopa concentrations after oral administration, all administered oral doses (varying from 100 to 250 mg) were recalculated into plasma concentrations per 100 mg of oral levodopa. After oral administration (100 mg levodopa), the mean Cmax was 1206.6 ± 448.7 ng/ml. The normalized Cmax per milligram of administered levodopa (calculated from the delivered dose) was 9.10 ng/ml after inhalation of 30 mg, 9.23 ng/ml after inhalation of 60 mg levodopa, and 12.06 ng/ml after oral administration. The AUC per administered mg of levodopa was 581.2 ± 127.6 min*ng/ml after inhalation of 30 mg levodopa, compared with 574.7 ± 139.9 min*ng/ml after inhalation of 60 mg levodopa. After oral administration, the AUC per mg was 1085.7 ± 296.9 min*ng/ml. The relative bioavailability of inhaled levodopa in comparison with oral levodopa is 53%.

Levodopa plasma concentrations varied strongly after oral administration, which also results in large interindividual differences in the Tmax. The Tmax after oral administration was 20 min for three participants, 45 min for one participant and 90 min for four participants. A summary of the plasma pharmacokinetic parameters of inhaled levodopa is shown in Table 2. After inhalation of levodopa, there was only a small interindividual variation in the Tmax for both dose levels, being 5 min for two participants, 10 min for four participants and 15 min for two participants (mean ± SD: 10 ± 4 minutes).
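For illustration, the AUC, Cmax, Tmax and two-point half-life calculations described above can be reproduced in a few lines; this is a minimal sketch with made-up concentration values, not the study data.

import numpy as np

# Synthetic concentration-time profile (min, ng/ml), for illustration only.
t = np.array([0, 5, 10, 15, 20, 30, 45, 60, 90, 180])
c = np.array([0.0, 150.0, 229.0, 210.0, 190.0, 160.0, 130.0, 100.0, 70.0, 25.0])

auc_0_180 = np.trapz(c, t)                 # linear trapezoidal method
cmax = c.max()
tmax = t[c.argmax()]
ct, t_last = c[-1], t[-1]                  # last measured point
t_half = np.log(2) * (t_last - tmax) / np.log(cmax / ct)

print(f"AUC0-180 = {auc_0_180:.0f} min*ng/ml")
print(f"Cmax = {cmax:.0f} ng/ml at Tmax = {tmax} min, t1/2 = {t_half:.0f} min")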
The mean elimination half-life in our study was 58.3 ± 12.8 min after inhalation of 30 mg levodopa, 57.8 ± 18.9 min after inhalation of 60 mg levodopa, and 67.7 ± 20.6 min after oral administration of levodopa plus decarboxylase inhibitor.

[Table 2 legend: The delivered dose is the dose that has left the inhaler. V1 = visit 1 (30 mg inhalation powder); V2 = visit 2 (60 mg inhalation powder). AUC, area under the concentration-time curve; Cmax, maximum levodopa plasma concentration; SD, standard deviation; Tel, elimination half-life; Tmax, time to maximum plasma concentration.]

Inhalation data

Table 3 shows the inhalation data obtained from the flow-volume curves that were recorded during inhalation of the levodopa by the study participants. The inhaled volumes varied from 1.1 l to 4.2 l (mean ± SD: 2.6 ± 0.75 l). The maximum pressure drops across the inhaler varied between 1.5 and 10.4 kPa (mean ± SD: 5.8 ± 2.4 kPa), with corresponding peak flow rates between 22.9 l/min and 59.9 l/min (mean ± SD: 43.4 ± 9.7 l/min). The variation of the inhalation characteristics was larger between patients than within patients and can at least partly be explained by differences in sex, age and size of the participants.

When relating the inhalation data from Table 3 to the plasma pharmacokinetics shown in Table 2, there is no clear relationship between inhaled volumes, attained maximum pressure drops or maximum peak flows and the maximum plasma concentrations that were reached. R² values from simple linear regression between these parameters do not exceed 0.022.

Spirometry

No significant change in lung-function parameters (FEV1, FVC, MEF50) was observed after inhalation of either of the levodopa doses or after oral administration of levodopa. None of the patients experienced cough or dyspnoea during or after inhalation.

Discussion

In this study, we assessed the pharmacokinetics and tolerability, in Parkinson's disease patients, of two doses of levodopa administered via a newly developed inhalation system containing minimal amounts of excipient. We demonstrated that a levodopa powder formulation with 2% L-leucine administered via the Cyclops inhaler is rapidly absorbed into the systemic circulation. In all patients, the Tmax of levodopa was reached faster after inhalation, that is, within 15 min, whereas after oral administration, the Tmax ranged from 20 min to 90 min. The interindividual differences in both the Cmax and the time to Cmax were much larger for orally administered levodopa than for inhaled levodopa. In four out of eight patients, it took 90 min to reach Cmax after oral administration. The slow rise of levodopa plasma concentrations in these patients may be the result of delayed gastric emptying caused by not being in a true fasting state or by Parkinson's disease. It is possible that such a slow rise of the levodopa plasma concentration after oral administration makes oral levodopa less effective for use in an acute setting such as the termination of off periods. In contrast, the results suggest that inhaled levodopa may be much more suitable for terminating an off period because of the consequent rapid rise in the plasma levodopa concentration.

There is no clear relationship between the inhaled volumes, maximal pressure drops, or peak flow rates and the maximal levodopa plasma concentrations that were achieved. This is mostly a consequence of the low variation in delivered dose between the study participants (Table 2).
This implies that the combination of inhaler and levodopa formulation results in a robust pulmonary administration that is not sensitive to differences in inhalation technique or patient characteristics. One should bear in mind, however, that the differences in inhaler technique encountered in this study may not reflect the differences encountered in practice, because of the extensive inhalation instruction given to the study participants.

The relative bioavailability of inhaled levodopa in comparison with oral levodopa is 53%, which is close to the fine particle fraction of the delivered dose of 45% and therefore appears to reflect the lung deposition fraction. After all, for effective deposition of inhalation powder in the airways, and thus absorption into the systemic circulation, a particle size between 1 and 5 µm is required. 25 However, the bioavailability of inhaled levodopa in this study is likely lowered by the absence of a decarboxylase inhibitor in the formulation. The levodopa inhalation powder does not contain a decarboxylase inhibitor, since its intended future use is on an 'as needed' basis as rescue medication on top of oral levodopa administered together with a decarboxylase inhibitor as maintenance therapy. Since the participants in this study had to postpone their own levodopa with decarboxylase inhibitor for at least five half-life times before administration of the study levodopa, the pharmacokinetics of oral levodopa plus decarboxylase inhibitor is compared with that of inhaled levodopa without decarboxylase inhibitor. Therefore, the AUC of inhaled levodopa will be higher when it is used on an 'as needed' basis on top of maintenance therapy, due to decreased peripheral conversion of levodopa to dopamine. 14 Because the relative bioavailability is higher than the fine particle fraction (i.e. the fraction suitable for deep lung deposition), our results imply that, for effective absorption into the systemic circulation, deposition of levodopa particles does not necessarily need to be in the peripheral airways. This adds to the robustness of this route of administration.

The calculated elimination half-life for inhaled levodopa varied between 34 min and 93 min. The mean elimination half-life we found in this study was 58 min after inhalation of levodopa and 68 min after oral administration of levodopa plus decarboxylase inhibitor. This is shorter than the half-life of 90 min previously described in the literature. 15 Several previous studies have already confirmed that the pharmacokinetics of levodopa display considerable interparticipant variation, 14 which, in our study, is the case for the AUC, Cmax and elimination half-life. Therefore, for a more accurate assessment of these parameters, a study population larger than eight patients would be desirable.

In none of the patients was a drop in FEV1, FVC or MEF50 observed. Furthermore, none of the patients experienced cough or dyspnoea during or after the inhalation manoeuvre. In the study reported by Lipp and colleagues, 18 21.7% of the patients reported cough after inhalation of their levodopa formulation. A common cause of cough after inhalation is the deposition of drug particles in the oropharynx. We assume that, due to the high airflow resistance of the Cyclops inhaler, 21 deposition of levodopa in the oropharynx is prevented, which explains the absence of cough after the levodopa inhalation formulation used in this study. Another reason for cough after inhalation is the chemical composition of the powder.
26 Our inhalation powder consists of only 2% excipient. Since coughing is a possible reason for patients to withdraw from inhalation therapy, the absence of cough is an important advantage of the formulation used in this study.

Whether or not the levodopa plasma concentrations attained by inhalation in this study are sufficient for rescue therapy in off periods will depend on disease progression and the degree to which a patient is turned 'off'. In progressed, fluctuating patients, a very steep dose-response relationship exists, where a maximum effect on finger tapping can be achieved by an increase in the levodopa effect-compartment concentration of approximately 200-400 ng/ml. 27 Therefore, the plasma concentrations attained by inhalation of levodopa in this study (i.e. 229 ng/ml with 30 mg and 476 ng/ml with 60 mg) could be clinically sufficient to end off episodes in Parkinson's disease. Because the study participants were not in the off state before levodopa inhalation, no effect could be observed. In a follow-up clinical trial, we will study the effect of inhaled levodopa from the Cyclops DPI on Parkinson's disease patients in the off state in comparison with orally administered levodopa. This will show whether or not the faster absorption after inhalation is of clinical benefit.

Conclusion

Oral administration results in a more variable levodopa plasma profile than pulmonary administration. In addition, inhaled levodopa is absorbed up to 85 min faster into the blood plasma, and inhaled doses of 30 mg and 60 mg showed comparable pharmacokinetics per milligram of inhaled levodopa. Since none of the patients experienced cough or dyspnoea and no change in pulmonary function was measured, it is concluded that the new levodopa powder inhalation system is well tolerated after inhalation. The results of this study therefore suggest that the tested levodopa formulation may be particularly beneficial for use during an off period, since a rapid onset of action and the absence of harmful effects are two key requirements for such a rescue medication. Furthermore, no relationship was found between inhalation parameters, such as inhaled volume and inhalation flow rate, and levodopa pharmacokinetic parameters, which is indicative of a robust administration method. A study evaluating the efficacy of inhaled levodopa from the Cyclops for Parkinson's patients in an off period will be performed next.
Change management and synchronization of local and shared versions of a controlled vocabulary To share clinical data and to build interoperating computer systems that permit data entry, data retrieval, and data analysis, users and systems at multiple sites must share a common controlled clinical vocabulary (or ontology). However, local sites that adopt a shared vocabulary have local needs, and local-vocabulary maintainers make changes to the local version of that vocabulary. For a local site, there is a tradeoff between having autonomy over a local vocabulary and conforming to a shared vocabulary to obtain the benefits of interoperation. If the local site is motivated to conform, then the burden lies with the local site to manage its own changes and to incorporate the changes of the shared version at periodic intervals. I call this process synchronization. In this dissertation, I present an approach to change management and synchronization of local and shared versions of a controlled vocabulary. This approach supports carefully controlled divergence. I describe the CONCORDIA model, which comprises a structural model, a change model, and a log model to which the shared and local vocabularies conform. I demonstrate use of this model through methods that I have implemented in shared and local versions of a browser and an editor and in a synchronization-support tool. I evaluated my model and methods by performing the synchronization process on a small test set of medical concepts in the subdomain of rickettsial diseases. I obtained content from medical textbooks published at different points in time, and showed evolution of the medical vocabulary in two divergent directions. I synchronized the two versions using the synchronization-support tool. The CONCORDIA model served as an effective approach for representation and communication of vocabulary change.
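As a rough illustration of the kind of log-based synchronization described above, here is a toy sketch; the structures and operation names are invented for illustration and are not taken from the CONCORDIA model itself.

from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Change:
    op: str                      # "add", "retire", or "rename"
    concept: str
    new_name: Optional[str] = None

@dataclass
class Vocabulary:
    concepts: Set[str] = field(default_factory=set)
    log: List[Change] = field(default_factory=list)

    def apply(self, ch: Change) -> None:
        if ch.op == "add":
            self.concepts.add(ch.concept)
        elif ch.op == "retire":
            self.concepts.discard(ch.concept)
        elif ch.op == "rename" and ch.new_name is not None:
            self.concepts.discard(ch.concept)
            self.concepts.add(ch.new_name)
        self.log.append(ch)

def synchronize(local: Vocabulary, shared_log: List[Change]) -> List[Change]:
    """Replay the shared change log against the local version.

    Shared changes that touch concepts the local site has edited are
    returned as conflicts for human review rather than applied blindly.
    """
    locally_touched = {ch.concept for ch in local.log}
    conflicts = []
    for ch in shared_log:
        if ch.concept in locally_touched:
            conflicts.append(ch)
        else:
            local.apply(ch)
    return conflicts

Replaying the shared log at periodic intervals, and surfacing only the divergent changes for a human curator, mirrors the controlled-divergence workflow the abstract describes.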