Column              Type      Observed range
query_id            string    length 32–32
query               string    length 5–5.38k
positive_passages   list      1–23 items
negative_passages   list      9–100 items
subset              string    7 classes
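Each row below lists these fields in order: query_id, query, positive_passages, negative_passages, subset, with every passage given as an object holding docid, text and title. As a minimal sketch of how rows with this schema might be loaded and inspected using the Hugging Face datasets library (the dataset identifier below is a placeholder assumption, not the actual hub name):

```python
# Minimal sketch: load and inspect rows following the schema above.
# "example/scidocs-reranking" is a hypothetical dataset ID, not the real hub name.
from datasets import load_dataset

ds = load_dataset("example/scidocs-reranking", split="test")

for row in ds.select(range(2)):
    print(row["query_id"], "|", row["query"], "|", row["subset"])
    # positive_passages / negative_passages are lists of
    # {"docid": ..., "text": ..., "title": ...} dicts.
    for p in row["positive_passages"]:
        print("  +", p["docid"], p["text"][:80])
    print("  negatives:", len(row["negative_passages"]))
```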
7b25c401a85ee8722811b60d0ad7cdee
Skinning mesh animations
[ { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" } ]
[ { "docid": "281c64b492a1aff7707dbbb5128799c8", "text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.", "title": "" }, { "docid": "030c8aeb4e365bfd2fdab710f8c9f598", "text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.", "title": "" }, { "docid": "3c778c71f621b2c887dc81e7a919058e", "text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. 
We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.", "title": "" }, { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "22ef70869ce47993bbdf24b18b6988f5", "text": "Recent results suggest that it is possible to grasp a variety of singulated objects with high precision using Convolutional Neural Networks (CNNs) trained on synthetic data. This paper considers the task of bin picking, where multiple objects are randomly arranged in a heap and the objective is to sequentially grasp and transport each into a packing box. We model bin picking with a discrete-time Partially Observable Markov Decision Process that specifies states of the heap, point cloud observations, and rewards. We collect synthetic demonstrations of bin picking from an algorithmic supervisor that uses full state information to optimize for the most robust collision-free grasp in a forward simulator based on pybullet to model dynamic object-object interactions and robust wrench space analysis from the Dexterity Network (Dex-Net) to model quasi-static contact between the gripper and object. We learn a policy by fine-tuning a Grasp Quality CNN on Dex-Net 2.1 to classify the supervisor’s actions from a dataset of 10,000 rollouts of the supervisor in the simulator with noise injection. In 2,192 physical trials of bin picking with an ABB YuMi on a dataset of 50 novel objects, we find that the resulting policies can achieve 94% success rate and 96% average precision (very few false positives) on heaps of 5-10 objects and can clear heaps of 10 objects in under three minutes. Datasets, experiments, and supplemental material are available at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "6dbaeff4f3cb814a47e8dc94c4660d33", "text": "An Intrusion Detection System (IDS) is software that monitors a single computer or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. 
Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.", "title": "" }, { "docid": "7f3c6e8f0915160bbc9feba4d2175fb3", "text": "Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of the development process, rather than a part of software maintenance. To detect slow memory leaks as a part of the quality assurance process or in production environments, a statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall; however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, the method that was used to generate features for supervised learning and the results of the corresponding experiments.", "title": "" }, { "docid": "23129bd3b502cd06e347b90f5a1516bc", "text": "This paper discusses DSP-based implementation of a Gaussian Minimum Shift Keying (GMSK) demodulator using a Polarity type Costas loop. The demodulator consists of a Polarity type Costas loop for carrier recovery, data recovery, and phase detection. The carrier has been recovered using a loop of center-frequency locking scheme as in M-ary Phase Shift Keying (MPSK) Polarity type Costas-loop. Phase unwrapping and Bit-Reconstruction is presented in detail. All the modules are first modeled in MATLAB (Simulink) and Systemview. After bit true simulation, the design is coded in VHDL and code simulation is done using QuestaSim 6.3c. The design is targeted to a Virtex-4 XC4VSX35-10FF668 Xilinx FPGA (Field programmable gate array) for real time testing, which is carried out on the Xtreme DSP development platform.", "title": "" }, { "docid": "643e97c3bc0cdde54bf95720fe52f776", "text": "Ego-motion estimation based on images from a stereo camera has become a common function for autonomous mobile systems and is gaining increasing importance in the automotive sector. Unlike general robotic platforms, vehicles have a suspension adding degrees of freedom and thus complexity to their dynamics model. Some parameters of the model, such as the vehicle mass, are non-static as they depend on e.g. the specific load conditions and thus need to be estimated online to guarantee a concise and safe autonomous maneuvering of the vehicle. In this paper, a novel visual odometry based approach to simultaneously estimate ego-motion and selected vehicle parameters using a dual Ensemble Kalman Filter and a non-linear single-track model with pitch dynamics is presented. 
The algorithm has been validated using simulated data and showed good performance for the estimation of both the ego-motion and the relevant vehicle parameters.", "title": "" }, { "docid": "9e0cbbe8d95298313fd929a7eb2bfea9", "text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.", "title": "" }, { "docid": "63602b90688ddb0e8ba691702cbdaab8", "text": "This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real-world contexts, as humans do. In so doing, we focus on utilizing a system that is closer to humans - in sensing, kinematics configuration and performance. We present the real-time network-based architecture for the control of all 50 d.o.f. The controller provides full position/velocity/force sensing and control at 1 kHz, allowing us the flexibility in deriving various forms of control. A dynamic simulator is also presented; the simulator acts as a realistic testbed for our controllers and acts as a common interface to our humanoid robots. A contact model developed to allow better validation of our controllers prior to final testing on the physical robot is also presented. Three aspects of the system are highlighted in this paper: (i) physical power for walking, (ii) full-body compliant control - physical interactions and (iii) perception and control - visual ocular-motor responses.", "title": "" }, { "docid": "23d2349831a364e6b77e3c263a8321c8", "text": "Almost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed, the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing testable usability and productivity-enhancing goals; b) a new method for identifying and focusing attention on long-term trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users, via interviews, observations, surveys, participatory design. The aim is to understand users' cognitive, behavioral, attitudinal, and anthropometric characteristics, and the characteristics of the jobs they will be doing. 
Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early and Continual User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior, careful evaluation of feedback, insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behavioral tests of functions, user interface, help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usability design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usability design process leads to systems, applications, and products …", "title": "" }, { "docid": "111743197c23aff0fac0699a30edca23", "text": "Origami describes rules for creating folded structures from patterns on a flat sheet, but does not prescribe how patterns can be designed to fit target shapes. Here, starting from the simplest periodic origami pattern that yields one-degree-of-freedom collapsible structures, we show that scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature. Paper models confirm the feasibility of our calculations. We also assess the difficulty of realizing these geometric structures by quantifying the energetic barrier that separates the metastable flat and folded states. Moreover, we characterize the trade-off between the accuracy to which the pattern conforms to the target surface, and the effort associated with creating finer folds. Our approach enables the tailoring of origami patterns to drape complex surfaces independent of absolute scale, as well as the quantification of the energetic and material cost of doing so.", "title": "" }, { "docid": "3754b5c86e0032382f144ded5f1ca4d8", "text": "Use and users have an important and acknowledged role for most designers of interactive systems. Nevertheless, any touch of user hands does not in itself secure development of meaningful artifacts. In this article we stress the need for a professional PD practice in order to yield the full potentiality of user involvement. We suggest two constituting elements of such a professional PD practice: the existence of a shared 'where-to' and 'why' artifact, and an ongoing reflection and off-loop reflection among practitioners in the PD process.", "title": "" }, { "docid": "a5a53221aa9ccda3258223b9ed4e2110", "text": "Accurate and reliable inventory forecasting can save an organization from overstock, under-stock and no-stock/stock-out situations. 
Overstocking leads to a high cost of storage and maintenance, whereas under-stocking leads to failure to meet the demand and losing profit and customers; similarly, a stock-out leads to a complete halt of production or sales activities. Inventory transactions generate data, which is time-series data having characteristic volume, speed, range and regularity. The inventory level of an item depends on many factors, namely current stock, stock-on-order, lead-time, annual/monthly target. In this paper, we present a perspective of treating inventory management as a problem of Genetic Programming based on inventory transactions data. A Genetic Programming — Symbolic Regression (GP-SR) based mathematical model is developed and subsequently used to make forecasts using the Holt-Winters Exponential Smoothing method for time-series modeling. The GP-SR model evolves based on RMSE as the fitness function. The performance of the model is measured in terms of RMSE and MAE. The estimated values of item demand from the GP-SR model are finally used to simulate a time series, and forecasts are generated for inventory required on a monthly time horizon.", "title": "" }, { "docid": "69e0179971396fcaf09c9507735a8d5b", "text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.", "title": "" }, { "docid": "490dc6ee9efd084ecf2496b72893a39a", "text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with an increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. 
It enables the introduction of new services without compromising security, leveraging the trust assumptions, or flooding users with irrelevant messages.", "title": "" }, { "docid": "9cc2dfde38bed5e767857b1794d987bc", "text": "Smartphones providing proprietary encryption schemes, albeit offering a novel paradigm to privacy, are becoming a bone of contention for certain sovereignties. These sovereignties have raised concerns about their security agencies not having any control over the encrypted data leaving their jurisdiction and the ensuing possibility of it being misused by people with malicious intents. Such smartphones typically have two types of customers: independent users who use it to access public mail servers, and corporates/enterprises whose employees use it to access corporate emails in an encrypted form. The threat issues raised by security agencies concern mainly the enterprise servers where the encrypted data leaves the jurisdiction of the respective sovereignty while on its way to the global smartphone router. In this paper, we have analyzed such email message transfer mechanisms in smartphones and proposed some feasible solutions, which, if accepted and implemented by entities involved, can lead to a possible win-win situation for both the parties, viz., the smartphone provider who does not want to lose the customers and these sovereignties who can avoid the worry of encrypted data leaving their jurisdiction.", "title": "" }, { "docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d", "text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.", "title": "" }, { "docid": "c8fdcfa08aff6286a02b984cc5f716b2", "text": "As interest in adopting Cloud computing for various applications is rapidly growing, it is important to understand how these applications and systems will perform when deployed on Clouds. Due to the scale and complexity of shared resources, it is often hard to analyze the performance of new scheduling and provisioning algorithms on actual Cloud test beds. Therefore, simulation tools are becoming more and more important in the evaluation of the Cloud computing model. Simulation tools allow researchers to rapidly evaluate the efficiency, performance and reliability of their new algorithms on a large heterogeneous Cloud infrastructure. However, current solutions lack either advanced application models such as message passing applications and workflows, or a scalable network model of the data center. To fill this gap, we have extended a popular Cloud simulator (CloudSim) with a scalable network and generalized application model, which allows more accurate evaluation of scheduling and resource provisioning policies to optimize the performance of a Cloud infrastructure.", "title": "" } ]
scidocsrr
5f7cb537da11a86fcd3b211ca8da75bb
Toward parallel continuum manipulators
[ { "docid": "f80f1952c5b58185b261d53ba9830c47", "text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on ldquoguidingrdquo environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.", "title": "" }, { "docid": "be749e59367ee1033477bb88503032cf", "text": "This paper describes the results of field trials and associated testing of the OctArm series of multi-section continuous backbone \"continuum\" robots. This novel series of manipulators has recently (Spring 2005) undergone a series of trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulators demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. Implications for the deployment of continuum robots in a variety of applications are discussed", "title": "" }, { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" } ]
[ { "docid": "d157d7b6e1c5796b6d7e8fedf66e81d8", "text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.", "title": "" }, { "docid": "b55eb410f2a2c7eb6be1c70146cca203", "text": "Permissioned blockchains are arising as a solution to federate companies prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of their effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experimenting Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performances. First, we derive their functioning including how messages are exchanged, then we weight, by relying on the CAP theorem, consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that PBFT can fit better such scenarios, despite a limited loss in terms of performance.", "title": "" }, { "docid": "969a8e447fb70d22a7cbabe7fc47a9c9", "text": "A novel multi-level AC six-phase motor drive is developed in this paper. 
The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supplying the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system makes it possible to reduce the harmonic distortion of the machine currents, reduce the total semiconductor losses and decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.", "title": "" }, { "docid": "97412a2a6e6d91fef2c75b62aca5b6f4", "text": "Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.", "title": "" }, { "docid": "dd4cc15729f65a0102028949b34cc56f", "text": "Autonomous vehicle platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of the platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated both analytically and experimentally, for the appraised malicious attack scenarios and for different communication topology structures. 
The effectiveness of the proposed strategy is shown by using PLEXE, a state-of-the-art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.", "title": "" }, { "docid": "25ed874d2bf1125b5539d595319d334b", "text": "The notion of creativity, as opposed to related concepts such as beauty or interestingness, has not been studied from the perspective of automatic analysis of multimedia content. Meanwhile, short online videos shared on social media platforms, or micro-videos, have arisen as a new medium for creative expression. In this paper we study creative micro-videos in an effort to understand the features that make a video creative, and to address the problem of automatic detection of creative content. Defining creative videos as those that are novel and have aesthetic value, we conduct a crowdsourcing experiment to create a dataset of over 3,800 micro-videos labelled as creative and non-creative. We propose a set of computational features that we map to the components of our definition of creativity, and conduct an analysis to determine which of these features correlate most with creative video. Finally, we evaluate a supervised approach to automatically detect creative video, with promising results, showing that it is necessary to model both aesthetic value and novelty to achieve optimal classification accuracy.", "title": "" }, { "docid": "5de19873c4bd67cdcc57d879d923dc10", "text": "BACKGROUND AND PURPOSE\nNeuromyelitis optica (NMO) or Devic's disease is a rare inflammatory and demyelinating autoimmune disorder of the central nervous system (CNS) characterized by recurrent attacks of optic neuritis (ON) and longitudinally extensive transverse myelitis (LETM), which is distinct from multiple sclerosis (MS). The guidelines are designed to provide guidance for best clinical practice based on the current state of clinical and scientific knowledge.\n\n\nSEARCH STRATEGY\nEvidence for this guideline was collected by searches for original articles, case reports and meta-analyses in the MEDLINE and Cochrane databases. In addition, clinical practice guidelines of professional neurological and rheumatological organizations were studied.\n\n\nRESULTS\nDifferent diagnostic criteria for NMO diagnosis [Wingerchuk et al. Revised NMO criteria, 2006 and Miller et al. National Multiple Sclerosis Society (NMSS) task force criteria, 2008] and features potentially indicative of NMO facilitate the diagnosis. In addition, guidance for the work-up and diagnosis of spatially limited NMO spectrum disorders is provided by the task force. Due to lack of studies fulfilling requirement for the highest levels of evidence, the task force suggests concepts for treatment of acute exacerbations and attack prevention based on expert opinion.\n\n\nCONCLUSIONS\nStudies on diagnosis and management of NMO fulfilling requirements for the highest levels of evidence (class I-III rating) are limited, and diagnostic and therapeutic concepts based on expert opinion and consensus of the task force members were assembled for this guideline.", "title": "" }, { "docid": "53a55e8aa8b3108cdc8d015eabb3476d", "text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. 
Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.", "title": "" }, { "docid": "79e2e4af34e8a2b89d9439ff83b9fd5a", "text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.", "title": "" }, { "docid": "1878b3e7742a0ffbd3da67be23c6e366", "text": "Compensation for geometrical spreading along a raypath is one of the key steps in AVO (amplitude-variation-with-offset) analysis, in particular, for wide-azimuth surveys. Here, we propose an efficient methodology to correct long-spread, wide-azimuth reflection data for geometrical spreading in stratified azimuthally anisotropic media. The P-wave geometrical-spreading factor is expressed through the reflection traveltime described by a nonhyperbolic moveout equation that has the same form as in VTI (transversely isotropic with a vertical symmetry axis) media. The adapted VTI equation is parameterized by the normal-moveout (NMO) ellipse and the azimuthally varying anellipticity parameter η. To estimate the moveout parameters, we apply a 3D nonhyperbolic semblance algorithm of Vasconcelos and Tsvankin that operates simultaneously with traces at all offsets and azimuths.", "title": "" }, { "docid": "ef372c1537c8eabb4595dc5385199575", "text": "This article provides a review of the traditional clinical concepts for the design and fabrication of removable partial dentures (RPDs). Although classic theories and rules for RPD designs have been presented and should be followed, excellent clinical care for partially edentulous patients may also be achieved with computer-aided design/computer-aided manufacturing technology and unique blended designs. 
These nontraditional RPD designs and fabrication methods provide for improved fit, function, and esthetics by using computer-aided design software, composite resin for contours and morphology of abutment teeth, metal support structures for long edentulous spans and collapsed occlusal vertical dimensions, and flexible, nylon thermoplastic material for metal-supported clasp assemblies.", "title": "" }, { "docid": "afdc8b3e00a4fe39b281e17056d97664", "text": "This demo presents the features of the Proactive Insights (PI) engine, which uses machine learning and artificial intelligence capabilities to automatically identify weaknesses in business processes, to reveal their root causes, and to give intelligent advice on how to improve process inefficiencies. We demonstrate the four PI elements covering Conformance, Machine Learning, Social, and Companion. The new insights are especially valuable for process managers and academics interested in BPM and process mining.", "title": "" }, { "docid": "df404258bca8d16cabf935fd94fc7463", "text": "Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size. While smaller batch sizes generally converge in fewer training epochs, larger batch sizes offer more parallelism and hence better computational efficiency. We have developed a new training approach that, rather than statically choosing a single batch size for all epochs, adaptively increases the batch size during the training process. Our method delivers the convergence rate of small batch sizes while achieving performance similar to large batch sizes. We analyse our approach using the standard AlexNet, ResNet, and VGG networks operating on the popular CIFAR-10, CIFAR-100, and ImageNet datasets. Our results demonstrate that learning with adaptive batch sizes can improve performance by factors of up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1% relative to training with fixed batch sizes.", "title": "" }, { "docid": "ed769b97bea6d4bbe7e282ad6dbb1c67", "text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, a Sepic, or a Zeta converter. The SC/SL structures are built in such a way that when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given.", "title": "" }, { "docid": "b36e9a2f1143fa242c4d372cb0ba38b3", "text": "Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. 
We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide generalization bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel-based approaches. The proposed method also outperforms deep CNN on Rotated MNIST and performs comparably to the recently proposed group-equivariant CNN.", "title": "" }, { "docid": "daa30843c26d285b3b42cb588e4d0cd1", "text": "In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In this work, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under certain conditions. All parameters in our formulations can be determined beforehand based on the measurement data and thus there is no parameter tuning involved.", "title": "" }, { "docid": "49d5f6fdc02c777d42830bac36f6e7e2", "text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.", "title": "" }, { "docid": "7ec93b17c88d09f8a442dd32127671d8", "text": "Understanding the 3D structure of a scene is of vital importance when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. 
To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.", "title": "" }, { "docid": "eebeb59c737839e82ecc20a748b12c6b", "text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.", "title": "" } ]
scidocsrr
6d89ecca492e99422e5f8208633f8685
Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments
[ { "docid": "7399a8096f56c46a20715b9f223d05bf", "text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches", "title": "" } ]
[ { "docid": "e09d45316d48894bcfb3c5657cd19118", "text": "In recent years, multiple-line acquisition (MLA) has been introduced to increase frame rate in cardiac ultrasound medical imaging. However, this method induces blocklike artifacts in the image. One approach suggested, synthetic transmit beamforming (STB), involves overlapping transmit beams which are then interpolated to remove the MLA blocking artifacts. Independently, the application of minimum variance (MV) beamforming has been suggested in the context of MLA. We demonstrate here that each approach is only a partial solution and that combining them provides a better result than applying either approach separately. This is demonstrated by using both simulated and real phantom data, as well as cardiac data. We also show that the STB-compensated MV beamfomer outperforms single-line acquisition (SLA) delay- and-sum in terms of lateral resolution.", "title": "" }, { "docid": "bb19e122737f08997585999575d2a394", "text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experiment results show the effectiveness of the proposed method.", "title": "" }, { "docid": "365b95202095942c4b2b43a5e6f6e04e", "text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.", "title": "" }, { "docid": "884ea5137f9eefa78030608097938772", "text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. 
We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.", "title": "" }, { "docid": "2c667b86fffdcb69e35a21795fc0e3bd", "text": "We compiled details of over 8000 assessments of protected area management effectiveness across the world and developed a method for analyzing results across diverse assessment methodologies and indicators. Data was compiled and analyzed for over 4000 of these sites. Management of these protected areas varied from weak to effective, with about 40% showing major deficiencies. About 14% of the surveyed areas showed significant deficiencies across many management effectiveness indicators and hence lacked basic requirements to operate effectively. Strongest management factors recorded on average related to establishment of protected areas (legal establishment, design, legislation and boundary marking) and to effectiveness of governance; while the weakest aspects of management included community benefit programs, resourcing (funding reliability and adequacy, staff numbers and facility and equipment maintenance) and management effectiveness evaluation. Estimations of management outcomes, including both environmental values conservation and impact on communities, were positive. We conclude that in spite of inadequate funding and management process, there are indications that protected areas are contributing to biodiversity conservation and community well-being.", "title": "" }, { "docid": "233c9d97c70a95f71897b6f289c7d8a7", "text": "The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized O(log³ n log k)-approximation algorithm for the group Steiner tree problem on an n-node graph, where k is the number of groups. The best previous performance guarantee was (1 + ln k/2)√k (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slavik on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to O(log² n log k) in the case of graphs that exclude small minors by using a better alternative to Bartal's result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman); this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case. 
-", "title": "" }, { "docid": "c48fa25b1e49d641efa08d3ce9960270", "text": "This paper presents a novel mobility metric for mobile ad hoc networks (MANET) that is based on the ratio between the received power levels of successive transmissions measured at any node from all its neighboring nodes. This mobility metric is subsequently used as a basis for cluster formation which can be used for improving the scalability of services such as routing in such networks. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the Lowest-ID clustering algorithm ( “least clusterhead change” [3]) which is a well known clustering algorithms for MANETs. We show reduction of as much as 33% in the number of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, the network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that since using MOBIC results in a more stable configuration, it will directly lead to improvement of performance.", "title": "" }, { "docid": "1b52822b76e7ace1f7e12a6f2c92b060", "text": "We treated the mandibular retrusion of a 20-year-old man by distraction osteogenesis. Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.", "title": "" }, { "docid": "e11b4a08fc864112d4f68db1ea9703e9", "text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. 
This paper evaluates and analyzes why composite models give better results than an individual model and states that the decomposition technique is better than the hybrid technique for this application.", "title": "" }, { "docid": "92b4d9c69969c66a1d523c38fd0495a4", "text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare them with a parameterized approach.", "title": "" }, { "docid": "ac0e5d2b50462a15928556bee7f8548e", "text": "The concept of “truth,” as a public good, is the production of a collective understanding, which emerges from a complex network of social interactions. The recent impact of social networks on shaping the perception of truth in the political arena shows how such perception is corroborated and established by the online users, collectively. However, investigative journalism for discovering truth is a costly option, given the vast spectrum of online information. In some cases, both journalists and online users choose not to investigate the authenticity of the news they receive, because they assume other actors of the network have carried the cost of validation. Therefore, the new phenomenon of “fake news” has emerged within the context of social networks. Online social networks, similarly to systems of systems, exhibit emergent properties, which make authentication processes difficult given the availability of multiple sources. In this study, we show how this conflict can be modeled as a volunteer's dilemma. We also show how the public contribution through news subscription (shared rewards) can impact the dominance of truth over fake news in the network.", "title": "" }, { "docid": "0105070bd23400083850627b1603af0b", "text": "This research describes the author's work on an automated vision and navigation framework; the research is conducted using a Kinect sensor as a low-cost platform for exploration purposes in the area of robot navigation. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filter to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open-source GMapping package was used as a basis for map generation and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation method used is the interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS).
Test results obtained with the mobile robot in both artificial and natural environments demonstrate the advantages of the proposed strategy. From experiments, it is found that the Kinect sensor produces a more accurate map than non-filtered laser range finder data, which is notable since a Kinect sensor is much cheaper than a laser range finder. Additional experiments were likewise conducted to test the performance of the mobile robot in frontier exploration of an unknown environment while performing SLAM with the proposed technique.", "title": "" }, { "docid": "fb15647d528df8b8613376066d9f5e68", "text": "This article describes the feature extraction methods of crop disease based on computer image processing technology in detail. Feature extraction methods based on three aspects, color, texture and shape, and their respective problems are introduced from the perspective of leaf lesions. Recent application research on image feature extraction in the field of crop disease is reviewed. The results of these feature extraction methods are analyzed, and future applications of image feature extraction techniques in intelligent crop disease detection are discussed.", "title": "" }, { "docid": "0b06586502303b6796f1f512129b5cbe", "text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood). The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.", "title": "" }, { "docid": "a1a04d251e19a43455787cefa02bae53", "text": "This paper provides an overview of CMOS-based sensor technology with specific attention placed on devices made through micromachining of CMOS substrates and thin films. Microstructures may be formed using pre-CMOS, intra-CMOS or post-CMOS fabrication approaches. To illustrate and motivate monolithic integration, a handful of microsystem examples, including inertial sensors, gravimetric chemical sensors, microphones, and a bone implantable sensor will be highlighted. Design constraints and challenges for CMOS-MEMS devices will be covered.", "title": "" }, { "docid": "bb774fed5d447fdc181cb712c74925c2", "text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time.
In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism.", "title": "" }, { "docid": "5bb9ca3c14dd84f1533789c3fe4bbd10", "text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.", "title": "" }, { "docid": "91713d85bdccb2c06d7c50365bd7022c", "text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MTJ) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).", "title": "" }, { "docid": "4d405c1c2919be01209b820f61876d57", "text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrated waveguide (SIW) technology. Eight sectorial-lines are formed by inserting radial slot-lines on the top plate of the SIW power divider. Each sectorial-line can be controlled independently with a high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen at a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The change in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.", "title": "" }, { "docid": "608ab1c58a84cd97f6444c5eff4bf8fc", "text": "Light detection and ranging (lidar) is becoming an increasingly popular technology among scientists for the development of predictive models of forest biophysical variables. However, before this technology can be adopted with confidence for long-term monitoring applications in Canada, robust models must be developed that can be applied and validated over large and complex forested areas. This will require “scaling-up” from current models developed from high-density lidar data to low-density data collected at higher altitudes. This paper investigates the effect of lowering the average point spacing of discrete lidar returns on models of forest biophysical variables.
Validation of results revealed that high-density models are well correlated with mean dominant height, basal area, crown closure, and average aboveground biomass (R2 = 0.84, 0.89, 0.60, and 0.91, respectively). Low-density models could not accurately predict crown closure (R2 = 0.36). However, they did provide slightly improved estimates for mean dominant height, basal area, and average aboveground biomass (R2 = 0.90, 0.91, and 0.92, respectively). Maps were generated and validated for the entire study area from the low-density models. The ability of low-density models to accurately map key biophysical variables is a positive indicator for the utility of lidar data for monitoring large forested areas.", "title": "" } ]
scidocsrr
aa7c85f32127a96c63fc22c07cbede29
Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities
[ { "docid": "7723c78b2ff8f9fdc285ee05b482efef", "text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.", "title": "" } ]
[ { "docid": "ff1834a5b249c436dfa5a48b5f464568", "text": "Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication.", "title": "" }, { "docid": "ca8d70248ef68c41f34eee375e511abf", "text": "While mobile advertisement is the dominant source of revenue for mobile apps, the usage patterns of mobile users, and thus their engagement and exposure times, may be in conflict with the effectiveness of current ads. Users engagement with apps can range from a few seconds to several minutes, depending on a number of factors such as users' locations, concurrent activities and goals. Despite the wide-range of engagement times, the current format of ad auctions dictates that ads are priced, sold and configured prior to actual viewing, that is regardless of the actual ad exposure time.\n We argue that the wealth of easy-to-gather contextual information on mobile devices is sufficient to allow advertisers to make better choices by effectively predicting exposure time. We analyze mobile device usage patters with a detailed two-week long user study of 37 users in the US and South Korea. After characterizing application session times, we use factor analysis to derive a simple predictive model and show that is able to offer improved accuracy compared to mean session time over 90% of the time. We make the case for including predicted ad exposure duration in the price of mobile advertisements and posit that such information could significantly impact the effectiveness of mobile ads by giving publishers the ability to tune campaigns for engagement length, and enable a more efficient market for ad impressions while lowering network utilization and device power consumption.", "title": "" }, { "docid": "a258c6b5abf18cb3880e4bc7a436c887", "text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. 
To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.", "title": "" }, { "docid": "c2e7425f719dd51eec0d8e180577269e", "text": "The most important way of communication among humans is language, and the primary medium used for it is speech. Speech recognizers make use of a parametric form of the signal to obtain the most important distinguishable features of the speech signal for recognition purposes. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark Frequency Cepstral Coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. An Artificial Neural Network is used as the back-end processor. The experimental results show that a better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all three types of words.", "title": "" }, { "docid": "04a85672df9da82f7e5da5b8b25c9481", "text": "This study investigated long-term effects of training on postural control using the model of deficits in activation of transversus abdominis (TrA) in people with recurrent low back pain (LBP). Nine volunteers with LBP attended four sessions for assessment and/or training (initial, two weeks, four weeks and six months). Training of repeated isolated voluntary TrA contractions was performed at the initial and two-week sessions with feedback from real-time ultrasound imaging. The home program involved training twice daily for four weeks. Electromyographic activity (EMG) of trunk and deltoid muscles was recorded with surface and fine-wire electrodes. Rapid arm movement and walking were performed at each session, and immediately after training on the first two sessions. Onset of trunk muscle activation relative to the prime mover deltoid during arm movements, and the coefficient of variation (CV) of EMG during the averaged gait cycle, were calculated. Over four weeks of training, onset of TrA EMG was earlier during arm movements and CV of TrA EMG was reduced (consistent with more sustained EMG activity). Changes were retained at six months follow-up (p<0.05). These results show persistence of motor control changes following training and demonstrate that this training approach leads to motor learning of automatic postural control strategies.", "title": "" }, { "docid": "f6342101ff8315bcaad4e4f965e6ba8a", "text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induces additional features in the Doppler frequency spectra. These features are called the micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].", "title": "" }, { "docid": "df677d32bdbba01d27c8eb424b9893e9", "text": "Active learning is an area of machine learning examining strategies for allocation of finite resources, particularly human labeling efforts and to an extent feature extraction, in situations where available data exceeds available resources.
In this open problem paper, we motivate the necessity of active learning in the security domain, identify problems caused by the application of present active learning techniques in adversarial settings, and propose a framework for experimentation and implementation of active learning systems in adversarial contexts. More than other contexts, adversarial contexts particularly need active learning as ongoing attempts to evade and confuse classifiers necessitate constant generation of labels for new content to keep pace with adversarial activity. Just as traditional machine learning algorithms are vulnerable to adversarial manipulation, we discuss assumptions specific to active learning that introduce additional vulnerabilities, as well as present vulnerabilities that are amplified in the active learning setting. Lastly, we present a software architecture, Security-oriented Active Learning Testbed (SALT), for the research and implementation of active learning applications in adversarial contexts.", "title": "" }, { "docid": "8439309414a9999abbd0e0be95a25fb8", "text": "Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python's large overhead for numerical loops and the difficulty of efficiently using existing C and Fortran code, which Cython can interact with natively.", "title": "" }, { "docid": "89238dd77c0bf0994b53190078eb1921", "text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where we consider a unit to be a variable length number of measures of music. We first examine whether a unit selection method that is restricted to a finite-size unit library can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forced-choice ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.", "title": "" }, { "docid": "410bd8286a87a766dd221c1269f05c04", "text": "The low- and mid-frequency model of the transformer with resistive load is analysed for different values of coupling coefficients. The model, comprising coupling-dependent inductances, is used to derive the following characteristics: voltage gain, current gain, bandwidth, input impedance, and transformer efficiency. It is shown that in the low- and mid-frequency range, the turns ratio between the windings is a strong function of the coupling coefficient, i.e., if the coupling coefficient decreases, then the effective turns ratio reduces. A practical transformer was designed, simulated, and tested. It was observed that the magnitudes of the voltage transfer function and current transfer function exhibit a maximum value each at a different value of coupling coefficient.
In addition, as the coupling coefficient decreases, the transformer bandwidth also decreases. Furthermore, analytical expressions for the transformer efficiency for resistive loads are derived and its variation with respect to frequency at different coupling coefficients is investigated. It is shown that the transformer efficiency is maximum at any coupling coefficient if the input resistance is equal to the load resistance. Experimental validation of the theoretical results was performed using a practical transformer set-up. The theoretical predictions were found to be in good agreement with the experimental results.", "title": "" }, { "docid": "2ea886246d4f59d88c3eabd99c60dd5d", "text": "This paper proposes a Modified Particle Swarm Optimization with Time Varying Acceleration Coefficients (MPSO-TVAC) for solving the economic load dispatch (ELD) problem. Due to prohibited operating zones (POZ) and ramp rate limits of the practical generators, the ELD problem becomes a nonlinear and nonconvex optimization problem. Furthermore, the ELD problem may be more complicated if transmission losses are considered. Particle swarm optimization (PSO) is one of the famous heuristic methods for solving nonconvex problems. However, this method may become trapped in local minima, especially for multimodal problems. To improve the solution quality and robustness of the PSO algorithm, a new best neighbour particle called ‘rbest’ is proposed. The rbest provides extra information for each particle that is randomly selected from other best particles in order to diversify the movement of particles and avoid premature convergence. The effectiveness of the MPSO-TVAC algorithm is tested on different power systems with POZ, ramp-rate limits and transmission loss constraints. To validate the performance of the proposed algorithm, comparative studies have been carried out in terms of convergence characteristic, solution quality, computation time and robustness. Simulation results show that the proposed MPSO-TVAC algorithm has good solution quality and is more robust than other methods reported in previous work.", "title": "" }, { "docid": "aa64bd9576044ec5e654c9f29c4f7d84", "text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care. Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports.
Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.", "title": "" }, { "docid": "06f6ffa9c1c82570b564e1cd0f719950", "text": "Widespread use of biometric architectures implies the need to secure highly sensitive data to respect the privacy rights of the users. In this paper, we discuss the following question: To what extent can biometric designs be characterized as Privacy Enhancing Technologies? The terms of privacy and security for biometric schemes are defined, while current regulations for the protection of biometric information are presented. Additionally, we analyze and compare cryptographic techniques for secure biometric designs. Finally, we introduce a privacy-preserving approach for biometric authentication in mobile electronic financial applications. Our model utilizes the mechanism of pseudonymous biometric identities for secure user registration and authentication. We discuss how the privacy requirements for the processing of biometric data can be met in our scenario. This work attempts to contribute to the development of privacy-by-design biometric technologies.", "title": "" }, { "docid": "74a91327b85ac9681f618d4ba6a86151", "text": "In this paper, a miniaturized planar antenna with enhanced bandwidth is designed for the ISM 433 MHz applications. The antenna is realized by cascading two resonant structures with meander lines, thus introducing two different radiating branches to realize two neighboring resonant frequencies. The techniques of shorting pin and novel ground plane are adopted for bandwidth enhancement. Combined with these structures, a novel antenna with a total size of 23 mm × 49.5 mm for the ISM band application is developed and fabricated. Measured results show that the proposed antenna has good performance with the -10 dB impedance bandwidth is about 12.5 MHz and the maximum gain is about -2.8 dBi.", "title": "" }, { "docid": "f0f88be4a2b7619f6fb5cdcca1741d1f", "text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. 
We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)", "title": "" }, { "docid": "f3cb18c15459dd7a9c657e32442bd289", "text": "The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation (von Ahn 2005) to supplement automated ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here.", "title": "" }, { "docid": "400048566b24d7527845f7c6b6d86fc0", "text": "In brief: Diagnosis of skier's thumb - a common sports injury - is based on physical examination and history of the injury. The most important findings from the physical exam are point tenderness over the ulnar collateral ligament and instability, which is tested with the thumb at 0° and at 20° to 30° of flexion. Grade 1 and 2 injuries, which involve torn fibers but no loss of integrity, can be treated with casting and/or splinting and physical therapy. Grade 3 injuries involve complete disruption of the ligament and usually require surgical repair.
Results from treatment are generally excellent, and with appropriate rehabilitation, athletes recover pinch and grip strength and return to sports.", "title": "" }, { "docid": "06d2d07ed7532aa19b779607a21afef7", "text": "BACKGROUND\nMyocardium irreversibly injured by ischemic stress must be efficiently repaired to maintain tissue integrity and contractile performance. Macrophages play critical roles in this process. These cells transform across a spectrum of phenotypes to accomplish diverse functions ranging from mediating the initial inflammatory responses that clear damaged tissue to subsequent reparative functions that help rebuild replacement tissue. Although macrophage transformation is crucial to myocardial repair, events governing this transformation are poorly understood.\n\n\nMETHODS\nHere, we set out to determine whether innate immune responses triggered by cytoplasmic DNA play a role.\n\n\nRESULTS\nWe report that ischemic myocardial injury, along with the resulting release of nucleic acids, activates the recently described cyclic GMP-AMP synthase-stimulator of interferon genes pathway. Animals lacking cyclic GMP-AMP synthase display significantly improved early survival after myocardial infarction and diminished pathological remodeling, including ventricular rupture, enhanced angiogenesis, and preserved ventricular contractile function. Furthermore, cyclic GMP-AMP synthase loss of function abolishes the induction of key inflammatory programs such as inducible nitric oxide synthase and promotes the transformation of macrophages to a reparative phenotype, which results in enhanced repair and improved hemodynamic performance.\n\n\nCONCLUSIONS\nThese results reveal, for the first time, that the cytosolic DNA receptor cyclic GMP-AMP synthase functions during cardiac ischemia as a pattern recognition receptor in the sterile immune response. Furthermore, we report that this pathway governs macrophage transformation, thereby regulating postinjury cardiac repair. Because modulators of this pathway are currently in clinical use, our findings raise the prospect of new treatment options to combat ischemic heart disease and its progression to heart failure.", "title": "" }, { "docid": "f443e22db2a2313b47168740662ad187", "text": "The tunneling field-effect transistor (TFET) has emerged as an alternative to conventional CMOS by enabling supply voltage (VDD) scaling in ultra-low power, energy efficient computing, due to its sub-60 mV/decade sub-threshold slope (SS). Given its unique device characteristics, such as the asymmetrical source/drain design induced uni-directional conduction, enhanced on-state Miller capacitance effect and steep switching at low voltages, TFET based circuit design requires strong interactions between the device-level and the circuit-level to explore the performance benefits, with certain modifications of the conventional CMOS circuits to achieve the functionality and optimal energy efficiency. Because TFET operates at a low supply voltage range (VDD < 0.5 V) to outperform CMOS, reliability issues can have profound impact on the circuit design from the practical application perspective. In this review paper, we present recent developments in Tunnel FET device design and modeling techniques for circuit implementation and performance benchmarking. We focus on the reliability issues such as soft-error, electrical noise and process variation, and their impact on TFET based circuit performance compared to sub-threshold CMOS.
Analytical models of electrical noise and process variation are also discussed for circuit-level", "title": "" }, { "docid": "1e25480ef6bd5974fcd806aac7169298", "text": "Alphabetical ciphers have been used for centuries for inducing confusion in messages, but there are some drawbacks associated with classical alphabetic techniques, such as concealment of the key and plaintext. In this paper we suggest an encryption technique that is a blend of both classical encryption and modern techniques; this hybrid technique will be superior in terms of security to average classical ciphers.", "title": "" } ]
scidocsrr
2ea6466de9702c55fb87df541947b9d0
Searching by Talking: Analysis of Voice Queries on Mobile Web Search
[ { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" } ]
[ { "docid": "f4abfe0bb969e2a6832fa6317742f202", "text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.", "title": "" }, { "docid": "b0c60343724a49266fac2d2f4c2d37d3", "text": "In the Western world, aging is a growing problem of the society and computer assisted treatments can facilitate the telemedicine for old people or it can help in rehabilitations of patients after sport accidents in far locations. Physical exercises play an important role in physiotherapy and RGB-D devices can be utilized to recognize them in order to make interactive computer healthcare applications in the future. A practical model definition is introduced in this paper to recognize different exercises with Asus Xtion camera. One of the contributions is the extendable recognition models to detect other human activities with noisy sensors, but avoiding heavy data collection. The experiments show satisfactory detection performance without any false positives which is unique in the field to the best of the author knowledge. The computational costs are negligible thus the developed models can be suitable for embedded systems.", "title": "" }, { "docid": "d7bb22eefbff0a472d3e394c61788be2", "text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ca9c4512d2258a44590a298879219970", "text": "I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. 
Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the Expectation-Maximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior.", "title": "" }, { "docid": "9584909fc62cca8dc5c9d02db7fa7e5d", "text": "As the nature of many materials handling tasks has begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level.
This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight into the nature of spine loads and their potential risk to the low back during modern exertions.", "title": "" }, { "docid": "4cc4c8fd07f30b5546be2376c1767c19", "text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.", "title": "" }, { "docid": "8c174dbb8468b1ce6f4be3676d314719", "text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.", "title": "" }, { "docid": "8af2e53cb3f77a2590945f135a94279b", "text": "Time series data are a ubiquitous and important data source in many domains. Most companies and organizations rely on this data for critical tasks like decision-making, planning, and analytics in general. Usually, all these tasks focus on actual data representing organization and business processes. In order to assess the robustness of current systems and methods, it is also desirable to focus on time-series scenarios which represent specific time-series features.
This work presents a generally applicable and easy-to-use method for the feature-driven generation of time series data. Our approach extracts descriptive features of a data set and allows the construction of a specific version by means of the modification of these features.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "1785d1d7da87d1b6e5c41ea89e447bf9", "text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.", "title": "" }, { "docid": "924768b271caa9d1ba0cb32ab512f92e", "text": "Traditional keyboard- and mouse-based presentation prevents lecturers from interacting with the audience freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with a 3-axis accelerometer and gyro and transmitted to the host side through Bluetooth; then we use Bayesian change point detection to segment continuous gesture series and an HMM to recognize the gesture. In consequence, SlideShow can carry out the corresponding operations on PowerPoint (PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation.
Both the experimental and testing results show that our approach is practical, useful and convenient.", "title": "" }, { "docid": "d2f64c21d0a3a54b4a2b75b7dd7df029", "text": "The concept of autopoiesis is due to Maturana and Varela [8, 9]. The aim of this article is to revisit the concepts of autopoiesis and cognition. In their early work together, Maturana and Varela developed the idea of autopoiesis.", "title": "" }, { "docid": "566c6e3f9267fc8ccfcf337dc7aa7892", "text": "Research into the values motivating unsustainable behavior has generated unique insight into how NGOs and environmental campaigns contribute toward successfully fostering significant and long-term behavior change, yet thus far this research has not been applied to the domain of sustainable HCI. We explore the implications of this research as it relates to the potential limitations of current approaches to persuasive technology, and what it means for designing higher impact interventions. As a means of communicating these implications to be readily understandable and implementable, we develop a set of antipatterns to describe persuasive technology approaches that values research suggests are unlikely to yield significant sustainability wins, and a complementary set of patterns to describe new guidelines for what may become persuasive technology best practice.", "title": "" }, { "docid": "f48d02ff3661d3b91c68d6fcf750f83e", "text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.", "title": "" }, { "docid": "c3558d8f79cd8a7f53d8b6073c9a7db3", "text": "De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net.
The run time of this protocol is highly dependent on the size and complexity of the data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.", "title": "" }, { "docid": "745cdbb442c73316f691dc20cc696f31", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits are a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "f90784e4bdaad1f8ecb5941867a467cf", "text": "Social Network (SN) sites are becoming very popular and the number of users is increasing rapidly. With that increase, however, comes an increase in security threats that affect users' privacy, identity, and confidentiality. Different research groups have highlighted the security threats in SNs and attempted to offer solutions to these issues. In this paper we survey several examples of this research and highlight the approaches taken. All the models we surveyed focused on protecting users' information, yet they failed to cover other important issues. For example, none of the mechanisms gave users control over what others can reveal about them, and proper encryption of images has still not been achieved. In general, stronger security measures affect a system's performance in terms of speed and response time, yet this trade-off was not discussed or addressed in any of the models we surveyed.", "title": "" }, { "docid": "a38986fcee27fb733ec51cf83771a85f", "text": "A tunable broadband inverted microstrip line phase shifter filled with Liquid Crystals (LCs) is investigated between 1.125 GHz and 35 GHz at room temperature. The effective dielectric anisotropy is tuned by a DC voltage of up to 30 V. In addition to standard LCs like K15 (5CB), a novel highly anisotropic LC mixture is characterized by a resonator method at 8.5 GHz, showing a very high dielectric anisotropy Δn of 0.32 for the novel mixture compared to 0.13 for K15. These LCs are filled into two inverted microstrip line phase shifter devices with different polyimide films and heights. With a physical length of 50 mm, the insertion losses are about 4 dB for the novel mixture compared to 6 dB for K15 at 24 GHz. A differential phase shift of 360° can be achieved at 30 GHz with the novel mixture. The figure-of-merit of the phase shifter exceeds 110°/dB for the novel mixture, compared to 21°/dB for K15, at 24 GHz. To our knowledge, this is the best value above 20 GHz demonstrated to date for a tunable phase shifter based on nonlinear dielectrics at room temperature. This substantial progress opens up entirely new low-cost LC applications beyond optics.", "title": "" }, { "docid": "ab0c80a10d26607134828c6b350089aa", "text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors.
A popular animal model, the nematode Caenorhabditis elegans, has frequently been used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.", "title": "" } ]
scidocsrr