Dataset schema (field, type, value range):

id               stringlengths     9 – 10
submitter        stringlengths     5 – 47
authors          stringlengths     5 – 1.72k
title            stringlengths     11 – 234
comments         stringlengths     1 – 491
journal-ref      stringlengths     4 – 396
doi              stringlengths     13 – 97
report-no        stringlengths     4 – 138
categories       stringclasses     1 value
license          stringclasses     9 values
abstract         stringlengths     29 – 3.66k
versions         listlengths       1 – 21
update_date      int64             1,180B – 1,718B
authors_parsed   sequencelengths   1 – 98
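The schema above suggests records whose `versions` and `authors_parsed` fields are JSON-structured. A minimal parsing sketch, assuming a JSON-lines storage layout (only the field names and the first record's values come from the dump):

```python
import json

# One record from the dump, re-serialized as a JSON line; in a full dump,
# each line of the file would hold one such record (the file layout is an
# assumption -- only the field names come from the schema above).
line = json.dumps({
    "id": "1201.2004",
    "submitter": "Md. Amjad Hossain",
    "title": "Optimal Fuzzy Model Construction with Statistical "
             "Information using Genetic Algorithm",
    "categories": "cs.AI",
    "versions": [{"version": "v1",
                  "created": "Tue, 10 Jan 2012 10:14:33 GMT"}],
    "authors_parsed": [["Hossain", "Md. Amjad", ""],
                       ["Shill", "Pintu Chandra", ""],
                       ["Sarker", "Bishnu", ""],
                       ["Murase", "Kazuyuki", ""]],
})

record = json.loads(line)
# Rebuild "Last, First" names from the authors_parsed triples.
authors = ["{}, {}".format(last, first)
           for last, first, _ in record["authors_parsed"]]
print(record["id"], len(record["versions"]), authors[0])
# prints: 1201.2004 1 Hossain, Md. Amjad
```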
1201.2004
Md. Amjad Hossain
Md. Amjad Hossain, Pintu Chandra Shill, Bishnu Sarker, and Kazuyuki Murase
Optimal Fuzzy Model Construction with Statistical Information using Genetic Algorithm
null
null
10.5121/ijcsit.2011.3619
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fuzzy rule-based models can approximate any continuous function to any degree of accuracy on a compact domain. Most of the FLC design process, however, relies on the heuristic knowledge of experienced operators. To make the design process automatic, we present a genetic approach to learn fuzzy rules as well as membership function parameters. Moreover, several statistical information criteria, such as the Akaike information criterion (AIC), the Bhansali-Downham information criterion (BDIC), and the Schwarz-Rissanen information criterion (SRIC), are used to construct optimal fuzzy models by reducing the number of fuzzy rules. A genetic scheme is used to design a Takagi-Sugeno-Kang (TSK) model, identifying both the antecedent rule parameters and the consequent parameters. Computer simulations are presented confirming the performance of the constructed fuzzy logic controller.
[ { "version": "v1", "created": "Tue, 10 Jan 2012 10:14:33 GMT" } ]
1,326,240,000,000
[ [ "Hossain", "Md. Amjad", "" ], [ "Shill", "Pintu Chandra", "" ], [ "Sarker", "Bishnu", "" ], [ "Murase", "Kazuyuki", "" ] ]
1201.2711
Fionn Murtagh
Fionn Murtagh
Ultrametric Model of Mind, I: Review
20 pages, 2 figures, 46 references. arXiv admin note: substantial text overlap with arXiv:0709.0116, arXiv:0805.2744, and arXiv:1105.0121 (V3: 2 typos corrected)
p-Adic Numbers, Ultrametric Analysis and Applications, 4, 193-206, 2012
10.1134/S2070046612030041
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We mathematically model Ignacio Matte Blanco's principles of symmetric and asymmetric being through use of an ultrametric topology. We use for this the highly regarded 1975 book of this Chilean psychiatrist and psychoanalyst (born 1908, died 1995). Such an ultrametric model corresponds to hierarchical clustering in empirical data, e.g. text. We show how an ultrametric topology can be used as a mathematical model for the structure of the logic that reflects or expresses Matte Blanco's symmetric being, and hence of the reasoning and thought processes involved in conscious reasoning or in reasoning that is lacking, perhaps entirely, in consciousness or awareness of itself. In a companion paper we study how symmetric (in Matte Blanco's sense) reasoning can be demarcated in a context of symmetric and asymmetric reasoning provided by narrative text.
[ { "version": "v1", "created": "Fri, 13 Jan 2012 00:17:17 GMT" }, { "version": "v2", "created": "Mon, 6 Feb 2012 19:47:26 GMT" }, { "version": "v3", "created": "Mon, 16 Jul 2012 12:43:58 GMT" } ]
1,474,329,600,000
[ [ "Murtagh", "Fionn", "" ] ]
1201.3107
Li Yang
Li Yang, Yuhui Wang
Tacit knowledge mining algorithm based on linguistic truth-valued concept lattice
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper continues our research on the linguistic truth-valued concept lattice. To provide a mathematical tool for mining tacit knowledge, we establish a concrete model of a 6-ary linguistic truth-valued concept lattice and introduce a mining algorithm based on structure consistency. Specifically, we use attributes to describe knowledge, propose the 6-ary linguistic truth-valued attribute extended context and the congener context to characterize tacit knowledge, and investigate the necessary and sufficient conditions for forming tacit knowledge. We also give algorithms for generating the linguistic truth-valued congener context and for constructing the linguistic truth-valued concept lattice.
[ { "version": "v1", "created": "Sun, 15 Jan 2012 17:33:28 GMT" } ]
1,326,758,400,000
[ [ "Yang", "Li", "" ], [ "Wang", "Yuhui", "" ] ]
1201.3204
Alex Fukunaga
Akihiro Kishimoto, Alex Fukunaga, Adi Botea
Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy
in press, to appear in Artificial Intelligence
Artificial Intelligence (2013), vol. 195, pp. 222-248
10.1016/j.artint.2012.10.007
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared memory, multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters, using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems which require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid which combines HDA* and TDS to exploit strengths of both algorithms is proposed and evaluated.
[ { "version": "v1", "created": "Mon, 16 Jan 2012 10:31:47 GMT" }, { "version": "v2", "created": "Thu, 25 Oct 2012 03:39:16 GMT" } ]
1,426,809,600,000
[ [ "Kishimoto", "Akihiro", "" ], [ "Fukunaga", "Alex", "" ], [ "Botea", "Adi", "" ] ]
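The hash-based work distribution at the core of HDA*, as described in the abstract above, can be illustrated with a minimal sketch (the worker count, state encoding, and queue structure are illustrative assumptions, not details from the paper):

```python
# Each search state is assigned to a worker by hashing the state, so
# duplicate states always land on the same worker -- this is what enables
# distributed duplicate detection in HDA*. Worker count is an assumption.
import hashlib

NUM_WORKERS = 8

def owner(state):
    """Map a search state (here: a tuple encoding) to a worker id."""
    digest = hashlib.md5(repr(state).encode()).hexdigest()
    return int(digest, 16) % NUM_WORKERS

# Expanding a state sends its successors to their owning workers' queues.
queues = [[] for _ in range(NUM_WORKERS)]
for successor in [(1, 2, 3), (3, 2, 1), (1, 2, 3)]:  # duplicate on purpose
    queues[owner(successor)].append(successor)

# The duplicate (1, 2, 3) ends up in the same worker's queue both times,
# where it can be detected locally without global synchronization.
```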
1201.3408
Velimir Ilic
Milos B. Djuric, Velimir M. Ilic and Miomir S. Stankovic
The computation of first order moments on junction trees
9 pages, 1 figure
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review existing methods for the computation of first order moments on junction trees using the Shafer-Shenoy algorithm. First, we cast the computation of first order moments as a vertex problem in junction trees; the problem is then solved using memory space of the order of the junction tree edge-set cardinality. After that, we consider two algorithms, the Lauritzen-Nilsson algorithm and the Mau\'a et al. algorithm, which compute the first order moments as a normalization problem in the junction tree, using memory space of the order of the junction tree leaf-set cardinality.
[ { "version": "v1", "created": "Tue, 17 Jan 2012 01:28:55 GMT" } ]
1,326,844,800,000
[ [ "Djuric", "Milos B.", "" ], [ "Ilic", "Velimir M.", "" ], [ "Stankovic", "Miomir S.", "" ] ]
1201.4080
Ingmar Steiner
Ingmar Steiner (INRIA Lorraine - LORIA), Slim Ouni (INRIA Lorraine - LORIA)
Progress in animation of an EMA-controlled tongue model for acoustic-visual speech synthesis
null
Elektronische Sprachsignalverarbeitung 2011 TUDpress (Ed.) (2011) 245-252
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a technique for the animation of a 3D kinematic tongue model, one component of the talking head of an acoustic-visual (AV) speech synthesizer. The skeletal animation approach is adapted to make use of a deformable rig controlled by tongue motion capture data obtained with electromagnetic articulography (EMA), while the tongue surface is extracted from volumetric magnetic resonance imaging (MRI) data. Initial results are shown and future work outlined.
[ { "version": "v1", "created": "Thu, 19 Jan 2012 15:29:56 GMT" } ]
1,327,017,600,000
[ [ "Steiner", "Ingmar", "", "INRIA Lorraine - LORIA" ], [ "Ouni", "Slim", "", "INRIA Lorraine -\n LORIA" ] ]
1201.5426
M. H. van Emden
A. Nait Abdallah and M.H. van Emden
Constraint Propagation as Information Maximization
21 pages
null
null
Research Report 746, Dept. of Computer Science, University of Western Ontario, Canada
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper draws on diverse areas of computer science to develop a unified view of computation: (1) Optimization in operations research, where a numerical objective function is maximized under constraints, is generalized from the numerical total order to a non-numerical partial order that can be interpreted in terms of information. (2) Relations are generalized so that there are relations of which the constituent tuples have numerical indexes, whereas in other relations these indexes are variables. The distinction is essential in our definition of constraint satisfaction problems. (3) Constraint satisfaction problems are formulated in terms of semantics of conjunctions of atomic formulas of predicate logic. (4) Approximation structures, which are available for several important domains, are applied to solutions of constraint satisfaction problems. As application we treat constraint satisfaction problems over reals. These cover a large part of numerical analysis, most significantly nonlinear equations and inequalities. The chaotic algorithm analyzed in the paper combines the efficiency of floating-point computation with the correctness guarantees arising from our logico-mathematical model of constraint-satisfaction problems.
[ { "version": "v1", "created": "Thu, 26 Jan 2012 01:42:18 GMT" }, { "version": "v2", "created": "Thu, 7 Feb 2013 23:05:29 GMT" } ]
1,360,540,800,000
[ [ "Abdallah", "A. Nait", "" ], [ "van Emden", "M. H.", "" ] ]
1201.5472
Pierrick Tranouez
Pierrick Tranouez (LITIS), Eric Daud\'e (IDEES), Patrice Langlois (IDEES)
A multiagent urban traffic simulation
arXiv admin note: significant text overlap with arXiv:0909.1021 and arXiv:0910.1026
Journal of Nonlinear Systems and Applications 1, 3 (2010) 9 pp (in print)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We built a multiagent simulation of urban traffic to model both ordinary traffic and emergency or crisis mode traffic. This simulation first builds a modeled road network based on detailed geographical information. On this network, the simulation creates two populations of agents: the Transporters and the Mobiles. Transporters embody the roads themselves; they are utilitarian and meant to handle the low level realism of the simulation. Mobile agents embody the vehicles that circulate on the network. They have one or several destinations they try to reach, using initially their beliefs about the structure of the network (length of the edges, speed limits, number of lanes etc.). Nonetheless, when confronted with a dynamic, emergence-prone environment (other vehicles, unexpectedly closed ways or lanes, traffic jams etc.), the rather reactive agent will activate more cognitive modules to adapt its beliefs, desires and intentions. It may change its destination(s), change the tactics used to reach the destination (favoring less used roads, following other agents, using general headings), etc. We describe the current validation of our model and the next planned improvements, both in validation and in functionalities.
[ { "version": "v1", "created": "Thu, 26 Jan 2012 10:15:09 GMT" } ]
1,327,622,400,000
[ [ "Tranouez", "Pierrick", "", "LITIS" ], [ "Daudé", "Eric", "", "IDEES" ], [ "Langlois", "Patrice", "", "IDEES" ] ]
1201.5841
Alexandre Castro
Alexandre de Castro
The thermodynamic cost of fast thought
null
null
10.1007/s11023-013-9302-x
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After more than sixty years, Shannon's research [1-3] continues to raise fundamental questions, such as the one formulated by Luce [4,5], which is still unanswered: "Why is information theory not very applicable to psychological problems, despite apparent similarities of concepts?" On this topic, Pinker [6], one of the foremost defenders of the computational theory of mind [6], has argued that thought is simply a type of computation, and that the gap between human cognition and computational models may be illusory. In this context, in his latest book, titled Thinking, Fast and Slow [8], Kahneman [7,8] provides further theoretical interpretation by differentiating two assumed systems of the cognitive functioning of the human mind: intuition (system 1), an associative (automatic, fast and perceptual) machine, and reasoning (system 2), which is voluntary and operates logico-deductively. In this paper, we propose an ansatz inspired by Ausubel's learning theory for investigating, from the constructivist perspective [9-12], information processing in the working memory of cognizers. Specifically, a thought experiment is performed utilizing the mind of a dual-natured creature known as Maxwell's demon: a tiny "man-machine" solely equipped with the characteristics of system 1, which prevents it from reasoning. The calculation presented here shows that [...]. This result indicates that when system 2 is shut down, an intelligent being and a binary machine incur the same energy cost per unit of information processed, which mathematically proves the computational attribute of system 1, as Kahneman [7,8] theorized. This finding links information theory to human psychological features and opens a new path toward the conception of a multi-bit reasoning machine.
[ { "version": "v1", "created": "Fri, 27 Jan 2012 17:25:29 GMT" }, { "version": "v2", "created": "Fri, 10 Feb 2012 03:11:06 GMT" }, { "version": "v3", "created": "Mon, 12 Nov 2012 12:44:49 GMT" }, { "version": "v4", "created": "Sat, 26 Jan 2013 14:37:56 GMT" } ]
1,471,132,800,000
[ [ "de Castro", "Alexandre", "" ] ]
1201.6511
Gilles Falquet
Claudine M\'etral, Gilles Falquet, Kostas Karatzas
Ontologies for the Integration of Air Quality Models and 3D City Models
null
In Conceptual Models for Practitioners, J. Teller, C. Tweed, G. Rabino (Eds.), Societ\`a Editrice Esculapio, Bologna, 2008
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an "interoperable" way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator.
[ { "version": "v1", "created": "Tue, 31 Jan 2012 11:31:50 GMT" } ]
1,328,054,400,000
[ [ "Métral", "Claudine", "" ], [ "Falquet", "Gilles", "" ], [ "Karatzas", "Kostas", "" ] ]
1202.0440
Matej Hoffmann
Matej Hoffmann and Rolf Pfeifer
The implications of embodiment for behavior and cognition: animal and robotic case studies
Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-58
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations.
[ { "version": "v1", "created": "Thu, 2 Feb 2012 14:25:38 GMT" } ]
1,328,227,200,000
[ [ "Hoffmann", "Matej", "" ], [ "Pfeifer", "Rolf", "" ] ]
1202.0837
Jose Hernandez-Orallo
Javier Insa-Cabrera, Jose-Luis Benacloch-Ayuso, Jose Hernandez-Orallo
On the influence of intelligence in (social) intelligence testing environments
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyses the influence of including agents of different degrees of intelligence in a multiagent system. The goal is to better understand how we can develop intelligence tests that can evaluate social intelligence. We analyse several reinforcement algorithms in several contexts of cooperation and competition. Our experimental setting is inspired by the recently developed Darwin-Wallace distribution.
[ { "version": "v1", "created": "Fri, 3 Feb 2012 22:38:04 GMT" } ]
1,426,809,600,000
[ [ "Insa-Cabrera", "Javier", "" ], [ "Benacloch-Ayuso", "Jose-Luis", "" ], [ "Hernandez-Orallo", "Jose", "" ] ]
1202.1886
Sodbileg Shirmen
N.Ugtakhbayar, D.Battulga and Sh.Sodbileg
Classification of artificial intelligence IDS for Smurf attack
6 pages, 5 figures, 1 table
IJAIA (2012);
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many methods have been developed to secure the network infrastructure and communication over the Internet. Intrusion detection is a relatively new addition to such techniques. Intrusion detection systems (IDS) are used to find out whether someone has intruded into, or is trying to break into, the network. One major problem is the number of intrusions, which increases day by day. We need to obtain network attack information using an IDS and then analyse its effect. Because purely signature-based IDSs cannot detect every new intrusion, it is important to introduce artificial intelligence (AI) methods and techniques into IDS. The introduction of AI also makes the normalization of intrusion data important. This work focuses on the classification of AI-based IDS techniques, which will help in designing better intrusion detection systems in the future. We also propose a support vector machine for IDS to detect the Smurf attack with reliable accuracy.
[ { "version": "v1", "created": "Thu, 9 Feb 2012 04:28:16 GMT" } ]
1,328,832,000,000
[ [ "Ugtakhbayar", "N.", "" ], [ "Battulga", "D.", "" ], [ "Sodbileg", "Sh.", "" ] ]
1202.1891
Ei Shwe Sin
Ei Shwe Sin, Nang Saing Moon Kham
Hyper heuristic based on great deluge and its variants for exam timetabling problem
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
University timetabling problems occur annually, and they are often hard and time-consuming to solve. This paper describes a hyper-heuristic (HH) method based on Great Deluge (GD) and its variants for solving large, highly constrained timetabling problems from different domains. Generally, a hyper-heuristic framework has two main stages: heuristic selection and move acceptance. This paper emphasizes the latter stage in developing the HH framework. The main contribution of this paper is that Great Deluge (GD) and its variants, Flex Deluge (FD), Non-linear Great Deluge (NLGD), and Extended Great Deluge (EGD), are used as move acceptance methods in HH, combined with reinforcement learning (RL). These HH methods are tested on exam timetabling benchmark problems, and the best results and a comparison analysis are reported.
[ { "version": "v1", "created": "Thu, 9 Feb 2012 05:51:18 GMT" } ]
1,328,832,000,000
[ [ "Sin", "Ei Shwe", "" ], [ "Kham", "Nang Saing Moon", "" ] ]
1202.1945
Jayabrabu R
R. Jayabrabu, V. Saravanan, K. Vivekanandan
A framework: Cluster detection and multidimensional visualization of automated data mining using intelligent agents
15 pages
International Journal of Artificial Intelligence & Applications (IJAIA), Vol.3, No.1, January 2012
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data mining techniques play a vital role in extracting required knowledge and finding unsuspected information to support strategic decision making in a novel way that is understandable by domain experts. A generalized framework is proposed that takes non-domain experts into account during the mining process, for better understanding, better decision making, and better discovery of new patterns, by selecting suitable data mining techniques based on the user profile by means of intelligent agents. Keywords: data mining techniques, intelligent agents, user profile, multidimensional visualization, knowledge discovery.
[ { "version": "v1", "created": "Thu, 9 Feb 2012 10:57:53 GMT" } ]
1,328,832,000,000
[ [ "Jayabrabu", "R.", "" ], [ "Saravanan", "V.", "" ], [ "Vivekanandan", "K.", "" ] ]
1202.3698
Udi Apsel
Udi Apsel, Ronen I. Brafman
Extended Lifted Inference with Joint Formulas
null
null
null
UAI-P-2011-PG-11-18
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The First-Order Variable Elimination (FOVE) algorithm allows exact inference to be applied directly to probabilistic relational models, and has proven to be vastly superior to the application of standard inference methods on a grounded propositional model. Still, FOVE operators can be applied only under restricted conditions, often forcing one to resort to propositional inference. This paper aims to extend the applicability of FOVE by providing two new model conversion operators: the first and primary one is joint formula conversion; the second is just-different counting conversion. These new operations allow efficient inference methods to be applied directly to relational models where no existing efficient method could be applied hitherto. In addition, aided by these capabilities, we show how to adapt FOVE to provide exact solutions to Maximum Expected Utility (MEU) queries over relational models for decision making under uncertainty. Experimental evaluations show our algorithms to provide significant speedup over the alternatives.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Apsel", "Udi", "" ], [ "Brafman", "Ronen I.", "" ] ]
1202.3699
John Asmuth
John Asmuth, Michael L. Littman
Learning is planning: near Bayes-optimal reinforcement learning via Monte-Carlo tree search
null
null
null
UAI-P-2011-PG-19-26
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayes-optimal behavior, while well-defined, is often difficult to achieve. Recent advances in the use of Monte-Carlo tree search (MCTS) have shown that it is possible to act near-optimally in Markov Decision Processes (MDPs) with very large or infinite state spaces. Bayes-optimal behavior in an unknown MDP is equivalent to optimal behavior in the known belief-space MDP, although the size of this belief-space MDP grows exponentially with the amount of history retained, and is potentially infinite. We show how an agent can use one particular MCTS algorithm, Forward Search Sparse Sampling (FSSS), in an efficient way to act nearly Bayes-optimally for all but a polynomial number of steps, assuming that FSSS can be used to act efficiently in any possible underlying MDP.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Asmuth", "John", "" ], [ "Littman", "Michael L.", "" ] ]
1202.3707
Shaunak Chatterjee
Shaunak Chatterjee, Stuart Russell
A temporally abstracted Viterbi algorithm
null
null
null
UAI-P-2011-PG-96-104
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical problem abstraction, when applicable, may offer exponential reductions in computational complexity. Previous work on coarse-to-fine dynamic programming (CFDP) has demonstrated this possibility using state abstraction to speed up the Viterbi algorithm. In this paper, we show how to apply temporal abstraction to the Viterbi problem. Our algorithm uses bounds derived from analysis of coarse timescales to prune large parts of the state trellis at finer timescales. We demonstrate improvements of several orders of magnitude over the standard Viterbi algorithm, as well as significant speedups over CFDP, for problems whose state variables evolve at widely differing rates.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Chatterjee", "Shaunak", "" ], [ "Russell", "Stuart", "" ] ]
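For reference, the baseline recursion that both CFDP and the temporal-abstraction method in the abstract above accelerate is the standard Viterbi algorithm; a minimal sketch over a toy two-state HMM (all model numbers are illustrative, and the pruning itself is not implemented here):

```python
# Standard Viterbi over a toy 2-state HMM. The paper's contribution is
# pruning this trellis using bounds from coarser timescales; this sketch
# shows only the baseline recursion.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # delta[s] = probability of the best path ending in state s
    delta = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] * trans_p[p][s])
            delta[s] = prev[best] * trans_p[best][s] * emit_p[s][o]
            ptr[s] = best
        back.append(ptr)
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):  # backtrack through the stored pointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

path = viterbi(["a", "b", "a"], ["x", "y"],
               {"x": 0.6, "y": 0.4},
               {"x": {"x": 0.7, "y": 0.3}, "y": {"x": 0.4, "y": 0.6}},
               {"x": {"a": 0.9, "b": 0.1}, "y": {"a": 0.2, "b": 0.8}})
# path == ["x", "y", "x"]
```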
1202.3709
Arthur Choi
Arthur Choi, Khaled S. Refaat, Adnan Darwiche
EDML: A Method for Learning Parameters in Bayesian Networks
null
null
null
UAI-P-2011-PG-115-124
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method called EDML for learning MAP parameters in binary Bayesian networks under incomplete data. The method assumes Beta priors and can be used to learn maximum likelihood parameters when the priors are uninformative. EDML exhibits interesting behaviors, especially when compared to EM. We introduce EDML, explain its origin, and study some of its properties both analytically and empirically.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Choi", "Arthur", "" ], [ "Refaat", "Khaled S.", "" ], [ "Darwiche", "Adnan", "" ] ]
1202.3711
Tom Claassen
Tom Claassen, Tom Heskes
A Logical Characterization of Constraint-Based Causal Discovery
null
null
null
UAI-P-2011-PG-135-144
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach to constraint-based causal discovery, that takes the form of straightforward logical inference, applied to a list of simple, logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, as the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Claassen", "Tom", "" ], [ "Heskes", "Tom", "" ] ]
1202.3713
James Cussens
James Cussens
Bayesian network learning with cutting planes
null
null
null
UAI-P-2011-PG-153-160
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of learning the structure of Bayesian networks from complete discrete data with a limit on parent set size is considered. Learning is cast explicitly as an optimisation problem where the goal is to find a BN structure which maximises log marginal likelihood (BDe score). Integer programming, specifically the SCIP framework, is used to solve this optimisation problem. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Finding good cutting planes is the key to the success of the approach; the search for such cutting planes is effected using a sub-IP. Results show that this is a particularly fast method for exact BN learning.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Cussens", "James", "" ] ]
1202.3718
Helene Fargier
Helene Fargier, Nahla Ben Amor, Wided Guezguez
On the Complexity of Decision Making in Possibilistic Decision Trees
null
null
null
UAI-P-2011-PG-203-210
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When the information about uncertainty cannot be quantified in a simple, probabilistic way, possibilistic decision theory is often a natural framework to consider. Its development has led to a series of possibilistic criteria, e.g. pessimistic possibilistic qualitative utility, possibilistic likely dominance, binary possibilistic utility, and possibilistic Choquet integrals. This paper focuses on sequential decision making in possibilistic decision trees. It provides a complexity study of the problem of finding an optimal strategy, depending on whether the optimization criterion satisfies the monotonicity property that allows dynamic programming to be applied, which offers a polytime reduction of the decision problem. It also shows that possibilistic Choquet integrals do not satisfy this property, and that in this case the optimization problem is NP-hard.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Fargier", "Helene", "" ], [ "Amor", "Nahla Ben", "" ], [ "Guezguez", "Wided", "" ] ]
1202.3719
Daan Fierens
Daan Fierens, Guy Van den Broeck, Ingo Thon, Bernd Gutmann, Luc De Raedt
Inference in Probabilistic Logic Programs using Weighted CNF's
null
null
null
UAI-P-2011-PG-211-220
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the probabilistic logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting. To solve such tasks, we employ state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state-of-the-art in probabilistic logic programming.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Fierens", "Daan", "" ], [ "Broeck", "Guy Van den", "" ], [ "Thon", "Ingo", "" ], [ "Gutmann", "Bernd", "" ], [ "De Raedt", "Luc", "" ] ]
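The weighted model counting task that the abstract above reduces inference to can be sketched by brute force (the weights and formula are illustrative; real solvers avoid this exponential enumeration):

```python
# Brute-force weighted model counting (WMC): sum, over all satisfying
# assignments of a formula, the product of per-variable weights. This is
# the target task of the paper's conversion, shown here naively.
from itertools import product

def wmc(variables, weight, formula):
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            w = 1.0
            for v, val in assignment.items():
                w *= weight[v] if val else (1.0 - weight[v])
            total += w
    return total

# P(a or b) with independent "probabilistic facts" a and b.
p = wmc(["a", "b"], {"a": 0.3, "b": 0.5}, lambda m: m["a"] or m["b"])
# p == 1 - 0.7 * 0.5 == 0.65
```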
1202.3721
Phan H. Giang
Phan H. Giang
Dynamic consistency and decision making under vacuous belief
null
null
null
UAI-P-2011-PG-230-237
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ideas about decision making under ignorance in economics are combined with the ideas about uncertainty representation in computer science. The combination sheds new light on the question of how artificial agents can act in a dynamically consistent manner. The notion of sequential consistency is formalized by adapting the law of iterated expectation for plausibility measures. The necessary and sufficient condition for a certainty equivalence operator for Nehring-Puppe's preference to be sequentially consistent is given. This result clarifies models of decision making under uncertainty.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Giang", "Phan H.", "" ] ]
1202.3723
Vibhav Gogate
Vibhav Gogate, Pedro Domingos
Approximation by Quantization
null
null
null
UAI-P-2011-PG-247-255
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inference in graphical models consists of repeatedly multiplying and summing out potentials. It is generally intractable because the derived potentials obtained in this way can be exponentially large. Approximate inference techniques such as belief propagation and variational methods combat this by simplifying the derived potentials, typically by dropping variables from them. We propose an alternate method for simplifying potentials: quantizing their values. Quantization causes different states of a potential to have the same value, and therefore introduces context-specific independencies that can be exploited to represent the potential more compactly. We use algebraic decision diagrams (ADDs) to do this efficiently. We apply quantization and ADD reduction to variable elimination and junction tree propagation, yielding a family of bounded approximate inference schemes. Our experimental tests show that our new schemes significantly outperform state-of-the-art approaches on many benchmark instances.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Gogate", "Vibhav", "" ], [ "Domingos", "Pedro", "" ] ]
1202.3724
Vibhav Gogate
Vibhav Gogate, Pedro Domingos
Probabilistic Theorem Proving
null
null
null
UAI-P-2011-PG-256-265
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, the generalization of both, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Gogate", "Vibhav", "" ], [ "Domingos", "Pedro", "" ] ]
1202.3728
Hannaneh Hajishirzi
Hannaneh Hajishirzi, Julia Hockenmaier, Erik T. Mueller, Eyal Amir
Reasoning about RoboCup Soccer Narratives
null
null
null
UAI-P-2011-PG-291-300
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an approach for learning to translate simple narratives, i.e., texts (sequences of sentences) describing dynamic systems, into coherent sequences of events without the need for labeled training data. Our approach incorporates domain knowledge in the form of preconditions and effects of events, and we show that it outperforms state-of-the-art supervised learning systems on the task of reconstructing RoboCup soccer games from their commentaries.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Hajishirzi", "Hannaneh", "" ], [ "Hockenmaier", "Julia", "" ], [ "Mueller", "Erik T.", "" ], [ "Amir", "Eyal", "" ] ]
1202.3729
Eric A. Hansen
Eric A. Hansen
Suboptimality Bounds for Stochastic Shortest Path Problems
null
null
null
UAI-P-2011-PG-301-310
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider how to use the Bellman residual of the dynamic programming operator to compute suboptimality bounds for solutions to stochastic shortest path problems. Such bounds have been previously established only in the special case that "all policies are proper," in which case the dynamic programming operator is known to be a contraction, and have been shown to be easily computable only in the more limited special case of discounting. Under the condition that transition costs are positive, we show that suboptimality bounds can be easily computed even when not all policies are proper. In the general case when there are no restrictions on transition costs, the analysis is more complex. But we present preliminary results that show such bounds are possible.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Hansen", "Eric A.", "" ] ]
1202.3740
Minyi Li
Minyi Li, Quoc Bao Vo, Ryszard Kowalczyk
An Efficient Protocol for Negotiation over Combinatorial Domains with Incomplete Information
null
null
null
UAI-P-2011-PG-436-444
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of agent-based negotiation in combinatorial domains. It is difficult to reach optimal agreements in bilateral or multi-lateral negotiations when the agents' preferences for the possible alternatives are not common knowledge. Self-interested agents often end up negotiating inefficient agreements in such situations. In this paper, we present a protocol for negotiation in combinatorial domains which can lead rational agents to reach optimal agreements in an incomplete information setting. Our proposed protocol enables the negotiating agents to identify efficient solutions using distributed search that visits only a small subspace of the whole outcome space. Moreover, the proposed protocol is sufficiently general that it is applicable to most preference representation models in combinatorial domains. We also present results of experiments that demonstrate the feasibility and computational efficiency of our approach.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Li", "Minyi", "" ], [ "Vo", "Quoc Bao", "" ], [ "Kowalczyk", "Ryszard", "" ] ]
1202.3741
Shiau Hong Lim
Shiau Hong Lim, Peter Auer
Noisy Search with Comparative Feedback
null
null
null
UAI-P-2011-PG-445-452
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present theoretical results in terms of lower and upper bounds on the query complexity of noisy search with comparative feedback. In this search model, the noise in the feedback depends on the distance between query points and the search target. Consequently, the error probability in the feedback is not fixed but varies for the queries posed by the search algorithm. Our results show that a target out of n items can be found in O(log n) queries. We also show the surprising result that for k possible answers per query, the speedup is not log k (as for k-ary search) but only log log k in some cases.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Lim", "Shiau Hong", "" ], [ "Auer", "Peter", "" ] ]
1202.3743
Jianbing Ma
Jianbing Ma, Weiru Liu, Paul Miller
Belief change with noisy sensing in the situation calculus
null
null
null
UAI-P-2011-PG-471-478
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Situation calculus has been applied widely in artificial intelligence to model and reason about actions and changes in dynamic systems. Since actions carried out by agents will cause constant changes of the agents' beliefs, how to manage these changes is a very important issue. Shapiro et al. [22] is one of the studies that considered this issue. However, in this framework, the problem of noisy sensing, which is often present in real-world applications, is not considered. As a consequence, noisy sensing actions in this framework will lead to an agent facing an inconsistent situation, and subsequently the agent cannot proceed further. In this paper, we investigate how noisy sensing actions can be handled in iterated belief change within the situation calculus formalism. We extend the framework proposed in [22] with the capability of managing noisy sensings. We demonstrate that an agent can still detect the actual situation when the ratio of noisy sensing actions vs. accurate sensing actions is limited. We prove that our framework subsumes the iterated belief change strategy in [22] when all sensing actions are accurate. Furthermore, we prove that our framework can adequately handle belief introspection, mistaken beliefs, belief revision and belief update even with noisy sensing, as done in [22] with accurate sensing actions only.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Ma", "Jianbing", "" ], [ "Liu", "Weiru", "" ], [ "Miller", "Paul", "" ] ]
1202.3744
Brandon Malone
Brandon Malone, Changhe Yuan, Eric A. Hansen, Susan Bridges
Improving the Scalability of Optimal Bayesian Network Learning with External-Memory Frontier Breadth-First Branch and Bound Search
null
null
null
UAI-P-2011-PG-479-488
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous work has shown that the problem of learning the optimal structure of a Bayesian network can be formulated as a shortest path finding problem in a graph and solved using A* search. In this paper, we improve the scalability of this approach by developing a memory-efficient heuristic search algorithm for learning the structure of a Bayesian network. Instead of using A*, we propose a frontier breadth-first branch and bound search that leverages the layered structure of the search graph of this problem so that no more than two layers of the graph, plus solution reconstruction information, need to be stored in memory at a time. To further improve scalability, the algorithm stores most of the graph in external memory, such as hard disk, when it does not fit in RAM. Experimental results show that the resulting algorithm solves significantly larger problems than the current state of the art.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Malone", "Brandon", "" ], [ "Yuan", "Changhe", "" ], [ "Hansen", "Eric A.", "" ], [ "Bridges", "Susan", "" ] ]
1202.3745
Radu Marinescu
Radu Marinescu, Nic Wilson
Order-of-Magnitude Influence Diagrams
null
null
null
UAI-P-2011-PG-489-496
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a qualitative theory of influence diagrams that can be used to model and solve sequential decision making tasks when only qualitative (or imprecise) information is available. Our approach is based on an order-of-magnitude approximation of both probabilities and utilities and allows for specifying partially ordered preferences via sets of utility values. We also propose a dedicated variable elimination algorithm that can be applied for solving order-of-magnitude influence diagrams.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Marinescu", "Radu", "" ], [ "Wilson", "Nic", "" ] ]
1202.3749
Hala Mostafa
Hala Mostafa, Victor Lesser
Compact Mathematical Programs For DEC-MDPs With Structured Agent Interactions
null
null
null
UAI-P-2011-PG-523-530
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To deal with the prohibitive complexity of calculating policies in Decentralized MDPs, researchers have proposed models that exploit structured agent interactions. Settings where most agent actions are independent except for few actions that affect the transitions and/or rewards of other agents can be modeled using Event-Driven Interactions with Complex Rewards (EDI-CR). Finding the optimal joint policy can be formulated as an optimization problem. However, existing formulations are too verbose and/or lack optimality guarantees. We propose a compact Mixed Integer Linear Program formulation of EDI-CR instances. The key insight is that most action sequences of a group of agents have the same effect on a given agent. This allows us to treat these sequences similarly and use fewer variables. Experiments show that our formulation is more compact and leads to faster solution times and better solutions than existing formulations.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Mostafa", "Hala", "" ], [ "Lesser", "Victor", "" ] ]
1202.3754
Eunsoo Oh
Eunsoo Oh, Kee-Eung Kim
A Geometric Traversal Algorithm for Reward-Uncertain MDPs
null
null
null
UAI-P-2011-PG-565-572
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markov decision processes (MDPs) are widely used in modeling decision making problems in stochastic environments. However, precise specification of the reward functions in MDPs is often very difficult. Recent approaches have focused on computing an optimal policy based on the minimax regret criterion for obtaining a robust policy under uncertainty in the reward function. One of the core tasks in computing the minimax regret policy is to obtain the set of all policies that can be optimal for some candidate reward function. In this paper, we propose an efficient algorithm that exploits the geometric properties of the reward function associated with the policies. We also present an approximate version of the method for further speed up. We experimentally demonstrate that our algorithm improves the performance by orders of magnitude.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Oh", "Eunsoo", "" ], [ "Kim", "Kee-Eung", "" ] ]
1202.3759
Gungor Polatkan
Gungor Polatkan, Oncel Tuzel
Compressed Inference for Probabilistic Sequential Models
null
null
null
UAI-P-2011-PG-609-618
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hidden Markov models (HMMs) and conditional random fields (CRFs) are two popular techniques for modeling sequential data. Inference algorithms designed over CRFs and HMMs allow estimation of the state sequence given the observations. In several applications, estimation of the state sequence is not the end goal; instead the goal is to compute some function of it. In such scenarios, estimating the state sequence by conventional inference techniques, followed by computing the functional mapping from the estimate is not necessarily optimal. A more formal approach is to directly infer the final outcome from the observations. In particular, we consider the specific instantiation of the problem where the goal is to find the state trajectories without exact transition points and derive a novel polynomial time inference algorithm that outperforms vanilla inference techniques. We show that this particular problem arises commonly in many disparate applications and present experiments on three of them: (1) Toy robot tracking; (2) Single stroke character recognition; (3) Handwritten word recognition.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Polatkan", "Gungor", "" ], [ "Tuzel", "Oncel", "" ] ]
1202.3762
Scott Sanner
Scott Sanner, Karina Valdivia Delgado, Leliane Nunes de Barros
Symbolic Dynamic Programming for Discrete and Continuous State MDPs
null
null
null
UAI-P-2011-PG-643-652
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world decision-theoretic planning problems can be naturally modeled with discrete and continuous state Markov decision processes (DC-MDPs). While previous work has addressed automated decision-theoretic planning for DC-MDPs, optimal solutions have only been defined so far for limited settings, e.g., DC-MDPs having hyper-rectangular piecewise linear value functions. In this work, we extend symbolic dynamic programming (SDP) techniques to provide optimal solutions for a vastly expanded class of DC-MDPs. To address the inherent combinatorial aspects of SDP, we introduce the XADD - a continuous variable extension of the algebraic decision diagram (ADD) - that maintains compact representations of the exact value function. Empirically, we demonstrate an implementation of SDP with XADDs on various DC-MDPs, showing the first optimal automated solutions to DC-MDPs with linear and nonlinear piecewise partitioned value functions and showing the advantages of constraint-based pruning for XADDs.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Sanner", "Scott", "" ], [ "Delgado", "Karina Valdivia", "" ], [ "de Barros", "Leliane Nunes", "" ] ]
1202.3764
Johannes Textor
Johannes Textor, Maciej Liskiewicz
Adjustment Criteria in Causal Diagrams: An Algorithmic Perspective
null
null
null
UAI-P-2011-PG-681-688
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying and controlling bias is a key problem in empirical sciences. Causal diagram theory provides graphical criteria for deciding whether and how causal effects can be identified from observed (nonexperimental) data by covariate adjustment. Here we prove equivalences between existing as well as new criteria for adjustment and we provide a new simplified but still equivalent notion of d-separation. These lead to efficient algorithms for two important tasks in causal diagram analysis: (1) listing minimal covariate adjustments (with polynomial delay); and (2) identifying the subdiagram involved in biasing paths (in linear time). Our results improve upon existing exponential-time solutions for these problems, enabling users to assess the effects of covariate adjustment on diagrams with tens to hundreds of variables interactively in real time.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Textor", "Johannes", "" ], [ "Liskiewicz", "Maciej", "" ] ]
1202.3767
Joop van de Ven
Joop van de Ven, Fabio Ramos
Distributed Anytime MAP Inference
null
null
null
UAI-P-2011-PG-708-716
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a distributed anytime algorithm for performing MAP inference in graphical models. The problem is formulated as a linear programming relaxation over the edges of a graph. The resulting program has a constraint structure that allows application of the Dantzig-Wolfe decomposition principle. Subprograms are defined over individual edges and can be computed in a distributed manner. This accommodates solutions to graphs whose state space does not fit in memory. The decomposition master program is guaranteed to compute the optimal solution in a finite number of iterations, while the solution converges monotonically with each iteration. Formulating the MAP inference problem as a linear program allows additional (global) constraints to be defined; something not possible with message passing algorithms. Experimental results show that our algorithm's solution quality outperforms most current algorithms and it scales well to large problems.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "van de Ven", "Joop", "" ], [ "Ramos", "Fabio", "" ] ]
1202.3773
Haohai Yu
Haohai Yu, Robert A. van Engelen
Measuring the Hardness of Stochastic Sampling on Bayesian Networks with Deterministic Causalities: the k-Test
null
null
null
UAI-P-2011-PG-786-795
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, assuming the networks model strictly positive joint probability distributions, i.e. zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely-used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts when the rejection rate of stochastic sampling on a Bayesian network will be low, modest, or high, and when sampling is intractable.
[ { "version": "v1", "created": "Tue, 14 Feb 2012 16:41:17 GMT" } ]
1,329,696,000,000
[ [ "Yu", "Haohai", "" ], [ "van Engelen", "Robert A.", "" ] ]
1202.3887
Hamid Salimi
Hamid Salimi, Davar Giveki, Mohammad Ali Soltanshahi, Javad Hatami
Extended Mixture of MLP Experts by Hybrid of Conjugate Gradient Method and Modified Cuckoo Search
13 pages, 2 figures
International Journal of Artificial Intelligence & Applications (IJAIA), Vol.3, No.1, January 2012
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
This paper investigates a new method for improving the learning algorithm of the Mixture of Experts (ME) model using a hybrid of Modified Cuckoo Search (MCS) and Conjugate Gradient (CG) as a second-order optimization technique. The CG technique is combined with the Back-Propagation (BP) algorithm to yield a much more efficient learning algorithm for the ME structure. In addition, the experts and gating networks in the enhanced model are replaced by CG-based Multi-Layer Perceptrons (MLPs) to provide faster and more accurate learning. The CG depends considerably on the initial connection weights of the Artificial Neural Network (ANN), so a metaheuristic algorithm, the so-called Modified Cuckoo Search, is applied in order to select the optimal weights. The performance of the proposed method is compared with Gradient Descent Based ME (GDME) and Conjugate Gradient Based ME (CGME) on classification and regression problems. The experimental results show that the hybrid MCS and CG based ME (MCS-CGME) has faster convergence and better performance on the utilized benchmark data sets.
[ { "version": "v1", "created": "Fri, 17 Feb 2012 11:49:56 GMT" } ]
1,329,696,000,000
[ [ "Salimi", "Hamid", "" ], [ "Giveki", "Davar", "" ], [ "Soltanshahi", "Mohammad Ali", "" ], [ "Hatami", "Javad", "" ] ]
1202.4190
Feng Lin
Feng Lin, Robert C. Qiu, Zhen Hu, Shujie Hou, James P. Browning, Michael C. Wicks
Generalized FMD Detection for Spectrum Sensing Under Low Signal-to-Noise Ratio
4 pages, 1 figure, 1 table
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
Spectrum sensing is a fundamental problem in cognitive radio. We propose a detection algorithm for spectrum sensing in cognitive radio networks based on a function of the covariance matrix. The monotonically increasing property of functions of a matrix involving the trace operation is utilized as the cornerstone of this algorithm. The advantage of the proposed algorithm is that it works under extremely low signal-to-noise ratios, e.g. lower than -30 dB, with limited sample data. A theoretical analysis of threshold setting for the algorithm is discussed. A performance comparison between the proposed algorithm and other state-of-the-art methods is provided by simulation on a captured digital television (DTV) signal.
[ { "version": "v1", "created": "Sun, 19 Feb 2012 21:50:58 GMT" } ]
1,329,782,400,000
[ [ "Lin", "Feng", "" ], [ "Qiu", "Robert C.", "" ], [ "Hu", "Zhen", "" ], [ "Hou", "Shujie", "" ], [ "Browning", "James P.", "" ], [ "Wicks", "Michael C.", "" ] ]
1202.6009
Josep Domingo-Ferrer
Josep Domingo-Ferrer
Marginality: a numerical mapping for enhanced treatment of nominal and hierarchical attributes
12 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of statistical disclosure control (SDC) of microdata, a.k.a. data anonymization or privacy-preserving data mining, is to publish data sets containing the answers of individual respondents in such a way that the respondents corresponding to the released records cannot be re-identified and the released data are analytically useful. SDC methods are either based on masking the original data, generating synthetic versions of them or creating hybrid versions by combining original and synthetic data. The choice of SDC methods for categorical data, especially nominal data, is much smaller than the choice of methods for numerical data. We mitigate this problem by introducing a numerical mapping for hierarchical nominal data which allows computing means, variances and covariances on them.
[ { "version": "v1", "created": "Mon, 27 Feb 2012 17:37:20 GMT" } ]
1,330,387,200,000
[ [ "Domingo-Ferrer", "Josep", "" ] ]
1202.6153
Marcus Hutter
Marcus Hutter
One Decade of Universal Artificial Intelligence
20 LaTeX pages
In Theoretical Foundations of Artificial General Intelligence, Vol.4 (2012) pages 67--88
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even providing the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
[ { "version": "v1", "created": "Tue, 28 Feb 2012 09:19:32 GMT" } ]
1,368,748,800,000
[ [ "Hutter", "Marcus", "" ] ]
1202.6386
Shiwali Mohan
Shiwali Mohan and John E. Laird
Relational Reinforcement Learning in Infinite Mario
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relational representations in reinforcement learning allow for the use of structural information like the presence of objects and relationships between them in the description of value functions. Through this paper, we show that such representations allow for the inclusion of background knowledge that qualitatively describes a state and can be used to design agents that demonstrate learning behavior in domains with large state and action spaces such as computer games.
[ { "version": "v1", "created": "Tue, 28 Feb 2012 21:36:22 GMT" } ]
1,330,560,000,000
[ [ "Mohan", "Shiwali", "" ], [ "Laird", "John E.", "" ] ]
1203.1021
Ahmed Maalel
Ahmed Maalel, Habib Hadj mabrouk, Lassad Mejri and Henda Hajjami Ben Ghezela
Development of an Ontology to Assist the Modeling of Accident Scenarii "Application on Railroad Transport "
7 pages, 9 figures, Journal of Computing (ISSN 2151-9617); Journal of Computing, Volume 3, Issue 7, July 2011
J. of Computing. 3. 7. (2011) 1-8
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a world where communication and information sharing are at the heart of our business, terminology needs are most pressing. It has become imperative to identify the terms used and define them in a consensual and coherent way while preserving linguistic diversity. To streamline and strengthen the process of acquisition, representation and exploitation of scenarii of train accidents, it is necessary to harmonize and standardize the terminology used by players in the security field. The research aims to significantly improve analytical activities and operations of the various safety studies, by tracking errors in systems, hardware, software and human factors. This paper presents the contribution of ontology to modeling scenarii for rail accidents through a knowledge model based on a generic ontology and a domain ontology. After a detailed presentation of the state of the art, this article presents the first results of the developed model.
[ { "version": "v1", "created": "Mon, 5 Mar 2012 19:45:43 GMT" } ]
1,331,078,400,000
[ [ "Maalel", "Ahmed", "" ], [ "mabrouk", "Habib Hadj", "" ], [ "Mejri", "Lassad", "" ], [ "Ghezela", "Henda Hajjami Ben", "" ] ]
1203.1095
Guido Tack
Tom Schrijvers, Guido Tack, Pieter Wuille, Horst Samulowitz, Peter J. Stuckey
Search Combinators
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver. The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver.
[ { "version": "v1", "created": "Tue, 6 Mar 2012 03:59:34 GMT" } ]
1,331,078,400,000
[ [ "Schrijvers", "Tom", "" ], [ "Tack", "Guido", "" ], [ "Wuille", "Pieter", "" ], [ "Samulowitz", "Horst", "" ], [ "Stuckey", "Peter J.", "" ] ]
1203.1882
Ganti Meenakshi
G.Meenakshi
Multi source feedback based performance appraisal system using Fuzzy logic decision support system
16 pages
null
10.5121/ijsc.2012.3108
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
In Multi-Source Feedback, or 360-Degree Feedback, data on the performance of an individual are collected systematically from a number of stakeholders and are used for improving performance. The 360-Degree Feedback approach provides a consistent management philosophy meeting the criterion outlined previously. The 360-degree feedback appraisal process describes a human resource methodology that is frequently used for both employee appraisal and employee development. Used in employee performance appraisals, the 360-degree feedback methodology is differentiated from traditional, top-down appraisal methods in which the supervisor responsible for the appraisal provides the majority of the data. Instead it seeks to use information gained from other sources to provide a fuller picture of employees' performances. Similarly, when this technique is used in employee development it augments employees' perceptions of training needs with those of the people with whom they interact. The 360-degree feedback based appraisal is a comprehensive method in which the feedback about the employee comes from all the sources that come into contact with the employee on his/her job. The respondents for an employee can be her/his peers, managers, subordinates, team members, customers, suppliers and vendors: anyone who comes into contact with the employee. The 360-degree appraisal has four components that include self-appraisal, superior's appraisal, subordinate's appraisal, student's appraisal and peer's appraisal. The proposed system is an attempt to implement the 360-degree feedback based appraisal system in academics, especially engineering colleges.
[ { "version": "v1", "created": "Thu, 8 Mar 2012 18:44:46 GMT" } ]
1,331,251,200,000
[ [ "Meenakshi", "G.", "" ] ]
1203.3051
Nina Narodytska
Nina Narodytska, Toby Walsh and Lirong Xia
Combining Voting Rules Together
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple method for combining together voting rules that performs a run-off between the different winners of each voting rule. We prove that this combinator has several good properties. For instance, even if just one of the base voting rules has a desirable property like Condorcet consistency, the combination inherits this property. In addition, we prove that combining voting rules together in this way can make finding a manipulation more computationally difficult. Finally, we study the impact of this combinator on approximation methods that find close to optimal manipulations.
[ { "version": "v1", "created": "Wed, 14 Mar 2012 11:27:15 GMT" } ]
1,331,769,600,000
[ [ "Narodytska", "Nina", "" ], [ "Walsh", "Toby", "" ], [ "Xia", "Lirong", "" ] ]
1203.3464
Nimar S. Arora
Nimar S. Arora, Rodrigo de Salvo Braz, Erik B. Sudderth, Stuart Russell
Gibbs Sampling in Open-Universe Stochastic Languages
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-30-39
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Languages for open-universe probabilistic models (OUPMs) can represent situations with an unknown number of objects and identity uncertainty. While such cases arise in a wide range of important real-world applications, existing general purpose inference methods for OUPMs are far less efficient than those available for more restricted languages and model classes. This paper goes some way to remedying this deficit by introducing, and proving correct, a generalization of Gibbs sampling to partial worlds with possibly varying model structure. Our approach draws on and extends previous generic OUPM inference methods, as well as auxiliary variable samplers for nonparametric mixture models. It has been implemented for BLOG, a well-known OUPM language. Combined with compile-time optimizations, the resulting algorithm yields very substantial speedups over existing methods on several test cases, and substantially improves the practicality of OUPM languages generally.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Arora", "Nimar S.", "" ], [ "Braz", "Rodrigo de Salvo", "" ], [ "Sudderth", "Erik B.", "" ], [ "Russell", "Stuart", "" ] ]
1203.3465
Raouia Ayachi
Raouia Ayachi, Nahla Ben Amor, Salem Benferhat, Rolf Haenni
Compiling Possibilistic Networks: Alternative Approaches to Possibilistic Inference
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-40-47
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Qualitative possibilistic networks, also known as min-based possibilistic networks, are important tools for handling uncertain information in the possibility theory framework. Despite their importance, only the junction tree adaptation has been proposed for exact reasoning with such networks. This paper explores alternative algorithms using compilation techniques. We first propose possibilistic adaptations of standard compilation-based probabilistic methods. Then, we develop a new, purely possibilistic, method based on the transformation of the initial network into a possibilistic base. A comparative study shows that this latter performs better than the possibilistic adaptations of probabilistic methods. This result is also confirmed by experimental results.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Ayachi", "Raouia", "" ], [ "Amor", "Nahla Ben", "" ], [ "Benferhat", "Salem", "" ], [ "Haenni", "Rolf", "" ] ]
1203.3466
Kim Bauters
Kim Bauters, Steven Schockaert, Martine De Cock, Dirk Vermeir
Possibilistic Answer Set Programming Revisited
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-48-55
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic answer set programming (PASP) extends answer set programming (ASP) by attaching to each rule a degree of certainty. While such an extension is important from an application point of view, existing semantics are not well-motivated, and do not always yield intuitive results. To develop a more suitable semantics, we first introduce a characterization of answer sets of classical ASP programs in terms of possibilistic logic where an ASP program specifies a set of constraints on possibility distributions. This characterization is then naturally generalized to define answer sets of PASP programs. We furthermore provide a syntactic counterpart, leading to a possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we show how our framework can readily be implemented using standard ASP solvers.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Bauters", "Kim", "" ], [ "Schockaert", "Steven", "" ], [ "De Cock", "Martine", "" ], [ "Vermeir", "Dirk", "" ] ]
1203.3467
Debarun Bhattacharjya
Debarun Bhattacharjya, Ross D. Shachter
Three new sensitivity analysis methods for influence diagrams
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-56-64
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performing sensitivity analysis for influence diagrams using the decision circuit framework is particularly convenient, since the partial derivatives with respect to every parameter are readily available [Bhattacharjya and Shachter, 2007; 2008]. In this paper we present three non-linear sensitivity analysis methods that utilize this partial derivative information and therefore do not require re-evaluating the decision situation multiple times. Specifically, we show how to efficiently compare strategies in decision situations, perform sensitivity to risk aversion and compute the value of perfect hedging [Seyller, 2008].
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Bhattacharjya", "Debarun", "" ], [ "Shachter", "Ross D.", "" ] ]
1203.3469
Matthias Brocheler
Matthias Brocheler, Lilyana Mihalkova, Lise Getoor
Probabilistic Similarity Logic
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-73-82
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many machine learning applications require the ability to learn from and reason about noisy multi-relational data. To address this, several effective representations have been developed that provide both a language for expressing the structural regularities of a domain, and principled support for probabilistic inference. In addition to these two aspects, however, many applications also involve a third aspect-the need to reason about similarities-which has not been directly supported in existing frameworks. This paper introduces probabilistic similarity logic (PSL), a general-purpose framework for joint reasoning about similarity in relational domains that incorporates probabilistic reasoning about similarities and relational structure in a principled way. PSL can integrate any existing domain-specific similarity measures and also supports reasoning about similarities between sets of entities. We provide efficient inference and learning techniques for PSL and demonstrate its effectiveness both in common relational tasks and in settings that require reasoning about similarity.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Brocheler", "Matthias", "" ], [ "Mihalkova", "Lilyana", "" ], [ "Getoor", "Lise", "" ] ]
1203.3470
Alan S. Carlin
Alan S. Carlin, Nathan Schurr, Janusz Marecki
ALARMS: Alerting and Reasoning Management System for Next Generation Aircraft Hazards
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-93-100
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit. With the introduction of such systems, the responsibilities of the pilot are expected to dramatically increase. In the ALARMS (ALerting And Reasoning Management System) project for NASA, we focus on a key challenge of this environment, the quick and efficient handling of aircraft sensor alerts. It is infeasible to alert the pilot on the state of all subsystems at all times. Furthermore, there is uncertainty as to the true hazard state despite the evidence of the alerts, and there is uncertainty as to the effect and duration of actions taken to address these alerts. This paper reports on the first steps in the construction of an application designed to handle Next Generation alerts. In ALARMS, we have identified 60 different aircraft subsystems and 20 different underlying hazards. In this paper, we show how a Bayesian network can be used to derive the state of the underlying hazards, based on the sensor input. Then, we propose a framework whereby an automated system can plan to address these hazards in cooperation with the pilot, using a Time-Dependent Markov Process (TMDP). Different hazards and pilot states will call for different alerting automation plans. We demonstrate this emerging application of Bayesian networks and TMDPs to cockpit automation, for a use case where a small number of hazards are present, and analyze the resulting alerting automation policies.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Carlin", "Alan S.", "" ], [ "Schurr", "Nathan", "" ], [ "Marecki", "Janusz", "" ] ]
1203.3473
Jaesik Choi
Jaesik Choi, Eyal Amir, David J. Hill
Lifted Inference for Relational Continuous Models
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-126-134
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relational Continuous Models (RCMs) represent joint probability densities over attributes of objects, when the attributes have continuous domains. With relational representations, they can model joint probability distributions over large numbers of variables compactly in a natural way. This paper presents a new exact lifted inference algorithm for RCMs, which thus scales up to large models of real world applications. The algorithm applies to Relational Pairwise Models which are (relational) products of potentials of arity 2. Our algorithm is unique in two ways. First, it substantially improves the efficiency of lifted inference with variables of continuous domains. When a relational model has Gaussian potentials, it takes only linear time compared to the cubic time of previous methods. Second, it is the first exact inference algorithm which handles RCMs in a lifted way. The algorithm is illustrated over an example from econometrics. Experimental results show that our algorithm outperforms both a ground-level inference algorithm and an algorithm built with previously-known lifted methods.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Choi", "Jaesik", "" ], [ "Amir", "Eyal", "" ], [ "Hill", "David J.", "" ] ]
1203.3474
Gabriel Corona
Gabriel Corona, Francois Charpillet
Distribution over Beliefs for Memory Bounded Dec-POMDP Planning
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-135-142
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new point-based method for approximate planning in Dec-POMDP which outperforms the state-of-the-art approaches in terms of solution quality. It uses a heuristic estimation of the prior probability of beliefs to choose a bounded number of policy trees: this choice is formulated as a combinatorial optimisation problem minimising the error induced by pruning.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Corona", "Gabriel", "" ], [ "Charpillet", "Francois", "" ] ]
1203.3477
Tom Erez
Tom Erez, William D. Smart
A Scalable Method for Solving High-Dimensional Continuous POMDPs Using Local Approximation
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-160-167
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partially-Observable Markov Decision Processes (POMDPs) are typically solved by finding an approximate global solution to a corresponding belief-MDP. In this paper, we offer a new planning algorithm for POMDPs with continuous state, action and observation spaces. Since such domains have an inherent notion of locality, we can find an approximate solution using local optimization methods. We parameterize the belief distribution as a Gaussian mixture, and use the Extended Kalman Filter (EKF) to approximate the belief update. Since the EKF is a first-order filter, we can marginalize over the observations analytically. By using feedback control and state estimation during policy execution, we recover a behavior that is effectively conditioned on incoming observations despite the unconditioned planning. Local optimization provides no guarantees of global optimality, but it allows us to tackle domains that are at least an order of magnitude larger than the current state-of-the-art. We demonstrate the scalability of our algorithm by considering a simulated hand-eye coordination domain with 16 continuous state dimensions and 6 continuous action dimensions.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Erez", "Tom", "" ], [ "Smart", "William D.", "" ] ]
1203.3482
Vibhav Gogate
Vibhav Gogate, Pedro Domingos
Formula-Based Probabilistic Inference
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-210-219
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing the probability of a formula given the probabilities or weights associated with other formulas is a natural extension of logical inference to the probabilistic setting. Surprisingly, this problem has received little attention in the literature to date, particularly considering that it includes many standard inference problems as special cases. In this paper, we propose two algorithms for this problem: formula decomposition and conditioning, which is an exact method, and formula importance sampling, which is an approximate method. The latter is, to our knowledge, the first application of model counting to approximate probabilistic inference. Unlike conventional variable-based algorithms, our algorithms work in the dual realm of logical formulas. Theoretically, we show that our algorithms can greatly improve efficiency by exploiting the structural information in the formulas. Empirically, we show that they are indeed quite powerful, often achieving substantial performance gains over state-of-the-art schemes.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Gogate", "Vibhav", "" ], [ "Domingos", "Pedro", "" ] ]
1203.3490
Akshat Kumar
Akshat Kumar, Shlomo Zilberstein
Anytime Planning for Decentralized POMDPs using Expectation Maximization
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-294-301
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized POMDPs provide an expressive framework for multi-agent sequential decision making. While finite-horizon DEC-POMDPs have enjoyed significant success, progress remains slow for the infinite-horizon case mainly due to the inherent complexity of optimizing stochastic controllers representing agent policies. We present a promising new class of algorithms for the infinite-horizon case, which recasts the optimization problem as inference in a mixture of DBNs. An attractive feature of this approach is the straightforward adoption of existing inference techniques in DBNs for solving DEC-POMDPs and supporting richer representations such as factored or continuous states and actions. We also derive the Expectation Maximization (EM) algorithm to optimize the joint policy represented as DBNs. Experiments on benchmark domains show that EM compares favorably against the state-of-the-art solvers.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Kumar", "Akshat", "" ], [ "Zilberstein", "Shlomo", "" ] ]
1203.3493
Yijing Li
Yijing Li, Prakash P. Shenoy
Solving Hybrid Influence Diagrams with Deterministic Variables
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-322-331
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a framework and an algorithm for solving hybrid influence diagrams with discrete, continuous, and deterministic chance variables, and discrete and continuous decision variables. A continuous chance variable in an influence diagram is said to be deterministic if its conditional distributions have zero variances. The solution algorithm is an extension of Shenoy's fusion algorithm for discrete influence diagrams. We describe an extended Shenoy-Shafer architecture for propagation of discrete, continuous, and utility potentials in hybrid influence diagrams that include deterministic chance variables. The algorithm and framework are illustrated by solving two small examples.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Li", "Yijing", "" ], [ "Shenoy", "Prakash P.", "" ] ]
1203.3499
Mathias Niepert
Mathias Niepert
A Delayed Column Generation Strategy for Exact k-Bounded MAP Inference in Markov Logic Networks
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-384-391
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper introduces k-bounded MAP inference, a parameterization of MAP inference in Markov logic networks. k-Bounded MAP states are MAP states with at most k active ground atoms of hidden (non-evidence) predicates. We present a novel delayed column generation algorithm and provide empirical evidence that the algorithm efficiently computes k-bounded MAP states for meaningful real-world graph matching problems. The underlying idea is that, instead of solving one large optimization problem, it is often more efficient to tackle several small ones.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Niepert", "Mathias", "" ] ]
1203.3500
Farheen Omar
Farheen Omar, Mathieu Sinn, Jakub Truszkowski, Pascal Poupart, James Tung, Allen Caine
Comparative Analysis of Probabilistic Models for Activity Recognition with an Instrumented Walker
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-392-400
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rollating walkers are popular mobility aids used by older adults to improve balance control. There is a need to automatically recognize the activities performed by walker users to better understand activity patterns, mobility issues and the context in which falls are more likely to happen. We design and compare several techniques to recognize walker related activities. A comprehensive evaluation with control subjects and walker users from a retirement community is presented.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Omar", "Farheen", "" ], [ "Sinn", "Mathieu", "" ], [ "Truszkowski", "Jakub", "" ], [ "Poupart", "Pascal", "" ], [ "Tung", "James", "" ], [ "Caine", "Allen", "" ] ]
1203.3508
Guilin Qi
Guilin Qi, Jianfeng Du, Weiru Liu, David A. Bell
Merging Knowledge Bases in Possibilistic Logic by Lexicographic Aggregation
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-458-465
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Qi", "Guilin", "" ], [ "Du", "Jianfeng", "" ], [ "Liu", "Weiru", "" ], [ "Bell", "David A.", "" ] ]
1203.3509
Erik Quaeghebeur
Erik Quaeghebeur
Characterizing the Set of Coherent Lower Previsions with a Finite Number of Constraints or Vertices
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-466-473
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The standard coherence criterion for lower previsions is expressed using an infinite number of linear constraints. For lower previsions that are essentially defined on some finite set of gambles on a finite possibility space, we present a reformulation of this criterion that only uses a finite number of constraints. Any such lower prevision is coherent if it lies within the convex polytope defined by these constraints. The vertices of this polytope are the extreme coherent lower previsions for the given set of gambles. Our reformulation makes it possible to compute them. We show how this is done and illustrate the procedure and its results.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Quaeghebeur", "Erik", "" ] ]
1203.3513
Ross D. Shachter
Ross D. Shachter, Debarun Bhattacharjya
Dynamic programming in influence diagrams with decision circuits
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-509-516
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision circuits perform efficient evaluation of influence diagrams, building on the advances in arithmetic circuits for belief network inference [Darwiche, 2003; Bhattacharjya and Shachter, 2007]. We show how even more compact decision circuits can be constructed for dynamic programming in influence diagrams with separable value functions and conditionally independent subproblems. Once a decision circuit has been constructed based on the diagram's "global" graphical structure, it can be compiled to exploit "local" structure for efficient evaluation and sensitivity analysis.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Shachter", "Ross D.", "" ], [ "Bhattacharjya", "Debarun", "" ] ]
1203.3525
Mark Voortman
Mark Voortman, Denver Dash, Marek J. Druzdzel
Learning Why Things Change: The Difference-Based Causality Learner
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-641-650
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present the Difference-Based Causality Learner (DBCL), an algorithm for learning a class of discrete-time dynamic models that represents all causation across time by means of difference equations driving change in a system. We motivate this representation with real-world mechanical systems and prove DBCL's correctness for learning structure from time series data, an endeavour that is complicated by the existence of latent derivatives that have to be detected. We also prove that, under common assumptions for causal discovery, DBCL will identify the presence or absence of feedback loops, making the model more useful for predicting the effects of manipulating variables when the system is in equilibrium. We argue analytically and show empirically the advantages of DBCL over vector autoregression (VAR) and Granger causality models as well as modified forms of Bayesian and constraint-based structure discovery algorithms. Finally, we show that our algorithm can discover causal directions of alpha rhythms in human brains from EEG data.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Voortman", "Mark", "" ], [ "Dash", "Denver", "" ], [ "Druzdzel", "Marek J.", "" ] ]
1203.3528
Feng Wu
Feng Wu, Shlomo Zilberstein, Xiaoping Chen
Rollout Sampling Policy Iteration for Decentralized POMDPs
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-666-673
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present decentralized rollout sampling policy iteration (DecRSPI) - a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and tackle problems that lack an explicit model. The algorithm uses Monte-Carlo methods to generate a sample of reachable belief states. Then it computes a joint policy for each belief state based on the rollout estimations. A new policy representation allows us to represent solutions compactly. The key benefits of the algorithm are its linear time complexity over the number of agents, its bounded memory usage and good solution quality. It can solve larger problems that are intractable for existing planning algorithms. Experimental results confirm the effectiveness and scalability of the approach.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Wu", "Feng", "" ], [ "Zilberstein", "Shlomo", "" ], [ "Chen", "Xiaoping", "" ] ]
1203.3531
Changhe Yuan
Changhe Yuan, Xiaojian Wu, Eric A. Hansen
Solving Multistage Influence Diagrams using Branch-and-Bound Search
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-691-700
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A branch-and-bound approach to solving influence diagrams has been previously proposed in the literature, but appears to have never been implemented and evaluated - apparently due to the difficulties of computing effective bounds for the branch-and-bound search. In this paper, we describe how to efficiently compute effective bounds, and we develop a practical implementation of depth-first branch-and-bound search for influence diagram evaluation that outperforms existing methods for solving influence diagrams with multiple stages.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,115,200,000
[ [ "Yuan", "Changhe", "" ], [ "Wu", "Xiaojian", "" ], [ "Hansen", "Eric A.", "" ] ]
1203.3538
Emma Brunskill
Emma Brunskill, Stuart Russell
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-83-92
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the intractability of generic optimal partially observable Markov decision process planning, there exist important problems that have highly structured models. Previous researchers have used this insight to construct more efficient algorithms for factored domains, and for domains with topological structure in the flat state dynamics model. In our work, motivated by findings from the education community relevant to automated tutoring, we consider problems that exhibit a form of topological structure in the factored dynamics model. Our Reachable Anytime Planner for Imprecisely-sensed Domains (RAPID) leverages this structure to efficiently compute a good initial envelope of reachable states under the optimal MDP policy in time linear in the number of state variables. RAPID performs partially-observable planning over the limited envelope of states, and slowly expands the state space considered as time allows. RAPID performs well on a large tutoring-inspired problem simulation with 122 state variables, corresponding to a flat state space of over 10^30 states.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:25:52 GMT" } ]
1,332,115,200,000
[ [ "Brunskill", "Emma", "" ], [ "Russell", "Stuart", "" ] ]
1203.4011
Raghuram Ramanujan
Raghuram Ramanujan, Ashish Sabharwal, Bart Selman
Understanding Sampling Style Adversarial Search Methods
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-474-483
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
UCT has recently emerged as an exciting new adversarial reasoning technique based on cleverly balancing exploration and exploitation in a Monte-Carlo sampling setting. It has been particularly successful in the game of Go but the reasons for its success are not well understood and attempts to replicate its success in other domains such as Chess have failed. We provide an in-depth analysis of the potential of UCT in domain-independent settings, in cases where heuristic values are available, and the effect of enhancing random playouts to more informed playouts between two weak minimax players. To provide further insights, we develop synthetic game tree instances and discuss interesting properties of UCT, both empirically and analytically.
[ { "version": "v1", "created": "Thu, 15 Mar 2012 11:17:56 GMT" } ]
1,332,201,600,000
[ [ "Ramanujan", "Raghuram", "" ], [ "Sabharwal", "Ashish", "" ], [ "Selman", "Bart", "" ] ]
1203.4287
Muhammad Islam
Muhammad Asiful Islam, C. R. Ramakrishnan, I. V. Ramakrishnan
Parameter Learning in PRISM Programs with Continuous Random Variables
7 pages. Main contribution: Learning algorithm. Inference appears in http://arxiv.org/abs/1112.2681
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, De Raedt et al's ProbLog and Vennekens et al's LPAD, combines statistical and logical knowledge representation and inference. Inference in these languages is based on enumerative construction of proofs over logic programs. Consequently, these languages permit very limited use of random variables with continuous distributions. In this paper, we extend PRISM with Gaussian random variables and linear equality constraints, and consider the problem of parameter learning in the extended language. Many statistical models such as finite mixture models and Kalman filter can be encoded in extended PRISM. Our EM-based learning algorithm uses a symbolic inference procedure that represents sets of derivations without enumeration. This permits us to learn the distribution parameters of extended PRISM programs with discrete as well as Gaussian variables. The learning algorithm naturally generalizes the ones used for PRISM and Hybrid Bayesian Networks.
[ { "version": "v1", "created": "Mon, 19 Mar 2012 23:37:07 GMT" } ]
1,332,288,000,000
[ [ "Islam", "Muhammad Asiful", "" ], [ "Ramakrishnan", "C. R.", "" ], [ "Ramakrishnan", "I. V.", "" ] ]
1203.5452
Nesrine Yahia Ben
Nesrine Ben Yahia, Narj\`es Bellamine and Henda Ben Ghezala
Modeling of Mixed Decision Making Process
Keywords-collaborative knowledge management; mixed decision making; dynamicity of actors; UML-G
In Proceedings of IEEE International Conference on Information Technology and e-Services 2012, pp. 555-559 ISBN: 978-9938-9511-1-0
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
Decision making, whenever and wherever it happens, is key to organizations' success. In order to make correct decisions, individuals, teams and organizations need both knowledge management (to manage content) and collaboration (to manage group processes) to make the process more effective and efficient. In this paper, we explain the convergence of knowledge management and collaboration. Then, we propose a formal description of the mixed and multimodal decision making (MDM) process, where a decision may be made in three possible modes: individual, collective or hybrid. Finally, we make the MDM process explicit based on the UML-G profile.
[ { "version": "v1", "created": "Sat, 24 Mar 2012 22:18:36 GMT" } ]
1,332,806,400,000
[ [ "Yahia", "Nesrine Ben", "" ], [ "Bellamine", "Narjès", "" ], [ "Ghezala", "Henda Ben", "" ] ]
1203.5532
Bruno Scherrer
Bruno Scherrer (INRIA Lorraine - LORIA)
On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider infinite-horizon $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the algorithm Value Iteration and the sequence of policies $\pi_1,...,\pi_k$ it implicitly generates until some iteration $k$. We provide performance bounds for non-stationary policies involving the last $m$ generated policies that reduce the state-of-the-art bound for the last stationary policy $\pi_k$ by a factor $\frac{1-\gamma}{1-\gamma^m}$. In particular, the use of non-stationary policies makes it possible to reduce the usual asymptotic performance bounds of Value Iteration with errors bounded by $\epsilon$ at each iteration from $\frac{\gamma}{(1-\gamma)^2}\epsilon$ to $\frac{\gamma}{1-\gamma}\epsilon$, which is significant in the usual situation when $\gamma$ is close to 1. Given Bellman operators that can only be computed with some error $\epsilon$, a surprising consequence of this result is that the problem of "computing an approximately optimal non-stationary policy" is much simpler than that of "computing an approximately optimal stationary policy", and even slightly simpler than that of "approximately computing the value of some fixed policy", since this last problem only has a guarantee of $\frac{1}{1-\gamma}\epsilon$.
[ { "version": "v1", "created": "Sun, 25 Mar 2012 19:44:41 GMT" }, { "version": "v2", "created": "Fri, 30 Mar 2012 18:18:05 GMT" } ]
1,333,324,800,000
[ [ "Scherrer", "Bruno", "", "INRIA Lorraine - LORIA" ] ]
1203.6716
Gopalakrishnan Tr Nair
Dr T.R. Gopalakrishnan Nair, Meenakshi Malhotra
Creating Intelligent Linking for Information Threading in Knowledge Networks
5 Pages, 6 Figures, 2 Tables, India Conference (INDICON), 2011
India Conference (INDICON), 2011
10.1109/INDCON.2011.6139335
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Informledge System (ILS) is a knowledge network with autonomous nodes and intelligent links that integrate and structure the pieces of knowledge. In this paper, we aim to put forward the link dynamics involved in the intelligent processing of information in ILS. There have been advances in the knowledge management field that involve managing information in databases from a single domain. ILS works with information from multiple domains, stored in a distributed way in the autonomous nodes termed Knowledge Network Nodes (KNNs). Along with the concept under consideration, KNNs store the processed information, linking concepts and processors and leading to the appropriate processing of information.
[ { "version": "v1", "created": "Fri, 30 Mar 2012 05:18:06 GMT" } ]
1,333,324,800,000
[ [ "Nair", "Dr T. R. Gopalakrishnan", "" ], [ "Malhotra", "Meenakshi", "" ] ]
1204.0181
Youssef Bassil
Youssef Bassil
Expert PC Troubleshooter With Fuzzy-Logic And Self-Learning Support
LACSC - Lebanese Association for Computational Sciences, http://www.lacsc.org/; International Journal of Artificial Intelligence & Applications (IJAIA), Vol.3, No.2, March 2012
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expert systems use human knowledge, often stored as rules within the computer, to solve problems that would generally entail human intelligence. Today, with information systems becoming more pervasive and with the myriad advances in information technologies, automating computer fault diagnosis is becoming so fundamental that soon every enterprise will have to endorse it. This paper proposes an expert system called Expert PC Troubleshooter for diagnosing computer problems. The system is composed of a user interface, a rule-base, an inference engine, and an expert interface. Additionally, the system features a fuzzy-logic module to troubleshoot POST beep errors, and an intelligent agent that assists in the knowledge acquisition process. The proposed system is meant to automate the maintenance, repair, and operations (MRO) process, and free up human technicians from manually performing routine, laborious, and time-consuming maintenance tasks. As future work, the proposed system is to be parallelized so as to boost its performance and speed up its various operations.
[ { "version": "v1", "created": "Sun, 1 Apr 2012 09:08:21 GMT" } ]
1,333,411,200,000
[ [ "Bassil", "Youssef", "" ] ]
1204.0731
Olivier Bailleux
Olivier Bailleux
Unit contradiction versus unit propagation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some aspects of the result of applying unit resolution on a CNF formula can be formalized as functions with domain a set of partial truth assignments. We are interested in two ways for computing such functions, depending on whether the result is the production of the empty clause or the assignment of a variable with a given truth value. We show that these two models can compute the same functions with formulae of polynomially related sizes, and we explain how this result is related to the CNF encoding of Boolean constraints.
[ { "version": "v1", "created": "Tue, 3 Apr 2012 16:44:47 GMT" } ]
1,333,497,600,000
[ [ "Bailleux", "Olivier", "" ] ]
1204.1576
Sanjeev Jha
Sanjeev Kumar Jha
Development of knowledge Base Expert System for Natural treatment of Diabetes disease
null
International Journal of Advanced Computer Science and Applications(IJACSA)Volume 3 Issue 3 March 2012 Published
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of an expert system for the treatment of Diabetes by natural methods is a new information technology derived from Artificial Intelligence research, using the ESTA (Expert System Text Animation) system. The proposed expert system contains knowledge about various natural treatment methods (massage, herbal remedies/proper nutrition, acupuncture, gems) for Diabetes in human beings. The system is developed in ESTA (Expert System shell for Text Animation), which is a Visual Prolog 7.3 application. The knowledge for the said system is acquired from domain experts, texts and other related sources.
[ { "version": "v1", "created": "Fri, 6 Apr 2012 22:35:15 GMT" } ]
1,334,016,000,000
[ [ "Jha", "Sanjeev Kumar", "" ] ]
1204.1637
Mohamed Ali Mahjoub
Nabil ghanmy, Mohamed Ali Mahjoub, Najoua Essoukri Ben Amara
Characterization of Dynamic Bayesian Network
9 pages, (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, 2011
null
null
2156-5570
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this report, we are interested in Dynamic Bayesian Networks (DBNs) as a model that tries to incorporate the temporal dimension under uncertainty. We start with the basics of DBNs, where we especially focus on inference and learning concepts and algorithms. Then we present different levels and methods of creating DBNs, as well as approaches to incorporating the temporal dimension in static Bayesian networks.
[ { "version": "v1", "created": "Sat, 7 Apr 2012 13:55:29 GMT" } ]
1,334,188,800,000
[ [ "ghanmy", "Nabil", "" ], [ "Mahjoub", "Mohamed Ali", "" ], [ "Amara", "Najoua Essoukri Ben", "" ] ]
1204.1653
Ali Elouafiq
Ali Elouafiq
Machine Cognition Models: EPAM and GPS
EPAM, General Problem solver
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was a main driver behind the rise of civilizations. It started with deploying animals to automate tasks in agriculture (bulls), transportation (e.g. horses and donkeys), and even communication (pigeons). Millennia later came the Golden Age, with "Al-Jazari" and other Muslim inventors who were the pioneers of automation; centuries later, this gave birth to the industrial revolution in Europe. At the end of the nineteenth century, a new era began: the computational era, the most advanced technological and scientific development driving mankind and the reason behind all the evolutions of science, such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model a machine that behaves as they do, which pushed us to think about designing and implementing "things that think"; thus artificial intelligence was born. In this work we cover two of the major discoveries and studies in the field of machine cognition: the "Elementary Perceiver and Memorizer" (EPAM) and "The General Problem Solver" (GPS). The first focuses mainly on implementing human verbal-learning behavior, while the second tries to model an architecture able to solve problems generally (e.g. theorem proving, chess playing, and arithmetic). We cover the major goals and the main ideas of each model, compare their strengths and weaknesses, and give their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.
[ { "version": "v1", "created": "Sat, 7 Apr 2012 16:34:20 GMT" } ]
1,334,016,000,000
[ [ "Elouafiq", "Ali", "" ] ]
1204.1851
Alexander Artikis
Anastasios Skarlatidis, Alexander Artikis, Jason Filippou and Georgios Paliouras
A Probabilistic Logic Programming Event Calculus
Accepted for publication in the Theory and Practice of Logic Programming (TPLP) journal
Theory and Practice of Logic Programming 15 (2015) 213-245
10.1017/S1471068413000690
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
[ { "version": "v1", "created": "Mon, 9 Apr 2012 10:23:38 GMT" }, { "version": "v2", "created": "Mon, 29 Apr 2013 16:15:27 GMT" } ]
1,582,070,400,000
[ [ "Skarlatidis", "Anastasios", "" ], [ "Artikis", "Alexander", "" ], [ "Filippou", "Jason", "" ], [ "Paliouras", "Georgios", "" ] ]
1204.2018
Igor Subbotin
Igor Ya. Subbotin and Michael Gr. Voskoglou
Applications of fuzzy logic to Case-Based Reasoning
null
International Journal of Applications of Fuzzy Sets (ISSN 2241-1240) Vol. 1 ( 2011), 7-18
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The article discusses some applications of fuzzy logic ideas to the formalization of the Case-Based Reasoning (CBR) process and to measuring the effectiveness of CBR systems.
[ { "version": "v1", "created": "Tue, 10 Apr 2012 00:59:28 GMT" } ]
1,334,102,400,000
[ [ "Subbotin", "Igor Ya.", "" ], [ "Voskoglou", "Michael Gr.", "" ] ]
1204.3255
Manfred Jaeger
Manfred Jaeger
Lower Complexity Bounds for Lifted Inference
To appear in Theory and Practice of Logic Programming (TPLP)
Theory and Practice of Logic Programming 15 (2015) 246-263
10.1017/S1471068413000707
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the big challenges in the development of probabilistic relational (or probabilistic logical) modeling and learning frameworks is the design of inference techniques that operate on the level of the abstract model representation language, rather than on the level of ground, propositional instances of the model. Numerous approaches for such "lifted inference" techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established early on in (Jaeger 2000). However, it is not immediate that these results also apply to the type of modeling languages that currently receive the most attention, i.e., weighted, quantifier-free formulas. In this paper we extend these earlier results, and show that under the assumption that NETIME =/= ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier- and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference, and for knowledge bases not containing the equality predicate.
[ { "version": "v1", "created": "Sun, 15 Apr 2012 10:59:29 GMT" }, { "version": "v2", "created": "Thu, 2 May 2013 15:27:06 GMT" } ]
1,582,070,400,000
[ [ "Jaeger", "Manfred", "" ] ]
1204.3844
Abdelmalik Moujahid
Blanca Cases, Alicia D'Anjou, Abdelmalik Moujahid
On how percolation threshold affects PSO performance
null
LNCS, 2012, Volume 7208/2012, 509-520
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical evidence of the influence of neighborhood topology on the performance of particle swarm optimization (PSO) algorithms has been shown in many works. However, little has been done about the implications the percolation threshold could have in determining the topology of this neighborhood. This work addresses this problem for individuals that, like robots, are able to sense only a limited neighborhood around them. Based on the concept of percolation threshold, and more precisely the disk percolation model in 2D, we show that better results are obtained for low values of the radius, when individuals occasionally ask others for their best visited positions, with the consequent decrease in computational complexity. On the other hand, since the percolation threshold is a universal measure, it could be of great interest for comparing the performance of different hybrid PSO algorithms.
[ { "version": "v1", "created": "Tue, 17 Apr 2012 17:00:58 GMT" } ]
1,334,707,200,000
[ [ "Cases", "Blanca", "" ], [ "D'Anjou", "Alicia", "" ], [ "Moujahid", "Abdelmalik", "" ] ]
1204.4051
Martin Josef Geiger
Thibaut Barth\'elemy, Martin Josef Geiger, Marc Sevaux
Solution Representations and Local Search for the bi-objective Inventory Routing Problem
Proceedings of EU/ME 2012, Workshop on Metaheuristics for Global Challenges, May 10-11, 2012, Copenhagen, Denmark
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The solution of the bi-objective IRP is rather challenging, even for metaheuristics. We still lack a profound understanding of appropriate solution representations and effective neighborhood structures. Clearly, both the delivery volumes and the routing aspects of the alternatives need to be reflected in an encoding, and must be modified when searching by means of local search. Our work contributes to a better understanding of such solution representations. On the basis of an experimental investigation, the advantages and drawbacks of two encodings are studied and compared.
[ { "version": "v1", "created": "Wed, 18 Apr 2012 11:32:07 GMT" } ]
1,334,793,600,000
[ [ "Barthélemy", "Thibaut", "" ], [ "Geiger", "Martin Josef", "" ], [ "Sevaux", "Marc", "" ] ]
1204.4541
Patrick Taillandier
Patrick Taillandier (UMMISCO), Julien Gaffuri (COGIT)
Automatic Sampling of Geographic objects
null
GIScience, Zurich : Switzerland (2010)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, large datasets composed of thousands of geographic objects are available. However, for many processes, which require the appraisal of an expert or much computational time, only a small part of these objects can be taken into account. In this context, robust sampling methods become necessary. In this paper, we propose a sampling method based on clustering techniques. Our method consists in dividing the objects into clusters, then selecting in each cluster the most representative objects. A case study in the context of a process dedicated to knowledge revision for geographic data generalisation is presented. This case study shows that our method allows the selection of relevant samples of objects.
[ { "version": "v1", "created": "Fri, 20 Apr 2012 06:35:41 GMT" } ]
1,335,139,200,000
[ [ "Taillandier", "Patrick", "", "UMMISCO" ], [ "Gaffuri", "Julien", "", "COGIT" ] ]
1204.4989
Patrick Taillandier
Patrick Taillandier (COGIT, UMMISCO), C\'ecile Duch\^ene (COGIT), Alexis Drogoul (UMMISCO, MSI)
Using Belief Theory to Diagnose Control Knowledge Quality. Application to cartographic generalisation
Best paper award, International Conference on Computing and Communication Technologies (IEEE-RIVF), Danang : Viet Nam (2009)
null
10.1109/RIVF.2009.5174663
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both humans and artificial systems frequently use trial and error methods for problem solving. In order to be effective, this type of strategy implies having high-quality control knowledge to guide the quest for the optimal solution. Unfortunately, this control knowledge is rarely perfect. Moreover, in artificial systems, as in humans, self-evaluation of one's own knowledge is often difficult. Yet this self-evaluation can be very useful for managing knowledge and determining when to revise it. The objective of our work is to propose an automated approach to evaluate the quality of control knowledge in artificial systems based on a specific trial and error strategy, namely the informed tree search strategy. Our revision approach consists in analysing the system's execution logs and in using belief theory to evaluate the global quality of the knowledge. We present a real-world industrial application in the form of an experiment using this approach in the domain of cartographic generalisation. Thus far, the results of using our approach have been encouraging.
[ { "version": "v1", "created": "Mon, 23 Apr 2012 08:01:48 GMT" } ]
1,335,225,600,000
[ [ "Taillandier", "Patrick", "", "COGIT, UMMISCO" ], [ "Duchêne", "Cécile", "", "COGIT" ], [ "Drogoul", "Alexis", "", "UMMISCO, MSI" ] ]
1204.6415
Michael Gr. Voskoglou Prof. Dr.
Michael Gr. Voskoglou
A Fuzzy Model for Analogical Problem Solving
10 pages, 1 Table
International Journal of Fuzzy Logic Systems Vol. 2, No. 1, pp. 1-10, February 2012
10.5121/ijfls.2012.2101
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we develop a fuzzy model for the description of the process of Analogical Reasoning by representing its main steps as fuzzy subsets of a set of linguistic labels characterizing the individuals' performance in each step, and we use the Shannon-Wiener diversity index as a measure of the individuals' abilities in analogical problem solving. This model is compared with a stochastic model presented in the author's earlier papers by introducing a finite Markov chain on the steps of the process of Analogical Reasoning. A classroom experiment is also presented to illustrate the use of our results in practice.
[ { "version": "v1", "created": "Sat, 28 Apr 2012 16:16:46 GMT" } ]
1,335,830,400,000
[ [ "Voskoglou", "Michael Gr.", "" ] ]
1205.1645
Fran\c{c}ois Scharffe
Julien Plu and Fran\c{c}ois Scharffe
Publishing and linking transport data on the Web
Presented at the First International Workshop On Open Data, WOD-2012 (http://arxiv.org/abs/1204.3726)
null
null
WOD/2012/NANTES/13
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
Without Linked Data, transport data is limited to applications exclusively around transport. In this paper, we present a workflow for publishing and linking transport data on the Web. This will allow us to develop transport applications and to add other features created from other datasets, which becomes possible because the transport data will be linked to these datasets. We apply this workflow to two datasets: NEPTUNE, a French standard describing a transport line, and Passim, a directory containing relevant information on transport services in every French city.
[ { "version": "v1", "created": "Tue, 8 May 2012 09:50:35 GMT" } ]
1,336,521,600,000
[ [ "Plu", "Julien", "" ], [ "Scharffe", "François", "" ] ]
1205.2541
Changzhong Wang
Changzhong Wang, Baiqing Sun, Qinhua Hu
An improved approach to attribute reduction with covering rough sets
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attribute reduction is viewed as an important preprocessing step for pattern recognition and data mining. Most research has focused on attribute reduction using rough sets. Recently, Tsang et al. discussed attribute reduction with covering rough sets in the paper [E. C.C. Tsang, D. Chen, Daniel S. Yeung, Approximations and reducts with covering generalized rough sets, Computers and Mathematics with Applications 56 (2008) 279-289], where an approach based on the discernibility matrix was presented to compute all attribute reducts. In this paper, we provide an improved approach by constructing a simpler discernibility matrix with covering rough sets, and then proceed to improve some characterizations of attribute reduction provided by Tsang et al. It is proved that the improved discernibility matrix is equivalent to the old one, but the computational complexity of computing the discernibility matrix is greatly reduced.
[ { "version": "v1", "created": "Fri, 11 May 2012 14:45:52 GMT" } ]
1,336,953,600,000
[ [ "Wang", "Changzhong", "" ], [ "Sun", "Baiqing", "" ], [ "Hu", "Qinhua", "" ] ]
1205.2596
Fabio Cozman
Fabio Cozman and Avi Pfeffer
Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (2011)
null
null
null
UAI2011
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, which was held in Barcelona, Spain, July 14 - 17 2011.
[ { "version": "v1", "created": "Fri, 11 May 2012 18:35:50 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:30:01 GMT" } ]
1,409,270,400,000
[ [ "Cozman", "Fabio", "" ], [ "Pfeffer", "Avi", "" ] ]
1205.2597
Peter Grunwald
Peter Grunwald and Peter Spirtes
Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (2010)
null
null
null
UAI2010
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, which was held on Catalina Island, CA, July 8 - 11 2010.
[ { "version": "v1", "created": "Fri, 11 May 2012 18:40:29 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:29:00 GMT" } ]
1,409,270,400,000
[ [ "Grunwald", "Peter", "" ], [ "Spirtes", "Peter", "" ] ]
1205.2601
Changhe Yuan
Changhe Yuan, Xiaolu Liu, Tsai-Ching Lu, Heejin Lim
Most Relevant Explanation: Properties, Algorithms, and Evaluations
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-631-638
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks [12]. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure in automatically identifying the most relevant target variables and pruning less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, our study shows that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. A K-MRE algorithm based on these dominance relations is developed for generating a set of top solutions that are more representative. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks.
[ { "version": "v1", "created": "Wed, 9 May 2012 18:47:26 GMT" } ]
1,336,953,600,000
[ [ "Yuan", "Changhe", "" ], [ "Liu", "Xiaolu", "" ], [ "Lu", "Tsai-Ching", "" ], [ "Lim", "Heejin", "" ] ]
1205.2613
Matthias Thimm
Matthias Thimm
Measuring Inconsistency in Probabilistic Knowledge Bases
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-530-537
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops an inconsistency measure on conditional probabilistic knowledge bases. The measure is based on fundamental principles for inconsistency measures and thus provides a solid theoretical framework for the treatment of inconsistencies in probabilistic expert systems. We illustrate its usefulness and immediate application on several examples and present some formal results. Building on this measure, we use the Shapley value, a well-known solution concept for coalition games, to define a sophisticated indicator that is not only able to measure inconsistencies but to reveal the causes of inconsistencies in the knowledge base. Altogether these tools guide the knowledge engineer in his aim to restore consistency and therefore enable him to build a consistent and usable knowledge base that can be employed in probabilistic expert systems.
[ { "version": "v1", "created": "Wed, 9 May 2012 18:31:58 GMT" } ]
1,336,953,600,000
[ [ "Thimm", "Matthias", "" ] ]
1205.2616
Prithviraj Sen
Prithviraj Sen, Amol Deshpande, Lise Getoor
Bisimulation-based Approximate Lifted Inference
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-496-505
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the first-order model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic model to speedup inference. In particular, we describe approximate lifted inference techniques that allow the user to trade off inference accuracy for computational efficiency by using a handful of tunable parameters, while keeping the error bounded. Our algorithms are closely related to the graph-theoretic concept of bisimulation. We report experiments on both synthetic and real data to show that in the presence of symmetries, run-times for inference can be improved significantly, with approximate lifted inference providing orders of magnitude speedup over ground inference.
[ { "version": "v1", "created": "Wed, 9 May 2012 18:27:56 GMT" } ]
1,336,953,600,000
[ [ "Sen", "Prithviraj", "" ], [ "Deshpande", "Amol", "" ], [ "Getoor", "Lise", "" ] ]
1205.2619
Kevin Regan
Kevin Regan, Craig Boutilier
Regret-based Reward Elicitation for Markov Decision Processes
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-444-451
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The specification of a Markov decision process (MDP) can be difficult. Reward function specification is especially problematic; in practice, it is often cognitively complex and time-consuming for users to precisely specify rewards. This work casts the problem of specifying rewards as one of preference elicitation and aims to minimize the degree of precision with which a reward function must be specified while still allowing optimal or near-optimal policies to be produced. We first discuss how robust policies can be computed for MDPs given only partial reward information using the minimax regret criterion. We then demonstrate how regret can be reduced by efficiently eliciting reward information using bound queries, using regret reduction as a means for choosing suitable queries. Empirical results demonstrate that regret-based reward elicitation offers an effective way to produce near-optimal policies without resorting to the precise specification of the entire reward function.
[ { "version": "v1", "created": "Wed, 9 May 2012 18:23:30 GMT" } ]
1,336,953,600,000
[ [ "Regan", "Kevin", "" ], [ "Boutilier", "Craig", "" ] ]
1205.2621
Mathias Niepert
Mathias Niepert
Logical Inference Algorithms and Matrix Representations for Probabilistic Conditional Independence
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-428-435
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logical inference algorithms for conditional independence (CI) statements have important applications, from testing consistency during knowledge elicitation to constraint-based structure learning of graphical models. We prove that the implication problem for CI statements is decidable, given that the size of the domains of the random variables is known and fixed. We will present an approximate logical inference algorithm which combines a falsification and a novel validation algorithm. The validation algorithm represents each set of CI statements as a sparse 0-1 matrix A and validates instances of the implication problem by solving specific linear programs with constraint matrix A. We will show experimentally that the algorithm is both effective and efficient in validating and falsifying instances of the probabilistic CI implication problem.
[ { "version": "v1", "created": "Wed, 9 May 2012 17:28:17 GMT" } ]
1,336,953,600,000
[ [ "Niepert", "Mathias", "" ] ]
1205.2634
Samantha Kleinberg
Samantha Kleinberg, Bud Mishra
The Temporal Logic of Causal Structures
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-303-312
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational analysis of time-course data with an underlying causal structure is needed in a variety of domains, including neural spike trains, stock price movements, and gene expression levels. However, it can be challenging to determine from just the numerical time course data alone what is coordinating the visible processes, to separate the underlying prima facie causes into genuine and spurious causes and to do so with a feasible computational complexity. For this purpose, we have been developing a novel algorithm based on a framework that combines notions of causality in philosophy with algorithmic approaches built on model checking and statistical techniques for multiple hypotheses testing. The causal relationships are described in terms of temporal logic formulae, reframing the inference problem in terms of model checking. The logic used, PCTL, allows description of both the time between cause and effect and the probability of this relationship being observed. We show that equipped with these causal formulae with their associated probabilities we may compute the average impact a cause makes to its effect and then discover statistically significant causes through the concepts of multiple hypothesis testing (treating each causal relationship as a hypothesis), and false discovery control. By exploring a well-chosen family of potentially all significant hypotheses with reasonably minimal description length, it is possible to tame the algorithm's computational complexity while exploring the nearly complete search-space of all prima facie causes. We have tested these ideas in a number of domains and illustrate them here with two examples.
[ { "version": "v1", "created": "Wed, 9 May 2012 15:45:06 GMT" } ]
1,336,953,600,000
[ [ "Kleinberg", "Samantha", "" ], [ "Mishra", "Bud", "" ] ]
1205.2635
Jacek Kisynski
Jacek Kisynski, David L Poole
Constraint Processing in Lifted Probabilistic Inference
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-293-302
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
First-order probabilistic models combine the representational power of first-order logic with that of graphical models. There is an ongoing effort to design lifted inference algorithms for first-order probabilistic models. We analyze lifted inference from the perspective of constraint processing and, through this viewpoint, we analyze and compare existing approaches and expose their advantages and limitations. Our theoretical results show that the wrong choice of constraint processing method can lead to an exponential increase in computational complexity. Our empirical tests confirm the importance of constraint processing in lifted inference. This is the first theoretical and empirical study of constraint processing in lifted inference.
[ { "version": "v1", "created": "Wed, 9 May 2012 15:41:10 GMT" } ]
1,336,953,600,000
[ [ "Kisynski", "Jacek", "" ], [ "Poole", "David L", "" ] ]
1205.2637
Kristian Kersting
Kristian Kersting, Babak Ahmadi, Sriraam Natarajan
Counting Belief Propagation
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-277-284
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with many symmetries that are not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of clusternodes and clusterfactors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and Boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.
[ { "version": "v1", "created": "Wed, 9 May 2012 15:37:58 GMT" } ]
1,336,953,600,000
[ [ "Kersting", "Kristian", "" ], [ "Ahmadi", "Babak", "" ], [ "Natarajan", "Sriraam", "" ] ]