GotAI.NET
Forum: Problems of Artificial Intelligence
Topic: Re: Ai Drew :: IJCAI 09 :: International AI conference: Late summer 2009 - The Essentials in Brief
Capt.Drew | Posted: 25 Aug 09, 5:40
PART-9: UNCERTAINTY in AI
1883p
Capt.Drew | Posted: 25 Aug 09, 5:41
PART-9: UNCERTAINTY in AI:
=312=> 1884p
Markov Network based Ontology Matching, Sivan Gali Albagli, S. E. Shimony, Rachel Ben-Eliyahu-Zohary
http://ijcai.org/papers09/Abstracts/312.html
iMatch is a probabilistic scheme for ontology matching based on Markov networks, which has several advantages over other probabilistic schemes. First, it uses undirected networks, which better support the non-causal nature of the dependencies. Second, it handles the high computational complexity involved by approximate reasoning, rather than by ad-hoc pruning. Third, the probabilities that it uses are learned from matched data. Finally, iMatch naturally supports interactive semi-automatic matches. Experiments using the standard benchmark tests that compare our approach with the most promising existing systems show that iMatch is one of the top performers.
text:
http://ijcai.org/papers09/Papers/IJCAI09-312.pdf
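To make the Markov-network formulation a bit more tangible, here is a tiny Python sketch (not iMatch itself): each candidate correspondence between the two ontologies is a binary variable, unary potentials stand in for the match probabilities the paper learns from data, a pairwise potential rewards correspondences whose parent concepts also match, and the MAP assignment is found by brute force instead of the approximate inference a real matcher would use. All names and scores below are made up.

from itertools import product

parent1 = {"Car": "Vehicle", "Vehicle": None}             # ontology 1: child -> parent
parent2 = {"Automobile": "Transport", "Transport": None}  # ontology 2: child -> parent

# Unary scores standing in for match probabilities learned from aligned data.
unary = {("Car", "Automobile"): 0.9, ("Car", "Transport"): 0.2,
         ("Vehicle", "Automobile"): 0.3, ("Vehicle", "Transport"): 0.6}

cands = list(unary)   # candidate correspondences = binary variables of the network

def score(assign):
    s = 0.0
    for c, on in assign.items():                    # unary factors
        s += (unary[c] - 0.5) if on else (0.5 - unary[c])
    for (a, b), on in assign.items():               # pairwise factor: if two concepts
        pa, pb = parent1[a], parent2[b]             # match, reward matching their parents
        if on and pa and pb and assign.get((pa, pb)):
            s += 1.0
    return s

best = max((dict(zip(cands, bits)) for bits in product([0, 1], repeat=len(cands))),
           key=score)
print([c for c, on in best.items() if on])          # expect Car/Automobile, Vehicle/Transport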
Capt.Drew | Posted: 25 Aug 09, 5:42
PART-9: UNCERTAINTY in AI:
=313=> 1890p
Ceteris Paribus Preference Elicitation with Predictive Guarantees, Yannis Dimopoulos, Loizos Michael, Fani Athienitou
http://ijcai.org/papers09/Abstracts/313.html
CP-networks have been proposed as a simple and intuitive graphical tool for representing conditional ceteris paribus preference statements over the values of a set of variables. While the problem of reasoning with CP-networks has been receiving some attention, there are very few works that address the problem of learning CP-networks. In this work we investigate the task of learning CP-networks, given access to a set of pairwise comparisons. We first prove that the learning problem is intractable, even under several simplifying assumptions. We then present an algorithm that, under certain assumptions about the observed pairwise comparisons, identifies a CP-network that entails these comparisons. We finally show that the proposed algorithm is a PAC-learner, and, thus, that the CP-networks it induces accurately predict the user's preferences on previously unseen situations.
text:
http://ijcai.org/papers09/Papers/IJCAI09-313.pdf
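As a toy illustration of the learning setting (not the paper's algorithm), the Python sketch below fills in conditional preference tables from "swap" comparisons, i.e. pairs of outcomes that differ on exactly one variable. It assumes the parent sets are already known, which sidesteps exactly the part the paper proves hard; the variables, domains and comparisons are invented.

parents = {"main": [], "wine": ["main"]}     # assumed-known CP-net structure

# Each comparison is (better_outcome, worse_outcome); outcomes assign every variable.
comparisons = [
    ({"main": "fish", "wine": "white"}, {"main": "meat", "wine": "white"}),
    ({"main": "fish", "wine": "white"}, {"main": "fish", "wine": "red"}),
    ({"main": "meat", "wine": "red"},   {"main": "meat", "wine": "white"}),
]

cpt = {}   # (variable, parent context) -> (preferred value, dispreferred value)
for better, worse in comparisons:
    diff = [v for v in better if better[v] != worse[v]]
    if len(diff) != 1:
        continue                              # not a swap; this sketch ignores it
    x = diff[0]
    ctx = tuple(sorted((p, better[p]) for p in parents[x]))
    rule = (better[x], worse[x])
    if cpt.setdefault((x, ctx), rule) != rule:
        raise ValueError(f"comparisons conflict for {x} given {dict(ctx)}")

for (x, ctx), (good, bad) in sorted(cpt.items()):
    print(f"given {dict(ctx)}: prefer {x}={good} over {x}={bad}")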
Capt.Drew | Posted: 25 Aug 09, 5:42
PART-9: UNCERTAINTY in AI:
=314=> 1896p
Fast Recommendations using GAI Models, Jean-Philippe Dubus, Christophe Gonzales, Patrice Perny
http://ijcai.org/papers09/Abstracts/314.html
This paper deals with Decision-Making in the context of multiattribute utility theory and, more precisely, with the problem of efficiently determining the best alternative w.r.t. an agent's preferences (choice problem). We assume that alternatives are elements of a product set of attributes and that the agent's preferences are represented by a generalized additive decomposable (GAI) utility on this set. Such a function allows an efficient representation of interactions between attributes while preserving some decomposability of the model. GAI utilities can be compiled into graphical structures called GAI networks that can be exploited to solve choice problems using collect/distribute schemes essentially similar to those used in Bayesian networks. In this paper, rather than directly using this scheme on the GAI network for determining the most preferred alternative, we propose to work with another GAI function, acting as an upper-bound on utility values and enhancing the model's decomposability. This method still provides the exact optimal solution but significantly speeds up the search. It proves to be particularly useful when dealing with choice and ranking under constraints and within collective Decision-Making, where GAI nets tend to have a large size. We present an efficient algorithm for determining this new GAI function and provide experimental results highlighting the practical efficiency of our procedure.
text:
http://ijcai.org/papers09/Papers/IJCAI09-314.pdf
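For readers unfamiliar with GAI decompositions, the Python sketch below shows what such a utility looks like and solves the choice problem by brute-force enumeration of the product set. The paper's whole point is to avoid this enumeration via GAI networks and an upper-bounding GAI function, which the sketch does not attempt; the attributes and utility tables are made up.

from itertools import product

domains = {"main": ["meat", "fish"], "wine": ["red", "white"], "dessert": ["cake", "fruit"]}

tables = [                                   # GAI terms: u1(main, wine) + u2(wine, dessert)
    (("main", "wine"),    {("meat", "red"): 5, ("meat", "white"): 2,
                           ("fish", "red"): 1, ("fish", "white"): 4}),
    (("wine", "dessert"), {("red", "cake"): 3, ("red", "fruit"): 1,
                           ("white", "cake"): 2, ("white", "fruit"): 3}),
]

def utility(alt):
    """Sum of local utilities: the GAI decomposition of the agent's preferences."""
    return sum(tbl[tuple(alt[a] for a in scope)] for scope, tbl in tables)

names = list(domains)
best = max((dict(zip(names, vals)) for vals in product(*domains.values())), key=utility)
print(best, "utility =", utility(best))      # -> meat, red, cake with utility 8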
Capt.Drew | Posted: 25 Aug 09, 5:42
PART-9: UNCERTAINTY in AI:
=315=> 1902p
Multiobjective Optimization using GAI Models, Jean-Philippe Dubus, Christophe Gonzales, Patrice Perny
http://ijcai.org/papers09/Abstracts/315.html
This paper deals with multiobjective optimization in the context of multiattribute utility theory. The alternatives (feasible solutions) are seen as elements of a product set of attributes and preferences over solutions are represented by generalized additive decomposable (GAI) utility functions modeling individual preferences or criteria. Due to decomposability, utility vectors attached to solutions can be compiled into a graphical structure closely related to junction trees, the so-called GAI net. We first show how the structure of the GAI net can be used to efficiently determine the exact set of Pareto-optimal solutions in a product set and provide numerical tests on random instances. Since the exact determination of the Pareto set is intractable in the worst case, we propose a near-admissible algorithm with a performance guarantee, exploiting the GAI structure to approximate the set of Pareto-optimal solutions. We present numerical experiments showing that both utility decomposition and approximation significantly improve resolution times in multiobjective search problems.
text:
http://ijcai.org/papers09/Papers/IJCAI09-315.pdf
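The target object here, the Pareto set, is easy to state in code. The Python sketch below is the naive filter over explicit utility vectors; the paper computes the same set without enumerating all solutions by exploiting the GAI-net structure. The vectors are invented.

def dominates(u, v):
    """u Pareto-dominates v: at least as good on every objective, better on one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(vectors):
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v != u)]

utility_vectors = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 1), (4, 2)]
print(pareto_front(utility_vectors))   # -> [(3, 5), (4, 4), (2, 6)]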
Capt.Drew | Posted: 25 Aug 09, 5:42
PART-9: UNCERTAINTY in AI:
=316=> 1908p
Greedy Algorithms for Sequential Sensing Decisions, H. Hajishirzi, A. Shirazi, J. Choi, E. Amir
http://ijcai.org/papers09/Abstracts/316.html
In many real-world situations we are charged with detecting change as soon as possible. Important examples include detecting medical conditions, detecting security breaches, and updating caches of distributed databases. In those situations, sensing can be expensive, but it is also important to detect change in a timely manner. In this paper we present tractable greedy algorithms and prove that they solve this decision problem either optimally or approximate the optimal solution in many cases. Our problem model is a POMDP that includes a cost for sensing, a cost for delayed detection, a reward for successful detection, and no-cost partial observations. Making optimal decisions is difficult in general. We show that our tractable greedy approach finds optimal policies for sensing both a single variable and multiple correlated variables. Further, we provide approximations for the optimal solution to multiple hidden or observed variables per step. Our algorithms outperform previous algorithms in experiments over simulated data and live Wikipedia WWW pages.
text:
http://ijcai.org/papers09/Papers/IJCAI09-316.pdf
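A minimal Python sketch of the cost structure, not of the paper's algorithms: a hidden flag flips with a small hazard rate, sensing costs a fixed amount and reveals the flag, and every step an occurred change stays undetected incurs a delay cost. The myopic rule below senses as soon as the expected one-step delay cost exceeds the sensing cost; all rates and costs are assumed.

import random

random.seed(0)
hazard, c_sense, c_delay = 0.05, 0.2, 1.0    # assumed rates and costs

def run(max_steps=200):
    changed, belief, total_cost = False, 0.0, 0.0
    for t in range(max_steps):
        if not changed and random.random() < hazard:
            changed = True                            # the hidden change happens
        belief = belief + (1.0 - belief) * hazard     # predictive belief that it happened
        if changed:
            total_cost += c_delay                     # change exists but is still undetected
        if belief * c_delay > c_sense:                # myopic rule: is sensing worth it now?
            total_cost += c_sense
            if changed:
                return t, total_cost                  # detected; stop
            belief = 0.0                              # sensed, saw no change yet
    return max_steps, total_cost

print("detected at step, total cost:", run())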
Capt.Drew | Posted: 25 Aug 09, 5:42
PART-9: UNCERTAINTY in AI:
=317=> 1916p
Generalized First Order Decision Diagrams for First Order Markov Decision Processes, Saket Subhash Joshi, Kristian Kersting, Roni Khardon
http://ijcai.org/papers09/Abstracts/317.html
First order decision diagrams (FODD) were recently introduced as a compact knowledge representation expressing functions over relational structures. FODDs represent numerical functions that, when constrained to the Boolean range, use only existential quantification. Previous work developed a set of operations over FODDs, showed how they can be used to solve relational Markov decision processes (RMDP) using dynamic programming algorithms, and demonstrated their success in solving stochastic planning problems from the International Planning Competition in the system FODD-Planner. A crucial ingredient of this scheme is a set of operations to remove redundancy in decision diagrams, thus keeping them compact. This paper makes three contributions. First, we introduce Generalized FODDs (GFODD) and combination algorithms for them, generalizing FODDs to arbitrary quantification. Second, we show how GFODDs can be used in principle to solve RMDPs with arbitrary quantification, and develop a particularly promising case where an arbitrary number of existential quantifiers is followed by an arbitrary number of universal quantifiers. Third, we develop a new approach to reduce FODDs and GFODDs using model checking. This yields a reduction that is complete for FODDs and provides a sound reduction procedure for GFODDs.
text:
http://ijcai.org/papers09/Papers/IJCAI09-317.pdf
Capt.Drew | Posted: 25 Aug 09, 5:43
PART-9: UNCERTAINTY in AI:
=318=> 1922p
Lifted Aggregation in Directed First-order Probabilistic Models, Jacek Kisyński, David Poole
http://ijcai.org/papers09/Abstracts/318.html
As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms for such models. This paper discusses directed first-order models that require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend Milch et al.'s C-FOVE algorithm to perform lifted inference in the presence of aggregation parfactors. We also show that there are cases where the polynomial time complexity (in domain size of logical variables) of the C-FOVE algorithm can be reduced to logarithmic time complexity using aggregation parfactors.
text:
http://ijcai.org/papers09/Papers/IJCAI09-318.pdf
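The logarithmic-versus-linear point can be shown in miniature for the simplest aggregation pattern, OR over n i.i.d. boolean parents: the propositional model sums out n ground variables, whereas the lifted computation only needs (1 - p)^n, which exponentiation by squaring delivers in O(log n) multiplications. The Python sketch below shows just that arithmetic; aggregation parfactors and C-FOVE are of course far more general.

def power(base, n):
    """base ** n with O(log n) multiplications."""
    result, mults = 1.0, 0
    while n > 0:
        if n & 1:
            result *= base
            mults += 1
        base *= base
        mults += 1
        n >>= 1
    return result, mults

p, n = 0.001, 1_000_000            # per-parent probability and population size
q, mults = power(1.0 - p, n)       # P(all n parents are false)
print(f"P(child = true) = {1.0 - q:.6f} after {mults} multiplications, not {n}")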
=======================> IJCAI Distinguished Paper Award Winner!
PART-9: UNCERTAINTY in AI:
=319=> 1930p
Learning Conditional Preference Networks with Queries, Frederic Koriche, Bruno Zanuttini
http://ijcai.org/papers09/Abstracts/319.html
We investigate the problem of eliciting CP-nets in the well-known model of exact learning with equivalence and membership queries. The goal is to identify a preference ordering with a binary-valued CP-net by guiding the user through a sequence of queries. Each example is a dominance test on some pair of outcomes. In this setting, we show that acyclic CP-nets are not learnable with equivalence queries alone, while they are learnable with the help of membership queries if the supplied examples are restricted to swaps. A similar property holds for tree CP-nets with arbitrary examples. In fact, membership queries allow us to provide attribute-efficient algorithms for which the query complexity is only logarithmic in the number of attributes. Such results highlight the utility of this model for eliciting CP-nets in large multi-attribute domains.
text:
http://ijcai.org/papers09/Papers/IJCAI09-319.pdf
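The "swap" examples the paper restricts membership queries to are easy to answer mechanically: for two outcomes differing on a single variable X, dominance in an acyclic CP-net is read off X's preference table under the shared parent values. The Python sketch below implements only this oracle side on an invented two-variable CP-net, not the elicitation algorithm.

cpnet = {
    "main": ([], {(): ("fish", "meat")}),                 # values listed best-first
    "wine": (["main"], {("fish",): ("white", "red"),
                        ("meat",): ("red", "white")}),
}

def swap_dominates(o1, o2):
    """For a swap pair (outcomes differing on one variable), is o1 preferred to o2?"""
    diff = [v for v in o1 if o1[v] != o2[v]]
    if len(diff) != 1:
        return None                                       # not a swap; not handled here
    x = diff[0]
    parents, table = cpnet[x]
    order = table[tuple(o1[p] for p in parents)]          # parents agree on o1 and o2
    return order.index(o1[x]) < order.index(o2[x])

print(swap_dominates({"main": "fish", "wine": "white"},
                     {"main": "fish", "wine": "red"}))    # True: given fish, white > red
print(swap_dominates({"main": "meat", "wine": "white"},
                     {"main": "meat", "wine": "red"}))    # False: given meat, red > white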
Capt.Drew | Posted: 25 Aug 09, 5:43
PART-9: UNCERTAINTY in AI:
=320=> 1936p
A Sparse Covariance Function for Exact Gaussian Process Inference in Large Datasets, Arman Melkumyan, Fabio Tozeto Ramos
http://ijcai.org/papers09/Abstracts/320.html
Despite the success of Gaussian processes (GPs) in modelling spatial stochastic processes, dealing with large datasets is still challenging. The problem arises from the need to invert a potentially large covariance matrix during inference. In this paper we address the complexity problem by constructing a new stationary covariance function (Mercer kernel) that naturally provides a sparse covariance matrix. The sparseness of the matrix is defined by hyper-parameters optimised during learning. The new covariance function enables exact GP inference and performs comparably to the squared-exponential one, at a lower computational cost. This allows the application of GPs to large-scale problems such as ore grade prediction in mining or 3D surface modelling. Experiments show that using the proposed covariance function, very sparse covariance matrices are normally obtained, which can be effectively used for faster inference and less memory usage.
text:
http://ijcai.org/papers09/Papers/IJCAI09-320.pdf
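The general mechanism, compact support giving a sparse covariance matrix while exact inference stays available, can be sketched with any compactly supported positive-definite kernel. The Python sketch below uses a Wendland-type kernel as a stand-in (it is not the covariance function derived in the paper) and solves the exact GP posterior mean through a sparse linear system; the data and hyper-parameters are made up.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def wendland(r, scale):
    """Compactly supported Wendland C2 kernel: exactly zero once r exceeds `scale`."""
    t = np.clip(r / scale, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 50.0, size=400))
y = np.sin(x / 3.0) + 0.1 * rng.standard_normal(x.size)

K = wendland(np.abs(x[:, None] - x[None, :]), scale=4.0)
print(f"covariance matrix is {np.count_nonzero(K) / K.size:.1%} nonzero")

A = sparse.csc_matrix(K + 0.1 ** 2 * np.eye(x.size))   # noisy sparse covariance
x_star = np.linspace(0.0, 50.0, 5)
K_star = wendland(np.abs(x_star[:, None] - x[None, :]), scale=4.0)
print("posterior mean:", np.round(K_star @ spsolve(A, y), 2))   # exact GP regression mean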
=======================
PART-9: UNCERTAINTY in AI:
=321=> 1943p
CTPPL: A Continuous Time Probabilistic Programming Language, Avi Pfeffer
http://ijcai.org/papers09/Abstracts/321.html
Probabilistic programming languages allow a modeler to build probabilistic models using complex data structures with all the power of a programming language. We present CTPPL, an expressive probabilistic programming language for dynamic processes that models processes using continuous time. Time is a first-class element in our language; the amount of time taken by a subprocess can be specified using the full power of the language. We show through examples that CTPPL can easily represent existing continuous time frameworks and makes it easy to represent new ones. We present semantics for CTPPL in terms of a probability measure over trajectories. We present a particle filtering algorithm for the language that works for a large and useful class of CTPPL programs.
text:
http://ijcai.org/papers09/Papers/IJCAI09-321.pdf
Capt.Drew | Posted: 25 Aug 09, 5:43
PART-9: UNCERTAINTY in AI:
=322=> 1951p
Speeding Up Exact Solutions of Interactive Dynamic Influence Diagrams using Action Equivalence, Yifeng Zeng, Prashant Doshi
http://ijcai.org/papers09/Abstracts/322.html
Interactive dynamic influence diagrams (I-DIDs) are graphical models for sequential decision making in partially observable settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of candidate models ascribed to other agents, over time. The previous approach for exactly solving I-DIDs groups together models having similar solutions into behaviorally equivalent classes and updates these classes. We present a new method that, in addition to aggregating behaviorally equivalent models, further groups models that prescribe identical actions at a single time step. We show how to update these augmented classes and prove that our method is exact. The new approach enables us to bound the aggregated model space by the cardinality of other agents' actions. We evaluate its performance and provide empirical results in support.
text:
http://ijcai.org/papers09/Papers/IJCAI09-322.pdf
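The grouping step on its own is simple to picture: candidate models of the other agent are clustered by the action their solved policy prescribes at the current step, so at most |actions| classes survive however many models were ascribed. The Python sketch below shows only this partitioning on invented model names; belief updates and the rest of the I-DID machinery are assumed away.

from collections import defaultdict

# Hypothetical candidate models of the other agent: name -> action its policy prescribes now.
models = {"m1": "listen", "m2": "open_left", "m3": "listen",
          "m4": "open_right", "m5": "listen", "m6": "open_left"}

classes = defaultdict(list)
for name, action in models.items():
    classes[action].append(name)           # action-equivalence classes

representatives = {action: members[0] for action, members in classes.items()}
print(dict(classes))                       # 6 models collapse into 3 classes
print(representatives)                     # one representative model kept per class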
=======================
PART-9: UNCERTAINTY in AI:
=323=> 1957p
Testing Edges by Truncations, Ilya Shpitser, Thomas S. Richardson, James M. Robins
http://ijcai.org/papers09/Abstracts/323.html
We consider the problem of testing whether two variables should be adjacent (either due to a direct effect between them, or due to a hidden common cause) given an observational distribution, and a set of causal assumptions encoded as a causal diagram. In other words, given a set of edges in the diagram known to be true, we are interested in testing whether another edge ought to be in the diagram. In fully observable faithful models this problem can be easily solved with conditional independence tests. Latent variables make the problem significantly harder since they can imply certain non-adjacent variable pairs, namely those connected by so-called inducing paths, are not independent conditioned on any set of variables. We characterize which variable pairs can be determined to be non-adjacent by a class of constraints due to dormant independence, that is, conditional independence in identifiable interventional distributions. Furthermore, we show that particular operations on joint distributions, which we call truncations, are sufficient for exhibiting these non-adjacencies. This suggests that a causal discovery procedure taking advantage of these constraints in the latent variable case can restrict itself to truncations.
text:
http://ijcai.org/papers09/Papers/IJCAI09-323.pdf
Capt.Drew | Posted: 25 Aug 09, 5:43
PART-9: UNCERTAINTY in AI:
=324=> 1964p
Variable and Value Ordering for MPE Search, Sajjad Ahmed Siddiqi, Jinbo Huang
http://ijcai.org/papers09/Abstracts/324.html
In Bayesian networks, a most probable explanation (MPE) is a most likely instantiation of all network variables given a piece of evidence. Solving (the decision version of) an MPE query is NP-hard. Recent work proposed a branch-and-bound search algorithm that finds exact solutions to MPE queries, where bounds are computed on a relaxed network obtained by a technique known as node splitting. In this work we study the impact of variable and value ordering on such a search algorithm. We study several heuristics based on the entropies of variables and on the notion of nogoods, and propose a new meta-heuristic that combines their strengths. Experiments indicate that search efficiency is significantly improved, allowing many hard problems to be solved for the first time.
text:
http://ijcai.org/papers09/Papers/IJCAI09-324.pdf
=======================
PART-9: UNCERTAINTY in AI:
=325=> 1970p
Parameter Identification in a Class of Linear Structural Equation Models, Jin Tian
http://ijcai.org/papers09/Abstracts/325.html
Linear causal models known as structural equation models (SEMs) are widely used for data analysis in the social sciences, economics, and artificial intelligence, in which random variables are assumed to be continuous and normally distributed. This paper deals with one fundamental problem in the applications of SEMs -- parameter identification. The paper uses the graphical models approach and provides a procedure for solving the identification problem in a special class of SEMs.
text:
http://ijcai.org/papers09/Papers/IJCAI09-325.pdf
=======================
PART-9: UNCERTAINTY in AI:
=326=> 1976p
Learning a Value Analysis Tool For Agent Evaluation, Martha White, Michael Bowling
http://ijcai.org/papers09/Abstracts/326.html
Evaluating an agent's performance in a stochastic setting is necessary for agent development, scientific evaluation, and competitions. Traditionally, evaluation is done using Monte Carlo estimation; the magnitude of the stochasticity in the domain or the high cost of sampling, however, can often prevent the approach from resulting in statistically significant conclusions. Recently, an advantage sum technique has been proposed for constructing unbiased, low variance estimates of agent performance. The technique requires an expert to define a value function over states of the system, essentially a guess of the state's unknown value. In this work, we propose learning this value function from past interactions between agents in some target population. Our learned value functions have two key advantages: they can be applied in domains where no expert value function is available and they can result in tuned evaluation for a specific population of agents (e.g., novice versus advanced agents). We demonstrate these two advantages in the domain of poker. We show that we can reduce variance over state-of-the-art estimators for a specific population of limit poker players as well as construct the first variance reducing estimators for no-limit poker and multi-player limit poker.
text:
http://ijcai.org/papers09/Papers/IJCAI09-326.pdf
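A control-variate version of the advantage-sum idea can be shown on a toy game: the payoff is two dice rolls (pure chance) plus a skill-dependent bonus, the "value function" at a chance node is just the dice mean, and subtracting the zero-mean luck terms leaves an unbiased but much lower-variance estimate of expected payoff. The Python sketch below uses made-up probabilities and is not specific to the learned value functions or poker domains of the paper.

import random, statistics

random.seed(1)
DIE_MEAN = 3.5                # value guess at a chance node: expected roll of one die

def play_once():
    roll1, roll2 = random.randint(1, 6), random.randint(1, 6)
    bonus = 1.0 if random.random() < 0.8 else 0.0          # part attributable to the agent
    payoff = roll1 + roll2 + bonus
    luck = (roll1 - DIE_MEAN) + (roll2 - DIE_MEAN)          # zero-mean chance terms
    return payoff, payoff - luck                            # raw and corrected samples

raw, corrected = zip(*(play_once() for _ in range(10_000)))
print("plain Monte Carlo  :", round(statistics.mean(raw), 3), "stdev", round(statistics.stdev(raw), 3))
print("advantage-sum style:", round(statistics.mean(corrected), 3), "stdev", round(statistics.stdev(corrected), 3))
# Both estimate the same expected payoff (7.8); the corrected samples vary only via the bonus.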
Capt.Drew | Posted: 25 Aug 09, 5:43
PART-9: UNCERTAINTY in AI:
=327=> 1982p
Efficient Computation of Jointree Bounds for Systematic MAP Search, Changhe Yuan, Eric A. Hansen
http://ijcai.org/papers09/Abstracts/327.html
The MAP (maximum a posteriori assignment) problem in Bayesian networks is the problem of finding the most probable instantiation of a set of variables given partial evidence for the remaining variables. The state-of-the-art exact solution method is depth-first branch-and-bound search using dynamic variable ordering and a jointree upper bound proposed by Park and Darwiche [2003]. Since almost all search time is spent computing the jointree bounds, we introduce an efficient method for computing these bounds incrementally. We point out that, using a static variable ordering, it is only necessary to compute relevant upper bounds at each search step, and it is also possible to cache potentials of the jointree for efficient backtracking. Since the jointree computation typically produces bounds for joint configurations of groups of variables, our method also instantiates multiple variables at each search step, instead of a single variable, in order to reduce the number of times that upper bounds need to be computed. Experiments show that this approach leads to orders of magnitude reduction in search time.
text:
http://ijcai.org/papers09/Papers/IJCAI09-327.pdf
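The search skeleton the bounds plug into looks roughly like the Python sketch below: depth-first branch and bound over the MAP variables with an admissible upper bound evaluated at every partial assignment. Here the bound is the trivial product of per-factor maxima and every variable is a MAP variable (so nothing is summed out); the paper's contributions, incremental jointree bounds, caching and multi-variable steps, are exactly what this sketch leaves out. The network and potentials are invented.

domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
factors = [                                      # scope -> non-negative potential table
    (("A",), {(0,): 0.6, (1,): 0.4}),
    (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}),
    (("B", "C"), {(0, 0): 0.2, (0, 1): 0.8, (1, 0): 0.5, (1, 1): 0.5}),
]

def upper_bound(assign):
    """Product over factors of the best entry consistent with the partial assignment."""
    ub = 1.0
    for scope, table in factors:
        ub *= max(val for key, val in table.items()
                  if all(assign.get(v) in (None, key[i]) for i, v in enumerate(scope)))
    return ub

best = {"score": 0.0, "assign": None}

def search(order, assign):
    if not order:                                # complete assignment: bound equals score
        best["score"], best["assign"] = upper_bound(assign), dict(assign)
        return
    var, rest = order[0], order[1:]
    for val in domains[var]:
        assign[var] = val
        if upper_bound(assign) > best["score"]:  # prune branches that cannot beat the best
            search(rest, assign)
        del assign[var]

search(list(domains), {})
print(best)                                      # -> A=0, B=0, C=1 with score 0.432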
=======================
PART-9: UNCERTAINTY in AI:
=328=> 1990p
A Syntax-based Framework for Merging Imprecise Probabilistic Logic Programs, Anbu Yue, Weiru Liu
http://ijcai.org/papers09/Abstracts/328.html
In this paper, we address the problem of merging multiple imprecise probabilistic beliefs represented as Probabilistic Logic Programs (PLPs) obtained from multiple sources. Beliefs in each PLP are modeled as conditional events attached with probability bounds. The major task of syntax-based merging is to obtain the most rational probability bound for each conditional event from the original PLPs to form a new PLP. We require the minimal change principle to be followed so that each source gives up its beliefs as little as possible. Some instantiated merging operators are derived from our merging framework. Furthermore, we propose a set of postulates for merging PLPs, some of which extend the postulates for merging classical knowledge bases, whilst others are specific to the merging of probabilistic beliefs.
text:
http://ijcai.org/papers09/Papers/IJCAI09-328.pdf
=======================
PART-9: UNCERTAINTY in AI:
=330=> 2002p
A General Approach to Environment Design with One Agent, Haoqi Zhang, Yiling Chen, David C. Parkes
http://ijcai.org/papers09/Abstracts/330.html
The problem of environment design considers a setting in which an interested party aims to influence an agent's decisions by making limited changes to the agent's environment. Zhang and Parkes [2008] first introduced the environment design concept for a specific problem in the Markov Decision Process setting. In this paper, we present a general framework for the formulation and solution of environment design problems. We consider both the case in which the agent's local decision model is known to the interested party and the case in which it is only partially known, and illustrate the framework and results on a linear programming setting. For the latter problem, we formulate an active, indirect elicitation method and provide conditions for convergence and logarithmic convergence. We relate to the problem of inverse optimization and also offer a game-theoretic interpretation of our methods.
text:
http://ijcai.org/papers09/Papers/IJCAI09-330.pdf