GotAI.NET
Forum: Problems of Artificial Intelligence

Topic: Re: Ai Drew :: IJCAI 09 :: International AI Conference, Late Summer 2009: The Essentials
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:10

======================
PART 4: MACHINE LEARNING (starts at p. 0973)
Capt.Drew
Posted: 21 Aug 09 7:10

=165=> p. 0974
Exponential Family Hybrid Semi-Supervised Learning
Arvind Agarwal, Hal Daume III
http://ijcai.org/papers09/Abstracts/165.html
We present an approach to semi-supervised learning based on an exponential family characterization. Our approach generalizes previous work on coupled priors for hybrid generative/discriminative models. Our model is more flexible and natural than previous approaches. Experimental results on several data sets show that our approach also performs better in practice.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-165.pdf
Capt.Drew
Posted: 21 Aug 09 7:11

=166=> p. 0980
Active Policy Iteration: Efficient Exploration through Active Learning for Value Function Approximation in Reinforcement Learning
Takayuki Akiyama, Hirotaka Hachiya, Masashi Sugiyama
http://ijcai.org/papers09/Abstracts/166.html
Appropriately designing sampling policies is highly important for obtaining better control policies in reinforcement learning. In this paper, we first show that the least-squares policy iteration (LSPI) framework allows us to employ statistical active learning methods for linear regression. Then we propose a design method of good sampling policies for efficient exploration, which is particularly useful when the sampling cost of immediate rewards is high. We demonstrate the usefulness of the proposed method, named active policy iteration (API), through simulations with a batting robot.
Full text (corrected link): http://ijcai.org/papers09/Papers/IJCAI09-166.pdf
Capt.Drew
Posted: 21 Aug 09 7:11

=167=> p. 0986
Relational Random Forests Based on Random Relational Rules
Grant Anderson, Bernhard Pfahringer
http://ijcai.org/papers09/Abstracts/167.html
Random Forests have been shown to perform very well in propositional learning. FORF is an upgrade of Random Forests for relational data. In this paper we investigate shortcomings of FORF and propose an alternative algorithm, RF, for generating Random Forests over relational data. RF employs randomly generated relational rules as fully self-contained Boolean tests inside each node in a tree and thus can be viewed as an instance of dynamic propositionalization. The implementation of RF allows for the simultaneous or parallel growth of all the branches of all the trees in the ensemble in an efficient shared, but still single-threaded way. Experiments favorably compare RF to both FORF and the combination of static propositionalization together with standard Random Forests. Various strategies for tree initialization and splitting of nodes, as well as resulting ensemble size, diversity, and computational complexity of RF are also investigated.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-167.pdf
Capt.Drew
Posted: 21 Aug 09 7:11

=168=> p. 0992
Adaptive Cluster Ensemble Selection
Javad Azimi, Xiaoli Fern
http://ijcai.org/papers09/Abstracts/168.html
Cluster ensembles generate a large number of different clustering solutions and combine them into a more robust and accurate consensus clustering. On forming the ensembles, the literature has suggested that higher diversity among ensemble members produces higher performance gain. In contrast, some studies also indicated that medium diversity leads to the best performing ensembles. Such contradicting observations suggest that different data, with varying characteristics, may require different treatments. We empirically investigate this issue by examining the behavior of cluster ensembles on benchmark data sets. This leads to a novel framework that selects ensemble members for each data set based on its own characteristics. Our framework first generates a diverse set of solutions and combines them into a consensus partition P*. Based on the diversity between the ensemble members and P*, a subset of ensemble members is selected and combined to obtain the final output. We evaluate the proposed method on benchmark data sets and the results show that the proposed method can significantly improve the clustering performance, often by a substantial margin. In some cases, we were able to produce final solutions that significantly outperform even the best ensemble members.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-168.pdf
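The consensus step the abstract mentions is often built on a co-association matrix: each base clustering votes on whether two points belong together. The toy labelings below are made up for illustration (a real ensemble would come from, e.g., repeated k-means runs), and the greedy thresholded grouping is our minimal sketch, not the paper's selection framework.

```python
def co_association(labelings, n):
    """Fraction of base clusterings that put points i and j in the same cluster."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m

def consensus(labelings, n, threshold=0.5):
    """Consensus clusters = connected components of the thresholded co-association graph."""
    m = co_association(labelings, n)
    cluster = [-1] * n
    next_id = 0
    for i in range(n):
        if cluster[i] == -1:
            cluster[i] = next_id
            stack = [i]
            while stack:
                p = stack.pop()
                for q in range(n):
                    if cluster[q] == -1 and m[p][q] >= threshold:
                        cluster[q] = cluster[p]
                        stack.append(q)
            next_id += 1
    return cluster

# Three base clusterings of 6 points: the first four points mostly
# co-occur, the last two always form their own group.
base = [
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
print(consensus(base, 6))  # → [0, 0, 0, 0, 1, 1]
```

Point 3 sides with the first group in 2 of 3 base clusterings (co-association 2/3 ≥ 0.5), so the consensus absorbs it despite the disagreeing member.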
Capt.Drew
Posted: 21 Aug 09 7:11

=169=> p. 0998
Self-Managing Associative Memory for Dynamic Acquisition of Expertise in High-Level Domains
Jacob Beal
http://ijcai.org/papers09/Abstracts/169.html
Self-organizing maps can be used to implement an associative memory for an intelligent system that dynamically learns about new high-level domains over time. SOMs are an attractive option for implementing associative memory: they are fast, easily parallelized, and digest a stream of incoming data into a topographically organized collection of models where more frequent classes of data are represented by higher-resolution collections of models. Typically, the distribution of models in an SOM, once developed, remains fairly stable, but developing expertise in a new high-level domain requires altering the allocation of models. We use a mixture of analysis and empirical studies to characterize the behavior of SOMs for high-level associative memory, finding that new high-resolution collections of models develop quickly. High-resolution areas of the SOM decay rapidly unless actively refreshed, but in a large SOM, the ratio between growth rate and decay rate may be high enough to support both fast learning and long-term memory.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-169.pdf
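The allocation mechanism the abstract relies on can be seen in a minimal 1-D SOM: each input pulls its best-matching unit (BMU) and the BMU's lattice neighbours toward it, so regions that occur more often in the data end up claiming more model vectors. Sizes, rates, and the toy data here are illustrative choices of ours, not values from the paper.

```python
import random

def train_som(data, n_units=10, epochs=50, lr=0.3, radius=1):
    """Minimal 1-D self-organizing map over scalar inputs in [0, 1]."""
    random.seed(0)
    units = [random.random() for _ in range(n_units)]  # 1-D model vectors
    for _ in range(epochs):
        for x in data:
            # best-matching unit: the model vector closest to the input
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            # pull the BMU and its lattice neighbours toward the input
            for i in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
                units[i] += lr * (x - units[i])
    return sorted(units)

# Inputs concentrated near 0.2; units should migrate toward both input
# regions, with the denser region attracting more of them.
data = [0.2] * 8 + [0.8] * 2
units = train_som(data)
near_dense = sum(1 for u in units if abs(u - 0.2) < 0.1)
near_sparse = sum(1 for u in units if abs(u - 0.8) < 0.1)
print(near_dense, near_sparse)
```

The "decay unless refreshed" effect from the abstract is visible in the same loop: a unit near 0.8 that stops being a BMU for 0.8-inputs gets dragged toward 0.2 by its neighbours' updates.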
======================
=170=> p. 1004
Angluin-Style Learning of NFA
Benedikt Bollig, Peter Habermehl, Carsten Kern, Martin Leucker
http://ijcai.org/papers09/Abstracts/170.html
We introduce NL*, a learning algorithm for inferring non-deterministic finite-state automata using membership and equivalence queries. More specifically, residual finite-state automata (RFSA) are learned similarly as in Angluin's popular L* algorithm, which, however, learns deterministic finite-state automata (DFA). Like in a DFA, the states of an RFSA represent residual languages. Unlike a DFA, an RFSA restricts to prime residual languages, which cannot be described as the union of other residual languages. In doing so, RFSA can be exponentially more succinct than DFA. They are, therefore, the preferable choice for many learning applications. The implementation of our algorithm is applied to a collection of examples and confirms the expected advantage of NL* over L*.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-170.pdf
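The residual-language notion behind NL* is easy to make concrete: the residual of a language L with respect to a prefix u is {w : uw ∈ L}. A residual is "prime" if it is not a union of other residuals; RFSA keep states only for prime residuals, which is where their succinctness over DFA comes from. A tiny finite language (our own example) keeps the residuals as plain sets:

```python
def residual(language, prefix):
    """Words w such that prefix + w is in the (finite) language."""
    return {w[len(prefix):] for w in language if w.startswith(prefix)}

# L = {ab, ac, b}; take residuals with respect to prefixes of its words.
L = {"ab", "ac", "b"}
r_eps = residual(L, "")   # the language itself: {"ab", "ac", "b"}
r_a = residual(L, "a")    # {"b", "c"} - what can follow "a"
r_b = residual(L, "b")    # {""} - "b" alone is already a word
print(r_eps, r_a, r_b)
```

Here none of the three residuals is a union of the others, so all are prime; in larger languages some residuals do decompose, and an RFSA drops the corresponding states, while a DFA must keep every distinct residual as a state.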
Capt.Drew
Posted: 21 Aug 09 7:11

=171=> p. 1010
Locality Preserving Nonnegative Matrix Factorization
Deng Cai, Xiaofei He, Xuanhui Wang, Hujun Bao, Jiawei Han
http://ijcai.org/papers09/Abstracts/171.html
Matrix factorization techniques have been frequently applied in information processing tasks. Among them, Non-negative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data, whose representation may be parts-based in the human brain. On the other hand, from a geometric perspective, the data are usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation which uncovers the hidden topics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Locality Preserving Non-negative Matrix Factorization (LPNMF), for this purpose. For two data points, we use KL-divergence to evaluate their similarity on the hidden topics. The optimal maps are obtained such that the feature values on hidden topics are restricted to be non-negative and vary smoothly along the geodesics of the data manifold. Our empirical study shows encouraging results for the proposed algorithm in comparison to state-of-the-art algorithms on two large high-dimensional databases.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-171.pdf
Capt.Drew
Posted: 21 Aug 09 7:12

=172=> p. 1016
Selecting Informative Universum Sample for Semi-Supervised Learning
Shuo Chen, Changshui Zhang
http://ijcai.org/papers09/Abstracts/172.html
The Universum sample, defined as a sample that does not belong to any of the classes the learning task concerns, has been proved helpful in both supervised and semi-supervised settings. Previous works treat the Universum samples equally. Our research found that not all Universum samples are helpful, and we propose a method to pick the informative ones, i.e., in-between Universum samples. We also set up a new semi-supervised framework to incorporate the in-between Universum samples. Empirical experiments show that our method outperforms the former ones.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-172.pdf
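The "in-between" intuition can be sketched in one dimension: a Universum point is informative when it falls between the two classes rather than far from both. The scoring rule below (distance to the midpoint of the class means, normalized by half the gap) is our own illustration of that intuition, not the paper's actual selection criterion.

```python
def in_between_scores(pos, neg, universum):
    """Score Universum points: 1 at the midpoint of the class means, 0 at or beyond either mean."""
    mean_p = sum(pos) / len(pos)
    mean_n = sum(neg) / len(neg)
    mid = (mean_p + mean_n) / 2.0
    spread = abs(mean_p - mean_n) / 2.0
    return [max(0.0, 1.0 - abs(u - mid) / spread) for u in universum]

pos = [2.0, 2.2, 1.8]      # class +1 samples
neg = [-2.0, -1.8, -2.2]   # class -1 samples
univ = [0.1, 5.0, -4.0]    # Universum: one in-between point, two far away
scores = in_between_scores(pos, neg, univ)
best = univ[scores.index(max(scores))]
print(best)  # → 0.1 (the in-between sample scores highest)
```

The two outlying Universum points score zero and would be discarded; only the point lying between the classes would be kept for the semi-supervised learner.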
Capt.Drew
Posted: 21 Aug 09 7:12

=173=> p. 1022
Bayesian Extreme Components Analysis
Yutian Chen, Max Welling
http://ijcai.org/papers09/Abstracts/173.html
Extreme Components Analysis (XCA) is a statistical method based on a single eigenvalue decomposition to recover the optimal combination of principal and minor components in the data. Unfortunately, minor components are notoriously sensitive to overfitting when the number of data items is small relative to the number of attributes. We present a Bayesian extension of XCA by introducing a conjugate prior for the parameters of the XCA model. This Bayesian-XCA is shown to outperform plain vanilla XCA as well as Bayesian-PCA and XCA based on a frequentist correction to the sample spectrum. Moreover, we show that minor components are only picked when they represent genuine constraints in the data, even for very small sample sizes. An extension to mixtures of Bayesian XCA models is also explored.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-173.pdf
Capt.Drew
Posted: 21 Aug 09 7:12

=174=> p. 1028
Inverse Reinforcement Learning in Partially Observable Environments
Jaedeug Choi, Kee-Eung Kim
http://ijcai.org/papers09/Abstracts/174.html
Inverse reinforcement learning (IRL) is the problem of recovering the underlying reward function from the behaviour of an expert. Most of the existing algorithms for IRL assume that the expert's environment is modeled as a Markov decision process (MDP), although they should be able to handle partially observable settings in order to widen the applicability to more realistic scenarios. In this paper, we present an extension of the classical IRL algorithm by Ng and Russell to partially observable environments. We discuss technical issues and challenges, and present the experimental results on some of the benchmark partially observable domains.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-174.pdf
Capt.Drew
Posted: 21 Aug 09 7:12

=175=> p. 1034
Knowledge Driven Dimension Reduction For Clustering
Ian Davidson
http://ijcai.org/papers09/Abstracts/175.html
As A.I. algorithms are applied to more complex domains involving high-dimensional data sets, there is a need to represent the data more saliently. However, most dimension reduction approaches are driven by objective functions that may not suit, or only partially suit, the end user's requirements. In this work, we show how to incorporate general-purpose domain expertise, encoded as a graph, into dimension reduction in a way that lends itself to an elegant generalized eigenvalue problem. We call our approach Graph-Driven Constrained Dimension Reduction via Linear Projection (GCDR-LP) and show that it has several desirable properties.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-175.pdf
Capt.Drew
Posted: 21 Aug 09 7:12

=176=> p. 1040
Search Techniques for Fourier-Based Learning
Adam Drake, Dan Ventura
http://ijcai.org/papers09/Abstracts/176.html
Fourier-based learning algorithms rely on being able to efficiently find the large coefficients of a function's spectral representation. In this paper, we introduce and analyze techniques for finding large coefficients. We show how a previously introduced search technique can be generalized from the Boolean case to the real-valued case, and we apply it in branch-and-bound and beam search algorithms that have significant advantages over the best-first algorithm in which the technique was originally introduced.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-176.pdf
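The object these search techniques operate on is the Fourier (Walsh-Hadamard) spectrum of a Boolean function f: {0,1}^n → {-1,+1}: each coefficient is the average of f against a parity character, and the learning algorithms look for the subsets S with large coefficients. The brute-force enumeration below is exponential and only for intuition; the paper's contribution is precisely avoiding this enumeration with branch-and-bound and beam search.

```python
from itertools import product

def fourier_coefficient(f, n, S):
    """hat_f(S) = 2^-n * sum_x f(x) * (-1)^(sum of x_i for i in S)."""
    total = 0
    for x in product([0, 1], repeat=n):
        chi = (-1) ** sum(x[i] for i in S)  # parity character of subset S
        total += f(x) * chi
    return total / 2 ** n

def largest_coefficients(f, n, k=3):
    """Brute force: evaluate every subset and keep the k largest in magnitude."""
    results = []
    for indicator in product([0, 1], repeat=n):
        S = tuple(i for i in range(n) if indicator[i])
        results.append((S, fourier_coefficient(f, n, S)))
    return sorted(results, key=lambda p: -abs(p[1]))[:k]

# Parity of the first two bits: its entire spectrum sits on S = (0, 1).
parity = lambda x: (-1) ** (x[0] + x[1])
top = largest_coefficients(parity, 3)
print(top[0])  # → ((0, 1), 1.0)
```

For a parity function the spectrum is a single spike, so any sound search for large coefficients must find exactly this subset; noisy or real-valued targets spread mass over many subsets, which is where pruning bounds earn their keep.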
Capt.Drew
Posted: 21 Aug 09 7:12

=177=> p. 1046
Local Learning Regularized Nonnegative Matrix Factorization
Quanquan Gu, Jie Zhou
http://ijcai.org/papers09/Abstracts/177.html
Nonnegative Matrix Factorization (NMF) has been widely used in machine learning and data mining. It aims to find two nonnegative matrices whose product approximates the nonnegative data matrix well, which naturally leads to a parts-based representation. In this paper, we present a local learning regularized nonnegative matrix factorization (LLNMF) for clustering. It imposes an additional constraint on NMF: the cluster label of each point should be predictable from the points in its neighborhood. This constraint encodes both the discriminative information and the geometric structure, and is good at clustering data on manifolds. An iterative multiplicative updating algorithm is proposed to optimize the objective, and its convergence is guaranteed theoretically. Experiments on many benchmark data sets demonstrate that the proposed method outperforms NMF as well as many state-of-the-art clustering methods.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-177.pdf
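The plain-NMF building block that LLNMF extends is the Lee-Seung multiplicative update scheme, which keeps both factors nonnegative while decreasing the reconstruction error ||V − WH||. The sketch below implements only that baseline (the local-learning regularization term the paper adds is omitted); the initialization and the toy rank-1 matrix are our own choices.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates: V (n x m) ~ W (n x k) H (k x m)."""
    n, m = len(V), len(V[0])
    W = [[0.5] * k for _ in range(n)]
    # slight asymmetry in H breaks the symmetric initialization
    H = [[0.5 + 0.01 * (i + j) for j in range(m)] for i in range(k)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]  # rank-1 nonnegative matrix
W, H = nmf(V, k=1)
WH = matmul(W, H)
err = sum(abs(V[i][j] - WH[i][j]) for i in range(2) for j in range(2))
print(err)  # close to zero: the rank-1 factorization recovers V
```

Because the updates multiply by nonnegative ratios, W and H can never go negative, which is exactly the constraint that yields the parts-based representation the abstract describes; LLNMF adds an extra term to these same update ratios.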
Capt.Drew
Posted: 21 Aug 09 7:13

=178=> p. 1052
Learning Optimal Subsets with Implicit User Preferences
Yunsong Guo, Carla Gomes
http://ijcai.org/papers09/Abstracts/178.html
We study the problem of learning an optimal subset from a larger ground set of items, where the optimality criterion is defined by an unknown preference function. We model the problem as a discriminative structural learning problem and solve it using a Structural Support Vector Machine (SSVM) that optimizes a set accuracy performance measure representing set similarities. Our approach departs from previous approaches since we do not explicitly learn a pre-defined preference function. Experimental results on both a synthetic problem domain and a real-world face image subset selection problem show that our method significantly outperforms previous learning approaches for such problems.
Full text: http://ijcai.org/papers09/Papers/IJCAI09-178.pdf
Copyright © 2001-2022, www.gotai.net