GotAI.NET
Forum: Problems of Artificial Intelligence
Page 17 of 39
Topic: Re: Ai Drew :: IJCAI 09 :: International AI Conference, Late Summer 2009 - The Essentials in Brief
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:16
PART-4: MACHINE LEARNING:
=195=> 1156p
Large Margin Boltzmann Machines
Xu Miao, Rajesh P. N. Rao
http://ijcai.org/papers09/Abstracts/195.html
Boltzmann Machines are a powerful class of undirected graphical models. Originally proposed as artificial neural networks, they can be regarded as a type of Markov Random Field in which the connection weights between nodes are symmetric and learned from data. They are also closely related to recent models such as Markov logic networks and Conditional Random Fields. A major challenge for Boltzmann machines (as well as other graphical models) is speeding up learning for large-scale problems. The heart of the problem lies in efficiently and effectively approximating the partition function. In this paper, we propose a new efficient learning algorithm for Boltzmann machines that allows them to be applied to problems with large numbers of random variables. We introduce a new large-margin variational approximation to the partition function that allows Boltzmann machines to be trained using a support vector machine (SVM) style learning algorithm. For discriminative learning tasks, these large margin Boltzmann machines provide an alternative approach to structural SVMs. We show that these machines have low sample complexity and derive a generalization bound. Our results demonstrate that on multi-label classification problems, large margin Boltzmann machines achieve orders of magnitude faster performance than structural SVMs and also outperform structural SVMs on problems with large numbers of labels.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-195.pdf
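For reference, the object the paper approximates is the Boltzmann machine partition function; a minimal statement of the model in standard notation (mine, not copied from the paper):

```latex
P(\mathbf{x}) = \frac{1}{Z} \exp\Big(\sum_{i<j} w_{ij} x_i x_j + \sum_i b_i x_i\Big),
\qquad
Z = \sum_{\mathbf{x} \in \{0,1\}^n} \exp\Big(\sum_{i<j} w_{ij} x_i x_j + \sum_i b_i x_i\Big).
```

The sum defining Z runs over all 2^n joint configurations, which is what makes exact maximum-likelihood learning intractable and motivates the paper's large-margin variational approximation.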
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:16
PART-4: MACHINE LEARNING:
=196=> 1163p
Transfer Learning from Minimal Target Data by Mapping across Relational Domains
Lilyana Mihalkova, Raymond J. Mooney
http://ijcai.org/papers09/Abstracts/196.html
A central goal of transfer learning is to enable learning when training data from the domain of interest is limited. Yet, work on transfer across relational domains has so far focused on the case where there is a significant amount of target data. This paper bridges this gap by studying transfer when the amount of target data is minimal and consists of information about just a handful of entities. In the extreme case, only a single entity is known. We present the SR2LR algorithm that finds an effective mapping of predicates from a source model to the target domain in this setting and thus renders pre-existing knowledge useful to the target task. We demonstrate SR2LR's effectiveness in three benchmark relational domains on social interactions and study its behavior as information about an increasing number of entities becomes available.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-196.pdf
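SR2LR's actual procedure is not spelled out in the abstract; as a toy illustration of the general idea, the sketch below enumerates mappings from source predicates to target predicates and keeps the one whose mapped source clause is best supported by the handful of known target facts. All predicate names, facts, and the scoring rule here are invented for illustration.

```python
from itertools import permutations

# Schematic source-model clause: Advises(x, y) -> CoAuthor(x, y).
source_preds = ["Advises", "CoAuthor"]
target_preds = ["Manages", "CoWorker"]

# Minimal target data: facts about just a couple of entities.
target_facts = {
    ("Manages", "ann", "bob"),
    ("CoWorker", "ann", "bob"),
    ("CoWorker", "bob", "carl"),
}

def score(mapping):
    """Reward target facts that satisfy the mapped clause, penalize violations."""
    body, head = mapping["Advises"], mapping["CoAuthor"]
    s = 0
    for (pred, a, b) in target_facts:
        if pred == body:
            s += 1 if (head, a, b) in target_facts else -1
    return s

best = max((dict(zip(source_preds, perm)) for perm in permutations(target_preds)),
           key=score)
print(best)  # {'Advises': 'Manages', 'CoAuthor': 'CoWorker'}
```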
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:16
PART-4: MACHINE LEARNING:
=197=> 1169p
Semi-Supervised Learning of Visual Classifiers from Web Images and Text
Nicholas Morsillo, Christopher Pal, Randal Nelson
http://ijcai.org/papers09/Abstracts/197.html
The web holds tremendous potential as a source of training data for visual classification. However, web images must be correctly indexed and labeled before this potential can be realized. Accordingly, there has been considerable recent interest in collecting imagery from the web using image search engines to build databases for object and scene recognition research. While search engines can provide rough sets of image data, results are noisy and this leads to problems when training classifiers. In this paper we propose a semi-supervised model for automatically collecting clean example imagery from the web. Our approach includes both visual and textual web data in a unified framework. Minimal supervision is enabled by the selective use of generative and discriminative elements in a probabilistic model and a novel learning algorithm. We show through experiments that our model discovers good training images from the web with minimal manual work. Classifiers trained using our method significantly outperform analogous baseline approaches on the Caltech-256 dataset.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-197.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:16
PART-4: MACHINE LEARNING:
=198=> 1175p
Autonomously Learning an Action Hierarchy Using a Learned Qualitative State Representation
Jonathan Mugan, Benjamin Kuipers
http://ijcai.org/papers09/Abstracts/198.html
There has been intense interest in hierarchical reinforcement learning as a way to make Markov decision process planning more tractable, but there has been relatively little work on autonomously learning the hierarchy, especially in continuous domains. In this paper we present a method for learning a hierarchy of actions in a continuous environment. Our approach is to learn a qualitative representation of the continuous environment and then to define actions to reach qualitative states. Our method learns one or more options to perform each action. Each option is learned by first learning a dynamic Bayesian network (DBN). We approach this problem from a developmental robotics perspective. The agent receives no extrinsic reward and has no external direction for what to learn. We evaluate our work using a simulation with realistic physics that consists of a robot playing with blocks at a table.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-198.pdf
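The abstract's "qualitative representation" maps each continuous variable to a small set of discrete values defined by learned landmark values; a minimal sketch of that discretization (the landmark values, variable, and tolerance are invented, not taken from the paper):

```python
import bisect

TOL = 1e-3  # tolerance for "at a landmark" (illustrative value)

def qualitative_value(x, landmarks):
    """Map a continuous reading to a qualitative value: 'at' a landmark,
    or in one of the open intervals between landmarks."""
    for i, lm in enumerate(landmarks):
        if abs(x - lm) < TOL:
            return ("at", i)
    return ("between", bisect.bisect(landmarks, x))

# Hypothetical learned landmarks for a hand-to-block distance variable.
landmarks = [0.0, 0.12, 0.55]
print(qualitative_value(0.30, landmarks))  # ('between', 2)
print(qualitative_value(0.12, landmarks))  # ('at', 1)
```

Actions are then defined as "reach qualitative value v for variable x," and each such action is learned as one or more options whose models come from learned DBNs.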
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:16
PART-4: MACHINE LEARNING:
=199=> 1181p
Spectral Embedded Clustering
Feiping Nie, Dong Xu, Ivor Wai-Hung Tsang, Changshui Zhang
http://ijcai.org/papers09/Abstracts/199.html
In this paper, we propose a new spectral clustering method, referred to as Spectral Embedded Clustering (SEC), to minimize the normalized cut criterion in spectral clustering as well as control the mismatch between the cluster assignment matrix and the low dimensional embedded representation of the data. SEC is based on the observation that the cluster assignment matrix of high dimensional data can be represented by a low dimensional linear mapping of data. We also discover the connection between SEC and other clustering methods, such as spectral clustering, Clustering with Local and Global Regularization, K-means, and Discriminative K-means. The experiments on many real-world data sets show that SEC significantly outperforms the existing spectral clustering methods as well as K-means clustering related methods.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-199.pdf
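Schematically, "control the mismatch" means the relaxed cluster indicator matrix is tied to a linear embedding of the data; the objective is roughly of the following form (my reconstruction in standard spectral-clustering notation, not copied from the paper):

```latex
\min_{F^\top F = I,\; W,\; b}\;
\operatorname{tr}(F^\top L F)
\;+\; \mu \,\big\lVert X^\top W + \mathbf{1}b^\top - F \big\rVert_F^2
\;+\; \gamma \,\lVert W \rVert_F^2
```

Here L is the normalized graph Laplacian (the normalized-cut term), F is the relaxed cluster assignment matrix, and the second term forces F to stay close to a low-dimensional linear mapping of the data X, which is exactly the observation the abstract states.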
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=200=> 1187p
Domain Adaptation via Transfer Component Analysis
Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, Qiang Yang
http://ijcai.org/papers09/Abstracts/200.html
Domain adaptation solves a learning problem in a target domain by utilizing the training data in a different but related source domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a Reproducing Kernel Hilbert Space (RKHS) using Maximum Mean Discrepancy (MMD). In the subspace spanned by these transfer components, data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. The main contribution of our work is that we propose a novel feature representation in which to perform domain adaptation via a new parametric kernel using feature extraction methods, which can dramatically minimize the distance between domain distributions by projecting data onto the learned transfer components. Furthermore, our approach can handle large datasets and naturally leads to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-200.pdf
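The MMD criterion TCA minimizes is standard: the RKHS distance between the mean embeddings of the source and target samples. In the usual empirical form (the textbook definition, not specific to this paper):

```latex
\mathrm{MMD}(X_S, X_T) =
\Big\lVert \frac{1}{n_S}\sum_{i=1}^{n_S}\phi(x_i^S)
- \frac{1}{n_T}\sum_{j=1}^{n_T}\phi(x_j^T) \Big\rVert_{\mathcal{H}}^2
```

TCA looks for a small set of components such that this discrepancy between the projected source and target data is low while the data variance is preserved, so that a classifier trained on projected source data transfers to the target domain.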
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=201=> 1193p
Semi-Supervised Classification using Sparse Gaussian Process Regression
Amrish Patel, S. Sundararajan, Shirish Shevade
http://ijcai.org/papers09/Abstracts/201.html
Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems. They have also been used for semi-supervised learning tasks. In this paper, we propose a new algorithm for solving the semi-supervised binary classification problem using sparse GP regression (GPR) models. It is closely related to semi-supervised learning based on support vector regression (SVR) and maximum margin clustering. The proposed algorithm is simple and easy to implement. Unlike the SVR-based algorithm, it gives a sparse solution directly. Also, the hyperparameters are estimated easily, without resorting to expensive cross-validation. Use of the sparse GPR model makes the proposed algorithm scalable. Preliminary results on synthetic and real-world data sets demonstrate the efficacy of the new algorithm.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-201.pdf
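For context, these are the textbook GPR predictive equations that sparse GP models approximate (standard background, not the paper's algorithm): given a kernel matrix K over the training inputs, targets y, and noise variance sigma^2,

```latex
\mu(x_*) = k_*^\top (K + \sigma^2 I)^{-1} y,
\qquad
\sigma^2(x_*) = k(x_*, x_*) - k_*^\top (K + \sigma^2 I)^{-1} k_*
```

where k_* is the vector of kernel values between the test point x_* and the training inputs. The O(n^3) inversion is what sparse GPR models avoid, which is what makes the proposed semi-supervised algorithm scalable.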
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=202=> 1199p
Expanding Domain Sentiment Lexicon through Double Propagation
Guang Qiu, Bing Liu, Jiajun Bu, Chun Chen
http://ijcai.org/papers09/Abstracts/202.html
In most sentiment analysis applications, the sentiment lexicon plays a key role. However, it is hard, if not impossible, to collect and maintain a universal sentiment lexicon for all application domains because different words may be used in different domains. The main existing technique extracts such sentiment words from a large domain corpus based on different conjunctions and the idea of sentiment coherency in a sentence. In this paper, we propose a novel propagation approach that exploits the relations between sentiment words and topics or product features that the sentiment words modify, and also sentiment words and product features themselves to extract new sentiment words. As the method propagates information through both sentiment words and features, we call it double propagation. The extraction rules are designed based on relations described in dependency trees. A new method is also proposed to assign polarities to newly discovered sentiment words in a domain. Experimental results show that our approach is able to extract a large number of new sentiment words. The polarity assignment method is also effective.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-202.pdf
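The paper defines its extraction rules over full dependency trees; the toy sketch below shows the flavor of one propagation step in both directions (the rules, relation labels, and review snippets are simplified and invented for illustration):

```python
# Dependency edges as (head, relation, dependent) triples.
deps = [
    ("screen", "amod", "great"),   # "... great screen ..."
    ("battery", "amod", "great"),  # "... great battery ..."
    ("battery", "amod", "short"),  # "... short battery (life) ..."
]

seed_sentiment = {"great"}
features, new_sentiment = set(), set()

# Direction 1: a known sentiment word modifying a noun exposes a product feature.
for head, rel, dep in deps:
    if rel == "amod" and dep in seed_sentiment:
        features.add(head)

# Direction 2: other modifiers of an extracted feature become candidate
# sentiment words (this two-way flow is the "double propagation").
for head, rel, dep in deps:
    if rel == "amod" and head in features and dep not in seed_sentiment:
        new_sentiment.add(dep)

print(features)       # {'screen', 'battery'}
print(new_sentiment)  # {'short'}
```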
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=203=> 1205p
Goal-Driven Learning in the GILA Integrated Intelligence Architecture
Jainarayan Radhakrishnan, Santiago Ontanon, Ashwin Ram
http://ijcai.org/papers09/Abstracts/203.html
Goal Driven Learning (GDL) focuses on systems that determine by themselves what has to be learnt and how to learn it. Typically, GDL systems use meta-reasoning capabilities over a base reasoner, identifying learning goals and devising strategies. In this paper we present a novel GDL technique to deal with complex AI systems where the meta-reasoning module has to analyze the reasoning trace of multiple components with potentially different learning paradigms. Our approach works by distributing the generation of learning strategies among the different modules instead of centralizing it in the meta-reasoner. We implemented our technique in the GILA system, which works in the airspace task orders domain, and observed an increase in performance.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-203.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=204=> 1211p
Streamed Learning: One-Pass SVMs
Piyush Rai, Hal Daume, Suresh Venkatasubramanian
http://ijcai.org/papers09/Abstracts/204.html
We present a streaming model for large-scale classification (in the context of the 2-SVM) by leveraging connections between learning and computational geometry. The streaming model imposes the constraint that only a single pass over the data is allowed. The 2-SVM is known to have an equivalent formulation in terms of the minimum enclosing ball (MEB) problem, and an efficient algorithm based on the idea of core sets exists (CVM) [Tsang et al., 2005]. CVM learns a (1+ε)-approximate MEB for a set of points and yields an approximate solution to the corresponding SVM instance. However, CVM works in batch mode, requiring multiple passes over the data. This paper presents a single-pass SVM which is based on the minimum enclosing ball of streaming data. We show that the MEB updates for the streaming case can be easily adapted to learn the SVM weight vector in a way similar to using online stochastic gradient updates. Our algorithm performs polylogarithmic computation at each example, and requires very small and constant storage. Experimental results show that, even in such restrictive settings, we can learn efficiently in just one pass and get accuracies comparable to other state-of-the-art SVM solvers (batch and online). We also give an analysis of the algorithm, and discuss some open issues and possible extensions.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-204.pdf
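To make the MEB connection concrete, here is the classic streaming minimum-enclosing-ball heuristic (a well-known approximation; the paper's actual update additionally maintains the SVM weight vector, which is not shown here):

```python
import numpy as np

def streaming_meb(points):
    """One-pass MEB approximation: when a point falls outside the current
    ball, move the center toward it and grow the radius just enough to
    cover it. Constant storage, one cheap update per example."""
    it = iter(points)
    center = np.asarray(next(it), dtype=float)
    radius = 0.0
    for p in it:
        p = np.asarray(p, dtype=float)
        dist = np.linalg.norm(p - center)
        if dist > radius:                        # p is outside the ball
            delta = (dist - radius) / 2.0
            center += delta * (p - center) / dist
            radius += delta
    return center, radius

center, radius = streaming_meb([[0, 0], [2, 0], [1, 3]])
print(center, radius)  # a ball covering all three points
```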
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:17
PART-4: MACHINE LEARNING:
=205=> 1217p
Semi-Supervised Metric Learning Using Pairwise Constraints
Mahdieh Soleymani Baghshah, Saeed Bagheri Shouraki
http://ijcai.org/papers09/Abstracts/205.html
The distance metric plays an important role in many machine learning algorithms. Recently, metric learning for semi-supervised algorithms has received much attention. For semi-supervised clustering, usually a set of pairwise similarity and dissimilarity constraints is provided as supervisory information. Until now, various metric learning methods utilizing pairwise constraints have been proposed. The existing methods that can consider both positive (must-link) and negative (cannot-link) constraints find linear transformations or, equivalently, global Mahalanobis metrics. Additionally, they find metrics only according to the data points appearing in constraints (without considering other data points). In this paper, we consider the topological structure of data along with both positive and negative constraints. We propose a kernel-based metric learning method that provides a non-linear transformation. Experimental results on synthetic and real-world data sets show the effectiveness of our metric learning method.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-205.pdf
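The "global Mahalanobis metric" the abstract contrasts against is the standard parameterized distance (textbook definition, not from the paper):

```latex
d_A(x_i, x_j) = \sqrt{(x_i - x_j)^\top A \,(x_i - x_j)}, \qquad A \succeq 0
```

Since a positive semidefinite A factors as A = M^T M, this is equivalent to Euclidean distance after the linear map x -> Mx; replacing that linear map with a kernel-induced non-linear one is the step this paper takes.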
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:18
PART-4: MACHINE LEARNING:
=206=> 1223p
Predictive Projections
Nathan Sprague
http://ijcai.org/papers09/Abstracts/206.html
This paper addresses the problem of learning control policies in very high dimensional state spaces. We propose a linear dimensionality reduction algorithm that discovers predictive projections: projections in which accurate predictions of future states can be made using simple nearest neighbor style learning. The goal of this work is to extend the reach of existing reinforcement learning algorithms to domains where they would otherwise be inapplicable without extensive engineering of features. The approach is demonstrated on a synthetic pendulum balancing domain, as well as on a robot domain requiring visually guided.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-206.pdf
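One way to read "projections in which accurate predictions of future states can be made using simple nearest neighbor style learning" is as a scoring criterion over candidate linear projections; the sketch below evaluates a projection by leave-one-out nearest-neighbor prediction of the next projected state (an illustrative criterion in my notation, not the paper's exact objective):

```python
import numpy as np

def nn_prediction_error(A, S, S_next):
    """Score projection A (k x d): project states, then predict each state's
    next projected state as the next state of its nearest neighbor, and
    accumulate squared error (leave-one-out)."""
    Z, Z_next = S @ A.T, S_next @ A.T
    err = 0.0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                  # exclude the query point itself
        j = int(np.argmin(d))          # nearest neighbor in projected space
        err += np.linalg.norm(Z_next[j] - Z_next[i]) ** 2
    return err / len(Z)

# Tiny usage example with random transitions (d=10 state dims, k=2).
rng = np.random.default_rng(0)
S = rng.normal(size=(50, 10))
S_next = S + 0.1 * rng.normal(size=(50, 10))
A = rng.normal(size=(2, 10))
print(nn_prediction_error(A, S, S_next))
```

A lower score means the projection preserves the dynamics well enough for nearest-neighbor prediction, which is the property a reinforcement learner needs in the reduced space.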
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:18
PART-4: MACHINE LEARNING:
=207=> 1230p
On the Equivalence Between Canonical Correlation Analysis and Orthonormalized Partial Least Squares
Liang Sun, Shuiwang Ji, Shipeng Yu, Jieping Ye
http://ijcai.org/papers09/Abstracts/207.html
Canonical correlation analysis (CCA) and partial least squares (PLS) are well-known techniques for feature extraction from two sets of multi-dimensional variables. The fundamental difference between CCA and PLS is that CCA maximizes the correlation while PLS maximizes the covariance. Although both CCA and PLS have been applied successfully in various applications, the intrinsic relationship between them remains unclear. In this paper, we attempt to address this issue by showing the equivalence relationship between CCA and orthonormalized partial least squares (OPLS), a variant of PLS. We further extend the equivalence relationship to the case when regularization is employed for both sets of variables. In addition, we show that the CCA projection for one set of variables is independent of the regularization on the other set of variables. We have performed experimental studies using both synthetic and real data sets and our results confirm the established equivalence relationship. The presented analysis provides novel insights into the connection between these two existing algorithms as well as the effect of the regularization.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-207.pdf
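The "correlation vs. covariance" distinction the abstract names is visible directly in the two standard one-component objectives (textbook forms, with C_{xy} the cross-covariance and C_{xx}, C_{yy} the within-set covariances):

```latex
\text{CCA:}\quad \max_{w_x, w_y}\;
\frac{w_x^\top C_{xy} w_y}
{\sqrt{w_x^\top C_{xx} w_x}\,\sqrt{w_y^\top C_{yy} w_y}},
\qquad
\text{PLS:}\quad \max_{\lVert w_x \rVert = \lVert w_y \rVert = 1}\;
w_x^\top C_{xy} w_y
```

CCA normalizes by the within-set variances (correlation), PLS only by the norms of the projection directions (covariance); OPLS constrains the projections to be orthonormal, and the paper shows that this variant coincides with CCA.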
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:18
PART-4: MACHINE LEARNING:
=208=> 1236p
Latent Variable Perceptron Algorithm for Structured Classification
Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, Jun'ichi Tsujii
http://ijcai.org/papers09/Abstracts/208.html
We propose a perceptron-style algorithm for fast discriminative training of structured latent variable models. This method extends the perceptron algorithm to learning with latent dependencies, as an alternative to existing probabilistic latent variable models. It relies on Viterbi decoding over latent variables, combined with simple additive updates. Its training cost is significantly lower than that of probabilistic latent variable models, while it gives comparable or even superior classification accuracy on our tasks. Experiments on natural language processing problems demonstrate that its results are competitive with the best reported on the corresponding data sets.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-208.pdf
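"Viterbi decoding over latent variables, combined with simple additive updates" matches the generic latent structured perceptron; in its standard form (a standard formulation consistent with the abstract, not copied from the paper), for an example (x, y) with latent structure h and feature map Phi:

```latex
h^{*} = \arg\max_{h}\; w \cdot \Phi(x, y, h),
\qquad
(\hat{y}, \hat{h}) = \arg\max_{y', h}\; w \cdot \Phi(x, y', h),
```

and if the predicted label is wrong, the additive update is w <- w + Phi(x, y, h*) - Phi(x, y_hat, h_hat). Both argmaxes are computed by Viterbi decoding, and there is no partition function to evaluate, which is where the training-cost advantage over probabilistic latent variable models comes from.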
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:18
PART-4: MACHINE LEARNING:
=209=> 1243p
Succinct Approximate Counting of Skewed Data
David Talbot
http://ijcai.org/papers09/Abstracts/209.html
Practical data analysis relies on the ability to count observations of objects succinctly and efficiently. Unfortunately, the space usage of an exact estimator grows with the size of the a priori set from which objects are drawn, while the time required to maintain such an estimator grows with the size of the data set. We present static and on-line approximation schemes that avoid these limitations when approximate frequency estimates are acceptable. Our Log-Frequency Sketch extends the approximate counting algorithm of Morris [Morris, 1978] to estimate frequencies with bounded relative error via a single pass over a data set. It uses constant space per object when the frequencies follow a power law and can be maintained in constant time per observation. We give an (epsilon, delta)-approximation scheme, which we verify empirically on a large natural language data set where, for instance, 95 percent of frequencies are estimated with relative error less than 0.25 using fewer than 11 bits per object in the static case and 15 bits per object on-line.
Full text:
http://ijcai.org/papers09/Papers/IJCAI09-209.pdf
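The Morris counter the sketch builds on is easy to state: store only an exponent c, increment it probabilistically, and read the count back as roughly 2^c. A minimal sketch (the Log-Frequency Sketch itself adds error bounds and per-object storage, which are not shown here):

```python
import random

class MorrisCounter:
    """Morris's approximate counter [Morris, 1978]: keep only the exponent
    c (about log log n bits), incrementing it with probability 2^-c."""
    def __init__(self):
        self.c = 0

    def observe(self):
        # The larger the count so far, the rarer the increment.
        if random.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        return 2 ** self.c - 1  # unbiased estimator of the true count

mc = MorrisCounter()
for _ in range(10_000):
    mc.observe()
print(mc.estimate())  # close to 10,000 on average across runs
```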